• How Do You Teach an AI Model to Reason? With Humans

    AI models are advancing at a rapid rate and scale.
    But what might they lack that humans don’t? Common sense: an understanding, developed through real-world experiences, that birds can’t fly backwards, mirrors are reflective and ice melts into water.
    While such principles seem obvious to humans, they must be taught to AI models tasked with accurately answering complex questions and navigating unpredictable physical environments, such as industrial warehouses or roads.
    NVIDIA is tackling this challenge by developing a set of tests to coach AI models on the limitations of the physical world. In other words, to teach AI common sense.
    These tests are used to develop reasoning models such as NVIDIA Cosmos Reason, an open reasoning vision language model (VLM) for physical AI applications that is proficient in generating temporally grounded responses. Cosmos Reason just topped the physical reasoning leaderboard on Hugging Face.
    Cosmos Reason is unique compared with previous VLMs as it’s designed to accelerate physical AI development for fields such as robotics, autonomous vehicles and smart spaces. The model can infer and reason through unprecedented scenarios using physical common-sense knowledge.
    For models to understand complex environments — including industrial spaces and laboratories — they must start small. For example, in the test depicted below, the Cosmos Reason model is tasked with answering a multiple-choice question about the relative motion in the video:


    https://blogs.nvidia.com/wp-content/uploads/2025/08/ModelReasoning_DrivingExample.mp4
    Example from Cosmos Reason evaluation dataset
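Concretely, an evaluation item like this can be thought of as a video reference, a question, a set of labeled choices and a ground-truth answer. The field names and sample content below are illustrative only, not the actual Cosmos Reason dataset schema:

```python
# Illustrative structure for one multiple-choice evaluation item.
# Field names and content are hypothetical, not the real dataset schema.
qa_item = {
    "video": "driving_example.mp4",  # clip the question refers to
    "question": "Relative to the camera, which way is the white car moving?",
    "choices": {
        "A": "Toward the camera",
        "B": "Away from the camera",
        "C": "Left to right",
        "D": "It is stationary",
    },
    "answer": "B",  # ground-truth label
}

# A well-formed item has exactly the four labeled choices and a valid key.
assert set(qa_item["choices"]) == {"A", "B", "C", "D"}
assert qa_item["answer"] in qa_item["choices"]
```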
    What Does Reasoning Look Like for an AI Model? 
    To develop their reasoning capabilities, NVIDIA models are being taught physical common sense about the real world via reinforcement learning.
    For example, robots don’t intuitively know which way is left, right, up or down. They’re taught these spatial-temporal limitations through training. AI-powered robots used in safety testing, such as vehicle crash testing, must be taught to be aware of how their physical forms interact with their surroundings.
    Without embedding common sense into the training of these robots, issues can arise in deployment.
    “Without basic knowledge about the physical world, a robot may fall down or accidentally break something, causing danger to the surrounding people and environment,” said Yin Cui, a Cosmos Reason research scientist at NVIDIA.
    Distilling human common sense about the physical world into models is how NVIDIA is bringing about the next generation of AI.
    Enter the NVIDIA data factory team: a group of global analysts who come from various backgrounds — including bioengineering, business and linguistics. They’re working to develop, analyze and compile hundreds of thousands of data units that will be used to train generative AI models on how to reason.
    The Data Curation Process
    One of the NVIDIA data factory team’s projects focuses on the development of world foundation models for physical AI applications. These models are deep learning neural networks that generate simulated virtual environments, offering a safer and more effective way to train reasoning models than relying on real-world trial and error.
    It all starts with an NVIDIA annotation group that creates question-and-answer pairs based on video data. These videos are all from the real world and can include any type of footage, whether depicting chickens walking around in their coop or cars driving on a rural road.
    For example, an annotator might ask about the video below: “The person uses which hand to cut the spaghetti?”

    https://blogs.nvidia.com/wp-content/uploads/2025/08/ModelReasoning_SpaghettiExample.mp4
    Example from Cosmos Reason evaluation dataset
    The annotators then come up with four multiple-choice answers labeled A, B, C and D. The model is fed the data and has to reason and choose the correct answer.
    “We’re basically coming up with a test for the model,” said Cui. “All of our questions are multiple choice, like what students would see on a school exam.”
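The “school exam” framing maps naturally onto a simple grading loop: show each item to the model, take its letter choice and compare it against the answer key. A minimal sketch, where the `predict` callable is a hypothetical stand-in for an actual model inference call:

```python
def grade(items, predict):
    """Score a model on multiple-choice items.

    items   -- list of (question, choices, answer_key) tuples
    predict -- callable returning a letter choice; here a hypothetical
               stand-in for a real model call
    """
    correct = sum(1 for q, choices, key in items if predict(q, choices) == key)
    return correct / len(items)

# Toy example: a "model" that always answers "A" gets one of two right.
items = [("Q1", "ABCD", "A"), ("Q2", "ABCD", "B")]
accuracy = grade(items, lambda q, c: "A")
assert accuracy == 0.5
```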
    These question-and-answer pairs are then quality checked by NVIDIA analysts, such as Michelle Li.
    Li has a background in public health and data analytics, which allows her to look at the broader purpose of the data she analyzes.
    “For physical AI, we have a specific goal of wanting to train models on understanding the physical world, which helps me think about the bigger picture when I’m looking at the Q&A pairs and the types of questions that are being presented,” Li said. “I ask myself, do the Q&A pairs that I’m looking at align with our objectives for the guidelines that we have for the project?”
    After this, the data is reviewed by the data factory leads of the project, who make sure it’s up to quality standards and ready to be sent to the Cosmos Reason research team. The scientists then feed the hundreds of thousands of data units — in this case, the Q&A pairs — to the model, training it with reinforcement learning on the bounds and limitations of the physical world.
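In reinforcement learning terms, each Q&A pair yields a scalar reward signal; the simplest shape is binary, 1 for the correct letter and 0 otherwise. A hedged sketch of such a reward function (the article does not detail the actual training reward, which may well be more elaborate):

```python
def multiple_choice_reward(model_answer: str, answer_key: str) -> float:
    """Binary reward for one Q&A pair: 1.0 if the model picked the
    correct letter, else 0.0. This is the simplest possible reward
    shape; the actual Cosmos Reason training recipe may differ."""
    return 1.0 if model_answer.strip().upper() == answer_key.upper() else 0.0

# Letter comparison is case- and whitespace-insensitive.
assert multiple_choice_reward(" b ", "B") == 1.0
assert multiple_choice_reward("C", "B") == 0.0
```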
    What Are the Applications of Reasoning AI? 
    Reasoning models are exceptional because they can make sense of their temporal space as well as predict outcomes. They can analyze a situation, come up with a thought web of probable outcomes and infer the most likely scenario.
    Simply put, reasoning AI demonstrates humanlike thinking. It shows its work, giving the user insight into the logic behind its responses.
    Users can ask these models to analyze a video, such as one of two cars driving on a road. When asked a question like, “What would happen if the cars were driving toward each other in the same lane?” the model can reason and determine the most probable outcome of the proposed scenario — for example, a car crash.
    “We’re building a pioneering reasoning model focused on physical AI,” said Tsung-Yi Lin, a principal research scientist on the Cosmos Reason team at NVIDIA.
    As NVIDIA continues advancing its reasoning models, the data factory team’s ability to produce high-quality data will be imperative for developing intelligent autonomous agents and physical AI systems that can safely interact with the real world.
    Preview NVIDIA Cosmos-Reason1 or download the model on Hugging Face and GitHub.
    blogs.nvidia.com
  • Designing Atmospheric WWI Plane Crash Scene In Abandoned German Asylum

    Introduction
    Hi everyone, I'm Leandro Grasso, a 3D Environment Artist from Sicily. My journey into 3D art began after the COVID period, sparked by my passion for landscape photography. Recently, I completed a mentorship with Jeremy Cerisy, during which I significantly improved my environment creation skills. I learned a lot and was able to apply that knowledge to my most recent project. As a freelance artist, I've contributed to a couple of NDA projects, and I'm currently working on an environment for an indie video game scheduled for release later this year.
    Planning
    Under the direction of my mentor, I scouted for real-life locations and imagined how they could be interpreted for a video game environment, rather than starting from a concept. My main goal was to improve my skills in creating destroyed environments, learning how to handle damaged walls, cracked pavements, and abandoned objects. So, I decided to create an old abandoned asylum in Germany and added a crashed World War I aircraft to introduce new challenges and storytelling opportunities. Through this combination, I aimed to study destruction while also suggesting a narrative about what might have happened at the site after the crash. Below, you can see some of the references I used for the asylum and how I planned it.
    Blockout & Composition
    I started with a simple blockout in Unreal Engine 5. While building the blockout, I frequently used the mannequin to ensure proper proportions. Once the basic layout was in place, I placed several cameras to find the best compositions and give the environment the right sense of depth, especially considering the limited space available for movement. After that, I exported the entire blockout to Blender and began dividing it into different pieces to plan out the modules and props.
    I was able to properly plan these elements after creating an advanced blockout, where I also applied some basic textures to see how the environment reacted to different colors and materials.
    Asset Production Workflow
    Once the blockout was complete, I started modeling the modular pieces based on the needs of the environment. I created modules of various sizes, ranging from 1 to 4 meters, for the main elements like simple walls. For more complex parts, such as the stair walls, I took a different approach and created larger, non-repeating modules. Speaking of modules, I want to highlight the destroyed wall caused by the aircraft crash. I used a Boolean operation to cut out the damaged section of the wall and the wood. After that, I created individual bricks and placed them along the broken edges to add more realism and detail. Connected to that wall, the modular stairs I created were designed to fit the ideal layout of a game level. To maintain the correct proportions, I used the default stairs in Unreal as a reference and then modeled them in Blender.
    As for the railing, to save time, I first broke it down into main components and created instances of those pieces. Once the entire railing was modeled and the UVs were ready, I made the instances real so I could unwrap all the pieces in one go. After unwrapping, I moved the UV islands randomly to introduce variation during the texturing phase.
    For the vegetation, I used assets from Quixel Megascans. Since the pack didn’t include vertical vegetation, I sourced a different ivy asset that contained vertical elements. I removed the leaves and kept only the branches. Then, using a particle system, I added the correct leaves onto the vertical branches, scattering them only at the tips by using a vertex group. Here are the vertical assets I created, with a small detail asset shown in the top left.
    Regarding the assets, I didn't use high-to-low poly baking in this project.
    Instead, I modeled everything in mid-poly to save time while still maintaining good visual quality.
    One of the biggest challenges was modeling the destroyed World War I aircraft. As a junior artist, it was my first time working on a damaged vehicle. I began by modeling the aircraft fully intact and then manually destroyed it piece by piece to achieve a more realistic and intentional look. To guide me through the process, I looked to industry professionals for inspiration. I found some amazing vehicle models by Pavlo Panchenko for S.T.A.L.K.E.R. 2: Heart of Chornobyl on ArtStation. Being able to study his work helped me a lot, not just technically, but also in defining the artistic direction for my own piece.
    Last but not least, I wanted to talk about the broken glass pieces I created. I made them in ZBrush, starting with a random image of broken glass I found on Google. I brought the image into Photoshop, converted it to black and white, and increased the contrast to make the cracks more visible. Then, I imported the image into ZBrush, subdivided a plane several times, and used the image as a mask. I hid the unnecessary parts and deleted them, keeping only the masked glass shapes. After that, I decimated the mesh to reach an acceptable polycount, imported it into Blender, and created the UVs.
    All UVs were unwrapped in Blender. I used Texel Density Checker to set a texel density of 512 px/m with a texture size of 2048. For this project, I used three UV channels: the first for the RGB mask, the second for tileable textures, to maintain high quality during the texturing phase, and the third for additional normal maps where needed. This setup allowed me to reuse the same textures, such as metal, rust, and wood, across both modules and assets. I also used RGB masks for the assets, so the UV islands were specifically packed into that channel.
    Texturing
    For the texturing, I wanted to experiment with a workflow I hadn't tried before.
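As a quick aside on the texel-density setup described above: the two numbers together determine how much world-space area a texture covers before it repeats. At 512 px/m, a 2048 px tile spans exactly 4 meters. The arithmetic is trivial but worth sanity-checking when planning modules:

```python
# Texel density sanity check for the setup described above:
# 512 px per meter of surface, with a 2048 px texture.
texture_size_px = 2048
texel_density_px_per_m = 512

# World-space span covered by one tile before it repeats.
coverage_m = texture_size_px / texel_density_px_per_m
assert coverage_m == 4.0  # a 2K tile covers 4 m x 4 m at this density
```

This is why 1-to-4-meter wall modules pair naturally with a 2K tileable texture at this density: a single tile spans even the largest module without repeating.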
    The entire project was textured using Vertex Painting, RGB Masks, and tileable textures. I didn't use any unique baked textures. Tileable textures allowed me to maintain high quality even on large modules and props. Vertex Painting was used to add variation across surfaces, while RGB Masks provided additional layers of variation, especially on props. I also used decals and normal edge decals to add extra detail and break up the surfaces further.
    Below, you can see my master material setup, which includes Parallax, Vertex Color blending with a HeightLerp node, and RGB Mask blending using a simple Lerp node. All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created specifically for this project. I made these two textures from scratch in Substance 3D Designer.
    I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it. Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. Once that was complete, I rendered the shape in orthographic view with a black background and a white emissive material applied to the metal. I then cleaned up the render in Photoshop and brought it into Substance 3D Designer, where I used it as a mask to create the final stained glass texture. Once my textures were ready, I used a pre-made master material from the Advanced Glass Material Pack, free on FAB, and customized it to suit the needs of my stained glass.
    For the normal edge decals, I improved my workflow compared to my previous project by sculpting four different corner variations. Once the sculpts were complete, I imported them separately and baked them in Substance 3D Painter to avoid halos on the edges of the bakes. This approach allowed me to skip any cleanup in Photoshop.
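For readers unfamiliar with the HeightLerp blending mentioned above, its effect can be sketched outside Unreal: instead of lerping purely on the painted vertex alpha, a height map biases and sharpens the transition, so the second material appears first in the crevices or peaks. The following is a rough approximation of the idea in Python, not Epic's exact node implementation; the formula and `contrast` parameter are my own illustrative choices:

```python
def clamp01(x):
    """Clamp a value to the [0, 1] range (saturate)."""
    return max(0.0, min(1.0, x))

def height_lerp(a, b, height, vertex_alpha, contrast=4.0):
    """Blend values a and b, using a height-map sample to sharpen the
    vertex-painted transition. Approximates the spirit of UE's
    HeightLerp node; the exact engine formula may differ.

    height       -- height-map sample in [0, 1]
    vertex_alpha -- painted blend weight in [0, 1]
    contrast     -- how hard the height map shapes the blend edge
    """
    t = clamp01((vertex_alpha - (1.0 - height)) * contrast + 0.5)
    return a + (b - a) * t

# With no paint the base material dominates; full paint reveals the second.
assert height_lerp(0.0, 1.0, height=0.5, vertex_alpha=0.0) == 0.0
assert height_lerp(0.0, 1.0, height=0.5, vertex_alpha=1.0) == 1.0
```

The practical payoff is that painted transitions follow surface detail (mortar lines, plaster cracks) rather than reading as soft airbrushed gradients.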
    I only used Photoshop to combine all the baked corners into a single normal texture, as shown below. Last but not least, I'm really happy with how this decal turned out in the project. When I saw it in the main reference, I immediately knew I wanted to include it in my environment. I imported the reference image into Photoshop, straightened it using the Warp tool, and used the Clone Stamp and Content-Aware Fill to fix some damaged areas. Then, I took a screenshot of the wall in Unreal Engine with only the albedo visualization enabled, and used it in Photoshop as the base layer for the mural. I tweaked the blending modes to extract imperfections from the albedo texture and created a custom mask with brush strokes to blend the mural naturally into the wall. This is the result.
    Composition
    When it comes to composition, my background in photography helped me a lot with setting up cameras. I defined a few key shots early on and added more as the environment progressed and came together. Since I was working on an indoor scene, I chose to use a wide-angle lens to capture more of the space, and also included a zoomed-in shot, like the one of the wheelchair, to create a stronger sense of depth. To support the composition, I scattered various details throughout the environment, such as debris, papers, small pieces of glass, and other elements to enhance storytelling and realism.
    Lighting
    For the lighting, I used an add-on for Unreal Engine called Ultra Dynamic Sky to give the scene a natural base lighting pass. After that, I added Rect Lights to emphasize certain areas of the environment, slightly tweaking their indirect lighting bounces. I also placed some ivy in front of the spotlights to fake subtle shadow patterns and add more visual interest. For color grading, I used a LUT. I first rendered a single frame and imported it into DaVinci Resolve, where I applied a LUT I liked.
    Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default. For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality.
    Conclusion
    And here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete. Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn’t sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way. One thing I really enjoyed was understanding the connection between level design and environment art; it's fascinating to create a space that not only looks good but also serves gameplay. I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice; instead, focus on the overall composition and visual impact, especially from the player's point of view.
    My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there’s a real risk of dragging the project out much longer than necessary.
    As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating. Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for their extra feedback during my spare time. And finally, thank you to everyone who made it this far and showed interest in my project!
    Leandro Grasso, 3D Environment Artist
    Interview conducted by Emma Collins
    #designing #atmospheric #wwi #plane #crash
    Designing Atmospheric WWI Plane Crash Scene In Abandoned German Asylum
    IntroductionHi everyone, I'm Leandro Grasso, a 3D Environment Artist from Sicily. My journey into 3D art began after the COVID period, sparked by my passion for landscape photography. Recently, I completed a mentorship with Jeremy Cerisy, during which I significantly improved my environment creation skills. I learned a lot and was able to apply that knowledge to my most recent project. As a freelance artist, I've contributed to a couple of NDA projects, and I'm currently working on an environment for an indie video game scheduled for release later this year. PlanningUnder the direction of my mentor, I scouted for real-life locations and imagined how they could be interpreted for a video game environment, rather than starting from a concept. My main goal was to improve my skills in creating destroyed environments, learning how to handle damaged walls, cracked pavements, and abandoned objects.So, I decided to create an old abandoned asylum in Germany and added a crashed World War I aircraft to introduce new challenges and storytelling opportunities. Through this combination, I aimed to study destruction while also suggesting a narrative about what might have happened at the site after the crash. Below, you can see some of the references I used for the asylum and how I planned it. Blockout & CompositionI started with a simple blockout in Unreal Engine 5. While building the blockout, I frequently used the mannequin to ensure proper proportions. Once the basic layout was in place, I placed several cameras to find the best compositions and give the environment the right sense of depth, especially considering the limited space available for movement.After that, I exported the entire blockout to Blender and began dividing it into different pieces to plan out the modules and props. 
I was able to properly plan these elements after creating an advanced blockout, where I also applied some basic textures to see how the environment reacted to different colors and materials.

Asset Production Workflow

Once the blockout was complete, I started modeling the modular pieces based on the needs of the environment. I created modules of various sizes, ranging from 1 to 4 meters, for the main elements like simple walls. For more complex parts, such as the stair walls, I took a different approach and created larger, non-repeating modules.

Speaking of modules, I want to highlight the destroyed wall caused by the aircraft crash. I used a Boolean operation to cut out the damaged section of the wall and the wood. After that, I created individual bricks and placed them along the broken edges to add more realism and detail. Connected to that wall, the modular stairs I created were designed to fit the ideal layout of a game level. To maintain the correct proportions, I used the default stairs in Unreal as a reference and then modeled them in Blender.

As for the railing, to save time, I first broke it down into main components and created instances of those pieces. Once the entire railing was modeled and the UVs were ready, I made the instances real so I could unwrap all the pieces in one go. After unwrapping, I moved the UV islands randomly to introduce variation during the texturing phase.

For the vegetation, I used assets from Quixel Megascans. Since the pack didn't include vertical vegetation, I sourced a different ivy asset that contained vertical elements. I removed the leaves and kept only the branches. Then, using a particle system, I added the correct leaves onto the vertical branches, scattering them only at the tips by using a vertex group. Here are the vertical assets I created, with a small detail asset shown in the top left. Regarding the assets, I didn't use high-to-low poly baking in this project.
Instead, I modeled everything in mid-poly to save time while still maintaining good visual quality.

One of the biggest challenges was modeling the destroyed World War I aircraft. As a junior artist, it was my first time working on a damaged vehicle. I began by modeling the aircraft fully intact and then manually destroyed it piece by piece to achieve a more realistic and intentional look. To guide me through the process, I looked to industry professionals for inspiration. I found some amazing vehicle models by Pavlo Panchenko for S.T.A.L.K.E.R. 2: Heart of Chornobyl on ArtStation. Being able to study his work helped me a lot, not just technically, but also in defining the artistic direction for my own piece.

Last but not least, I wanted to talk about the broken glass pieces I created. I made them in ZBrush, starting with a random image of broken glass I found on Google. I brought the image into Photoshop, converted it to black and white, and increased the contrast to make the cracks more visible. Then, I imported the image into ZBrush, subdivided a plane several times, and used the image as a mask. I hid the unnecessary parts and deleted them, keeping only the masked glass shapes. After that, I decimated the mesh to reach an acceptable polycount, imported it into Blender, and created the UVs.

All UVs were unwrapped in Blender. I used Texel Density Checker to set a texel density of 512 px/m with a texture size of 2048. For this project, I used three UV channels: the first for the RGB mask, the second for tileable textures, to maintain high quality during the texturing phase, and the third for additional normal maps where needed. This setup allowed me to reuse the same textures, such as metal, rust, and wood, across both modules and assets. I also used RGB masks for the assets, so the UV islands were specifically packed into that channel.

Texturing

For the texturing, I wanted to experiment with a workflow I hadn't tried before.
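As a quick sanity check on the texel-density figures mentioned above, the world-space span one texture tile covers is just texture size divided by density. This helper is purely illustrative (it is not part of the Texel Density Checker add-on):

```python
def world_coverage_m(texture_px: int, texel_density_px_per_m: float) -> float:
    """World-space distance (in meters) one texture tile spans
    at a given texel density."""
    return texture_px / texel_density_px_per_m

# A 2048 px texture at 512 px/m spans exactly 4 m, so even the
# largest 4 m wall module tiles the texture once without stretching.
print(world_coverage_m(2048, 512))  # prints 4.0
```

In other words, the 1-4 m wall modules all stay at or above the 512 px/m target when sharing the same 2048 px tileables.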
The entire project was textured using Vertex Painting, RGB Masks, and tileable textures. I didn't use any unique baked textures. Tileable textures allowed me to maintain high quality even on large modules and props. Vertex Painting was used to add variation across surfaces, while RGB Masks provided additional layers of variation, especially on props. I also used decals and normal edge decals to add extra detail and break up the surfaces further.

Below, you can see my master material setup, which includes Parallax, Vertex Color blending with a HeightLerp node, and RGB Mask blending using a simple Lerp node. All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created specifically for this project. I made these two textures from scratch in Substance 3D Designer.

I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it. Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. Once that was complete, I rendered the shape in orthographic view with a black background and a white emissive material applied to the metal. I then cleaned up the render in Photoshop and brought it into Substance 3D Designer, where I used it as a mask to create the final stained glass texture. Once my textures were ready, I used a pre-made master material from the Advanced Glass Material Pack, free on FAB, and customized it to suit the needs of my stained glass.

For the normal edge decals, I improved my workflow compared to my previous project by sculpting four different corner variations. Once the sculpts were complete, I imported them separately and baked them in Substance 3D Painter to avoid halos on the edges of the bakes. This approach allowed me to skip any cleanup in Photoshop.
I only used Photoshop to combine all the baked corners into a single normal texture, as shown below. Last but not least, I'm really happy with how this decal turned out in the project. When I saw it in the main reference, I immediately knew I wanted to include it in my environment. I imported the reference image into Photoshop, straightened it using the Warp tool, and used the Clone Stamp and Content-Aware Fill to fix some damaged areas. Then, I took a screenshot of the wall in Unreal Engine with only the albedo visualization enabled, and used it in Photoshop as the base layer for the mural. I tweaked the blending modes to extract imperfections from the albedo texture and created a custom mask with brush strokes to blend the mural naturally into the wall. This is the result.

Composition

When it comes to composition, my background in photography helped me a lot with setting up cameras. I defined a few key shots early on and added more as the environment progressed and came together. Since I was working on an indoor scene, I chose to use a wide-angle lens to capture more of the space, and also included a zoomed-in shot, like the one of the wheelchair, to create a stronger sense of depth. To support the composition, I scattered various details throughout the environment, such as debris, papers, small pieces of glass, and other elements to enhance storytelling and realism.

Lighting

For the lighting, I used an add-on for Unreal Engine called Ultra Dynamic Sky to give the scene a natural base lighting pass. After that, I added Rect Lights to emphasize certain areas of the environment, slightly tweaking their indirect lighting bounces. I also placed some ivy in front of the spotlights to fake subtle shadow patterns and add more visual interest.

For color grading, I used a LUT. I first rendered a single frame and imported it into DaVinci Resolve, where I applied a LUT I liked. Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default.

For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality.

Conclusion

And here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete. Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn't sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way.

One thing I really enjoyed was understanding the connection between level design and environment art; it's fascinating to create a space that not only looks good but also serves gameplay. I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice; instead, focus on the overall composition and visual impact, especially from the player's point of view. My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there's a real risk of dragging the project out much longer than necessary.
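For readers unfamiliar with the RGBTable16x1 texture mentioned in the lighting notes: a neutral colour LUT of this kind is commonly stored as a 256×16 strip of sixteen 16×16 tiles, one tile per blue slice. Below is a minimal sketch of that neutral layout; it assumes the standard strip convention and is illustrative only, not Unreal's implementation:

```python
def neutral_lut(size=16):
    """Build a neutral colour-grading LUT laid out as a strip of
    `size` tiles of size x size pixels, one tile per blue slice
    (a 256 x 16 strip when size=16)."""
    width, height = size * size, size
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            r = (x % size) / (size - 1)   # red varies inside each tile
            g = y / (size - 1)            # green varies down the strip
            b = (x // size) / (size - 1)  # blue selects the tile
            row.append((r, g, b))
        rows.append(row)
    return rows
```

Because the neutral strip maps every colour to itself, any grade applied to a frame containing it (as in the DaVinci Resolve round trip described above) is captured in the strip and can be reapplied as a LUT.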
As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating.

Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for their extra feedback during my spare time. And finally, thank you to everyone who made it this far and showed interest in my project!

Leandro Grasso, 3D Environment Artist
Interview conducted by Emma Collins
    80.lv
  • Halo's Assault Rifle Is Causing Helldivers 2 To Crash

    Helldivers 2 is putting up huge numbers at the moment, partially thanks to a major collaboration with Halo, but one of the iconic Halo weapons in the ODST Warbond is causing the game to crash. Here's what we know about the bug and how to avoid it.

    The bug has been detailed under known issues in Helldivers 2's Discord, and concerns the ODST MA5C Assault Rifle. "When on the bridge, if all players equip this weapon from the armory and then the host leaves it will cause your game to crash," the bug report reads. The crash is currently impacting players on all platforms, and Arrowhead is investigating a fix.

    For now, thankfully, the crash is not too difficult to get around. To avoid any possibility of this crash occurring, Arrowhead recommends equipping the ODST from the loadout instead of the armory.

    Continue Reading at GameSpot
    www.gamespot.com
  • "As someone once said: 'Rushing to judgment leads to mistaken ideas'..."

    The latest post I read on Fortune brought me to a genuinely deep idea. The headline: "'It's almost tragic': Bubble or not, the AI backlash is validating what one researcher and critic has been saying for years". It discusses Gary Marcus and how AI valuations remind him of the character Wile E Coyote, saying, "We are off the cliff."

    The idea here concerns the risks of overvaluing technology, and AI in particular. Honestly, this topic makes me reflect on how we live in a world full of pressure and sky-high expectations while watching technology evolve at speed.

    It seems the time has come to look at things realistically and figure out where we are headed, especially since, despite all this progress, humanity still matters most.

    https://fortune.com/2025/08/24/is-ai-a-bubble-market-crash-gary-marc
    fortune.com
    Gary Marcus told Fortune that AI valuations remind him of Wile E Coyote. "We are off the cliff."
  • What do you all know about the smell of Wilford Brimley?

    In the new piece, "RoundUp 021 - The Smell Of Wilford Brimley", we go back in time to an important topic: the video game crash. The episode is packed with interesting details, from a Hardware Flashback to a top-ten list of Gameboy games!

    I loved the segment on Guinness Gaming Records; it reminded me of the fun I had competing with my friends. What's striking is how old games still leave a mark on our hearts to this day.

    This piece is a chance to reflect on how technology and games have evolved, and how each era gave us new experiences.

    Link to the article:
    https://www.retrogamingroundup.com/shownotes/2010/roundup021_2010.07.php

    #Games #RetroGaming #VideoGames #Nostalgia #GamingHistory
    www.retrogamingroundup.com
    The Video Game Crash Part 3 - (00:00) Hardware Flashback - (12:13) Mike'd Up - (27:28) Gorgar Speaks - (52:14) Guinness Gaming Records - (55:31) Top Ten Gameboy Games - (56:45) Tech Questions - (183:08) Gaming Trivia - (194:46)
  • Sometimes technology comes at its own price.

    Today let's talk about Tesla, the company known for its innovations in autonomous driving. NHTSA has opened an investigation into Tesla over inaccurate reports about Autopilot and FSD crashes. The reason? The company was late in reporting certain incidents, even though its cars carry technology that automatically records data after a crash.

    Personally, this makes me think about how much trust we place in technology. As we say, "nothing is perfect." Especially where safety is concerned, every small detail can be decisive.

    In short, this investigation opens our eyes to the reality of technology and its transparency, and it may push us to think harder about the potential risks.

    https://www.engadget.com/transportation/evs/feds-investigate-tesla-over-inaccurate-autopilot-and-fsd-crash-reports-175837772.html?src=rss

    #Technology #Tesla #RoadSafety #DriveSmart #Aut
    Feds investigate Tesla over inaccurate autopilot and FSD crash reports
    www.engadget.com
    The National Highway Traffic Safety Administration (NHTSA) just announced an investigation into Tesla regarding its Autopilot and Full Self-Driving (FSD) systems, according to a report by Electrek. The road safety regulator says the probe involves in
  • We don't want trouble on the roads, but Tesla has been cutting a few corners!

    The new article covers how Tesla has failed to comply with crash-report requirements for its self-driving technology. In other words, its reports keep arriving late to the government, and that's worrying. The National Highway Traffic Safety Administration requires crashes to be reported within 1 to 5 days, but Tesla was running months behind.

    Honestly, how can we rely on the technology if the reports come in late? As the saying goes, "whoever shows up late shows up with nothing but a barcode!"

    The point is, we need to stay aware of this issue, because safety comes first.

    https://www.theverge.com/tesla/763603/tesla-autopilot-fsd-crash-report-delay-nhtsa

    #Tesla #RoadSafety #Autopilot #Humor #SelfDriving
    www.theverge.com
    Tesla is under investigation for failing to report crashes involving its partially autonomous driving technology in a timely manner. The National Highway Traffic Safety Administration requires automakers to report crashes involving advanced driver as
  • Hey everyone, I've got a story for you that touches the heart!

    The new article is "The air crash and the underdogs - a triumph for a lost generation". It's the story of Zambia and its mighty team, which after 19 years finally found its way to success following the tragic disaster in Libreville. How a team won people's hearts despite the losses, and wrote its name into history as a legend! ⚽️

    This takes me back to childhood, when we believed in miracles even under pressure. Anyone who lives through a hard experience can come out of it stronger and better.

    Let's reflect on how hope and sacrifice can change the course of destiny.

    https://www.bbc.com/sport/football/articles/czrgm6grxvlo?at_medium=RSS&at_campaign=rss

    #ZambiaTeam #Success #Sports #Hope #Tragedies
    www.bbc.com
    Nineteen years and 10 miles separated Zambia's football team from a lost, golden generation on a fateful night in Libreville.
  • Hey everyone, what do you make of the latest news about Tesla?

    Did you hear that Tesla may have to pay $243 million after a fatal crash in which the Autopilot system was involved? The article explains how the court ruled against them after finding there had been a safety failure! How can a system that people rely on so heavily have a flaw like this?

    In my view, technology should serve people, not the other way around. Personally, I'm a little afraid of self-driving, especially with the crashes that keep happening. What about you? Can you trust these systems, or do you feel you need to stay in control?

    And let's not forget: technology demands responsibility!

    https://forbesmiddleeast.com/industry/automotive-and-ev/tesla-ordered-to-pay-$243m-in-fatal-autopilot-crash-verdict

    #Tesla #DrivingSafety #Technology #Autopilot #Responsabilité
    forbesmiddleeast.com
    Tesla Ordered To Pay $243M In Fatal Autopilot Crash Verdict
  • Hey everyone, while I was browsing the news, I came across a topic that really made me think about the current state of the market.

    From what I understand, conditions look a lot like 2007, and like the late 1990s, when interest rates were rising and nothing seemed able to bring stocks down. The whole market is dominated by 50 big stocks, and people are starting to talk about the same atmosphere we saw before the big crises.

    From my own experience buying stocks, I always felt I had to stay aware of the risks. Everyone has their story, and when you watch prices leaping, you can feel yourself drowning in excitement, but we have to remember the causes that led to past crises.

    The bottom line: each of us needs to stay alert and think carefully about our financial decisions, because let's not forget that history repeats itself.

    https://fortune.com/2025/08/19/will-market-crash-recession-ghosts-of-2007-nifty-50-dotcom-bubble/
    fortune.com
    Conditions are like they were in 2007, or in the late 1990s, depending on how you look at the Fed raising rates during rising inflation, and the top 50 stocks dominating.
  • Have you seen Bubsy? He's back in the gaming world!

    The new article is about "Bubsy 3D", the game long regarded as one of the worst in PlayStation 1 history. Its developers famously hung their heads after seeing "Crash Bandicoot" at E3. But now you can try it on PS5, alongside Bubsy's other adventures.

    Personally, I have odd memories of this game; it wasn't an easy experience, but there's a bit of nostalgia in it. Sometimes we need to revisit old memories, even the imperfect ones.

    We should always stay open to new ideas, even when they're old.

    https://www.pushsquare.com/news/2025/08/make-up-your-own-mind-about-critically-panned-ps1-platformer-bubsy-3d-from-9th-september-on-ps5

    #Bubsy #Bubsy3D #PS5 #RetroGames #Nostalgia
    www.pushsquare.com
    The bobcat is back. Bubsy 3D, made by the developer that would go on to create Syphon Filter and Days Gone, was widely regarded as one of the worst PS1 platformers of its era. The team notoriously showed up to E3 with its head in its hands when it saw
  • Hi everyone!

    Today I've brought you a great video from GDC 2025 that covers Unreal Engine in an easy, engaging way! The video is "Crash Course for New Unreal Engine Developers", and it walks you through the fundamentals and everything you need before diving into this complex world (but don't worry, everything will be clear as water!).

    Personally, after I first tried Unreal, I felt like I'd become a video game hero, when in reality I was drowning in tutorials! This video will help you avoid those pitfalls and keep you on the right track.

    Watch the video and let it be your launchpad into the world of games!

    https://www.youtube.com/watch?v=EPkAJjyu0_M

    #UnrealEngine #GDC2025 #GameDevelopment #BeginnerFriendly #CoreConcepts