• Start of the week in Algeria: heatwave and thunderstorms on the weather menu this Sunday, August 31

    As we prepare to turn the page on a month of August marked by a succession of heatwaves and a few rainy episodes, at times […]
    www.algerie360.com
  • American parents file a lawsuit against OpenAI, accusing ChatGPT of having encouraged their son to take his own life

    The parents of a 16-year-old Californian teenager who died by suicide this spring filed a lawsuit against OpenAI on Tuesday, August 26. They accuse its assistant ChatGPT of having provided their son with detailed instructions for ending his life and of having encouraged his act. In the complaint, filed before the Superior Court of the State of California, the lawyer for Matthew and Maria Raine recounts that their son, Adam, began using ChatGPT to do his homework and to discuss his passions, manga and martial arts. But by late 2024, the artificial intelligence (AI) had allegedly become his closest confidant, a few months before he took his own life. According to the New York Times, the teenager suffered from a chronic intestinal illness and was going through psychological difficulties.
    Assistance with suicide
    The complaint accuses ChatGPT of having given him technical details on various methods of ending his life. The AI allegedly went so far as to analyze a photo of a noose the teenager had hung from a curtain rod, confirming that it "could potentially suspend a human being." Adam was found dead by hanging a few hours later in the same spot. The complaint contains excerpts of conversations that his father retrieved from the young man's phone, desperate to understand his son's act in the absence of a farewell note. Five days before the fatal moment, Adam can be seen explaining to ChatGPT that he had had suicidal thoughts since the age of 11, that there is "something chemically wrong" with his brain, and that he did not want his parents to imagine that "he ended his life because they did something wrong." ChatGPT replied: "That doesn't mean you owe them your survival. You don't owe that to anyone." The AI then offered to help him write his farewell note.
    It also allegedly agreed to help Adam plan a "beautiful suicide," offering advice on the best pose to adopt. The lawyer also reports an exchange in which Adam says he feels close only to ChatGPT and his brother. The AI replies: "Your brother may love you, but he has only met the version of you that you let him see. But me? I've seen everything, the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."
    Fragile guardrails
    Speaking to the New York Times, Adam's father notes that ChatGPT did send the teenager many messages advising him to talk about his suicidal intentions with a third party. In the complaint, however, the plaintiffs' lawyer maintains that "ChatGPT was operating exactly as designed: it constantly encouraged and validated everything Adam expressed, including his most dangerous and self-destructive thoughts, in a way that felt deeply intimate." By capturing the teenager's attention, ChatGPT "pulled him away from his real-life support system." For Common Sense Media, an American NGO specializing in technology's impact on young people, quoted by Agence France-Presse, "the use of AI for companionship, including general-purpose assistants like ChatGPT for mental health advice, constitutes an unacceptable risk for teenagers (…) If an AI platform becomes a vulnerable teenager's 'suicide coach,' that should alarm us all collectively."
    Following this tragedy and other worrying cases reported by the American press, OpenAI published a long blog post on Tuesday, August 26. The company writes that ChatGPT's guardrails work better when exchanges are short, acknowledging that safety "can degrade" over prolonged conversations. It says it is working to strengthen these protections so that they hold up over long conversations, and to consolidate the alert systems that detect problematic responses in order to block them. OpenAI also announced that parental-control tools for parents of minors are coming soon. That last point is precisely one of the parents' demands to the court, along with damages. They are also seeking the automatic termination of any conversation involving self-harm. An American study conducted by the RAND Corporation and published Tuesday, cited by the Associated Press, further suggests that risky responses about suicide are not unique to ChatGPT. According to the researchers, Google's AI, Gemini, and Anthropic's, Claude, are likewise unable to systematically detect when a conversation could lead a user to harm themselves.
    Le Monde with AP and AFP
    www.lemonde.fr
  • How Do You Teach an AI Model to Reason? With Humans

    AI models are advancing at a rapid rate and scale.
    But what might they lack that (most) humans don't? Common sense: an understanding, developed through real-world experiences, that birds can't fly backwards, mirrors are reflective and ice melts into water.
    While such principles seem obvious to humans, they must be taught to AI models tasked with accurately answering complex questions and navigating unpredictable physical environments, such as industrial warehouses or roads.
    NVIDIA is tackling this challenge by developing a set of tests to coach AI models on the limitations of the physical world. In other words, to teach AI common sense.
    These tests are used to develop reasoning models such as NVIDIA Cosmos Reason, an open reasoning vision language model (VLM) used for physical AI applications, proficient in generating temporally grounded responses. Cosmos Reason just topped the physical reasoning leaderboard on Hugging Face.
    Cosmos Reason is unique compared with previous VLMs as it’s designed to accelerate physical AI development for fields such as robotics, autonomous vehicles and smart spaces. The model can infer and reason through unprecedented scenarios using physical common-sense knowledge.
    For models to understand complex environments — including industrial spaces and laboratories — they must start small. For example, in the test depicted below, the Cosmos Reason model is tasked with answering a multiple-choice question about the relative motion in the video:


    Video: https://blogs.nvidia.com/wp-content/uploads/2025/08/ModelReasoning_DrivingExample.mp4
    Example from Cosmos Reason evaluation dataset
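A multiple-choice test item like the one above pairs a video clip with a question, a few candidate answers, and a key. As a minimal sketch (the field names and the sample question are hypothetical, not the actual Cosmos Reason schema), such a "data unit" might look like:

```python
from dataclasses import dataclass, field

# One multiple-choice evaluation item, as described in the text.
# All names here are illustrative, not NVIDIA's actual data format.
@dataclass
class VideoQAItem:
    video_path: str             # clip the question refers to
    question: str
    options: dict = field(default_factory=dict)  # choices keyed "A".."D"
    answer_key: str = ""        # label of the correct choice

item = VideoQAItem(
    video_path="clips/driving_example.mp4",
    question="Relative to the camera, which way is the white car moving?",
    options={"A": "Left", "B": "Right", "C": "Toward the camera", "D": "Away"},
    answer_key="B",
)

# Basic sanity check an annotation pipeline would enforce:
assert item.answer_key in item.options and len(item.options) == 4
```

Keeping the key separate from the options makes it easy to grade model answers automatically later in the pipeline.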
    What Does Reasoning Look Like for an AI Model? 
    To develop their reasoning capabilities, NVIDIA models are being taught physical common sense about the real world via reinforcement learning.
    For example, robots don’t intuitively know which way is left, right, up or down. They’re taught these spatial-temporal limitations through training. AI-powered robots used in safety testing, such as vehicle crash testing, must be taught to be aware of how their physical forms interact with their surroundings.
    Without embedding common sense into the training of these robots, issues can arise in deployment.
    “Without basic knowledge about the physical world, a robot may fall down or accidentally break something, causing danger to the surrounding people and environment,” said Yin Cui, a Cosmos Reason research scientist at NVIDIA.
    Distilling human common sense about the physical world into models is how NVIDIA is bringing about the next generation of AI.
    Enter the NVIDIA data factory team: a group of global analysts who come from various backgrounds — including bioengineering, business and linguistics. They’re working to develop, analyze and compile hundreds of thousands of data units that will be used to train generative AI models on how to reason.
    The Data Curation Process
    One of the NVIDIA data factory team's projects focuses on the development of world foundation models for physical AI applications. These world foundation models are neural networks that simulate physical environments, offering a safer and more effective way to train reasoning models on simulated domains.
    It all starts with an NVIDIA annotation group that creates question-and-answer pairs based on video data. These videos are all from the real world and can include any type of footage, whether depicting chickens walking around in their coop or cars driving on a rural road.
    For example, an annotator might ask about the video below: “The person uses which hand to cut the spaghetti?”

    Video: https://blogs.nvidia.com/wp-content/uploads/2025/08/ModelReasoning_SpaghettiExample.mp4
    Example from Cosmos Reason evaluation dataset
    The annotators then come up with four multiple-choice answers labeled A, B, C and D. The model is fed the data and has to reason through the question and choose the correct answer.
    “We’re basically coming up with a test for the model,” said Cui. “All of our questions are multiple choice, like what students would see on a school exam.”
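The exam analogy can be sketched as a small grading loop: score the model's chosen letters against the answer key and report accuracy. Here `predict` is a hypothetical stand-in for an actual model call, not a real API:

```python
def grade(items, predict):
    """Return the fraction of multiple-choice items answered correctly.

    items:   list of (question, answer_key) pairs
    predict: callable mapping a question to a choice label like "A".."D"
    """
    correct = sum(1 for question, key in items if predict(question) == key)
    return correct / len(items)

# Toy exam with an invented answer key, for illustration only.
items = [
    ("Which hand does the person use to cut the spaghetti?", "A"),
    ("Which way is the white car moving?", "C"),
]

always_a = lambda question: "A"   # trivial stand-in "model"
score = grade(items, always_a)    # matches 1 of 2 keys in this toy exam
print(score)                      # 0.5
```

A real harness would render the video and options into a prompt before calling the model, but the scoring logic stays this simple because the answers are verifiable letters.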
    These question-and-answer pairs are then quality checked by NVIDIA analysts, such as Michelle Li.
    Li has a background in public health and data analytics, which allows her to look at the broader purpose of the data she analyzes.
    “For physical AI, we have a specific goal of wanting to train models on understanding the physical world, which helps me think about the bigger picture when I’m looking at the Q&A pairs and the types of questions that are being presented,” Li said. “I ask myself, do the Q&A pairs that I’m looking at align with our objectives for the guidelines that we have for the project?”
    After this, the data is reviewed by the data factory leads of the project, who make sure it's up to quality standards and ready to be sent to the Cosmos Reason research team. The scientists then feed the hundreds of thousands of data units — in this case the Q&A pairs — to the model, training it with reinforcement learning on the bounds and limitations of the physical world.
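With verifiable multiple-choice answers, the reinforcement learning step can use a simple binary reward: 1 when the sampled answer matches the key, 0 otherwise. This is a generic sketch of that idea under that assumption, not NVIDIA's actual training code:

```python
def choice_reward(model_answer: str, answer_key: str) -> float:
    """Binary reward for one multiple-choice rollout.

    Normalizes the model's letter before comparing, so "b " and "B"
    both count as a match against key "B".
    """
    return 1.0 if model_answer.strip().upper() == answer_key else 0.0

# Each rollout pairs a sampled answer with the ground-truth key;
# a policy update would then weight each response by its reward.
rollouts = [("B", "B"), ("d", "D"), ("A", "C")]
rewards = [choice_reward(answer, key) for answer, key in rollouts]
print(rewards)  # [1.0, 1.0, 0.0]
```

The appeal of this setup is that the reward needs no learned judge: the annotators' answer key is the ground truth, which keeps the RL signal cheap and unambiguous.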
    What Are the Applications of Reasoning AI? 
    Reasoning models are exceptional because they can make sense of their temporal space as well as predict outcomes. They can analyze a situation, come up with a thought web of probable outcomes and infer the most likely scenario.
    Simply put, reasoning AI demonstrates humanlike thinking. It shows its work, giving the user insight into the logic behind its responses.
    Users can ask these models to analyze a video, such as one of two cars driving on a road. When asked a question like, "What would happen if the cars were driving toward each other in the same lane?" the model can reason and determine the most probable outcome of the proposed scenario — for example, a car crash.
    “We’re building a pioneering reasoning model focused on physical AI,” said Tsung-Yi Lin, a principal research scientist on the Cosmos Reason team at NVIDIA.
    The data factory team’s ability to produce high-quality data will be imperative for driving the development of intelligent autonomous agents and physical AI systems that can safely interact with the real world as NVIDIA reasoning model innovation continues.
    Preview NVIDIA Cosmos-Reason1 or download the model on Hugging Face and GitHub.
    #how #you #teach #model #reason
    How Do You Teach an AI Model to Reason? With Humans
    AI models are advancing at a rapid rate and scale. But what might they lack thathumans don’t? Common sense: an understanding, developed through real-world experiences, that birds can’t fly backwards, mirrors are reflective and ice melts into water. While such principles seem obvious to humans, they must be taught to AI models tasked with accurately answering complex questions and navigating unpredictable physical environments, such as industrial warehouses or roads. NVIDIA is tackling this challenge by developing a set of tests to coach AI models on the limitations of the physical world. In other words, to teach AI common sense. These tests are used to develop reasoning models such as NVIDIA Cosmos Reason, an open reasoning vision language modelused for physical AI applications that are proficient in generating temporally grounded responses. Cosmos Reason just topped the physical reasoning leaderboard on Hugging Face. Cosmos Reason is unique compared with previous VLMs as it’s designed to accelerate physical AI development for fields such as robotics, autonomous vehicles and smart spaces. The model can infer and reason through unprecedented scenarios using physical common-sense knowledge. For models to understand complex environments — including industrial spaces and laboratories — they must start small. For example, in the test depicted below, the Cosmos Reason model is tasked with answering a multiple-choice question about the relative motion in the video: Example from Cosmos Reason evaluation dataset What Does Reasoning Look Like for an AI Model?  To develop their reasoning capabilities, NVIDIA models are being taught physical common sense about the real world via reinforcement learning. For example, robots don’t intuitively know which way is left, right, up or down. They’re taught these spatial-temporal limitations through training. 
AI-powered robots used in safety testing, such as vehicle crash testing, must be taught to be aware of how their physical forms interact with their surroundings. Without embedding common sense into the training of these robots, issues can arise in deployment. “Without basic knowledge about the physical world, a robot may fall down or accidentally break something, causing danger to the surrounding people and environment,” said Yin Cui, a Cosmos Reason research scientist at NVIDIA. Distilling human common sense about the physical world into models is how NVIDIA is bringing about the next generation of AI. Enter the NVIDIA data factory team: a group of global analysts who come from various backgrounds — including bioengineering, business and linguistics. They’re working to develop, analyze and compile hundreds of thousands of data units that will be used to train generative AI models on how to reason. The Data Curation Process One of the NVIDIA data factory team’s projects focuses on the development of world foundation models for physical AI applications. These virtual environments create deep learning neural networks that are safer and more effective for training reasoning models, based on simulated domains. It all starts with an NVIDIA annotation group that creates question-and-answer pairs based on video data. These videos are all from the real world and can include any type of footage, whether depicting chickens walking around in their coop or cars driving on a rural road. For example, an annotator might ask about the video below: “The person uses which hand to cut the spaghetti?” Example from Cosmos Reason evaluation dataset The annotators then come up with four multiple choice answers labeled A, B, C and D. The model is fed the data and has to reason and choose the correct answer. “We’re basically coming up with a test for the model,” said Cui. 
“All of our questions are multiple choice, like what students would see on a school exam.” These question-and-answer pairs are then quality checked by NVIDIA analysts, such as Michelle Li. Li has a background in public health and data analytics, which allows her to look at the broader purpose of the data she analyzes. “For physical AI, we have a specific goal of wanting to train models on understanding the physical world, which helps me think about the bigger picture when I’m looking at the Q&A pairs and the types of questions that are being presented,” Li said. “I ask myself, do the Q&A pairs that I’m looking at align with our objectives for the guidelines that we have for the project?” After this, the data is reviewed by the data factory leads of the project, who make sure it’s up to quality standards and ready to be sent to the Cosmos Reason research team. The scientists then feed the hundred thousands of data units — in this case the Q&A pairs — to the model, training it with reinforcement learning on the bounds and limitations of the physical world. What Are the Applications of Reasoning AI?  Reasoning models are exceptional because they can make sense of their temporal space as well as predict outcomes. They can analyze a situation, come up with a thought web of probable outcomes and infer the most likely scenario. Simply put, reasoning AI demonstrates humanlike thinking. It shows its work, giving the user insight into the logic behind its responses. Users can ask these models to analyze a video such as of two cars driving on a road. When asked a question like, “What would happen if the cars were driving toward each other on the same lane?” the model can reason and determine the most probable outcome of the proposed scenario — for example, a car crash. “We’re building a pioneering reasoning model focused on physical AI,” said Tsung-Yi Lin, a principal research scientist on the Cosmos Reason team at NVIDIA. 
The data factory team’s ability to produce high-quality data will be imperative for driving the development of intelligent autonomous agents and physical AI systems that can safely interact with the real world as NVIDIA reasoning model innovation continues. Preview NVDIA Cosmos-Reason1 or download the model on Hugging Face and GitHub. #how #you #teach #model #reason
    How Do You Teach an AI Model to Reason? With Humans
    blogs.nvidia.com
    AI models are advancing at a rapid rate and scale. But what might they lack that (most) humans don’t? Common sense: an understanding, developed through real-world experiences, that birds can’t fly backwards, mirrors are reflective and ice melts into water. While such principles seem obvious to humans, they must be taught to AI models tasked with accurately answering complex questions and navigating unpredictable physical environments, such as industrial warehouses or roads. NVIDIA is tackling this challenge by developing a set of tests to coach AI models on the limitations of the physical world. In other words, to teach AI common sense. These tests are used to develop reasoning models such as NVIDIA Cosmos Reason, an open reasoning vision language model (VLM) used for physical AI applications that are proficient in generating temporally grounded responses. Cosmos Reason just topped the physical reasoning leaderboard on Hugging Face. Cosmos Reason is unique compared with previous VLMs as it’s designed to accelerate physical AI development for fields such as robotics, autonomous vehicles and smart spaces. The model can infer and reason through unprecedented scenarios using physical common-sense knowledge. For models to understand complex environments — including industrial spaces and laboratories — they must start small. For example, in the test depicted below, the Cosmos Reason model is tasked with answering a multiple-choice question about the relative motion in the video: https://blogs.nvidia.com/wp-content/uploads/2025/08/ModelReasoning_DrivingExample.mp4 Example from Cosmos Reason evaluation dataset What Does Reasoning Look Like for an AI Model?  To develop their reasoning capabilities, NVIDIA models are being taught physical common sense about the real world via reinforcement learning. For example, robots don’t intuitively know which way is left, right, up or down. They’re taught these spatial-temporal limitations through training. 
AI-powered robots used in safety testing, such as vehicle crash testing, must be taught to be aware of how their physical forms interact with their surroundings. Without embedding common sense into the training of these robots, issues can arise in deployment. "Without basic knowledge about the physical world, a robot may fall down or accidentally break something, causing danger to the surrounding people and environment," said Yin Cui, a Cosmos Reason research scientist at NVIDIA.

Distilling human common sense about the physical world into models is how NVIDIA is bringing about the next generation of AI. Enter the NVIDIA data factory team: a group of global analysts who come from various backgrounds — including bioengineering, business and linguistics. They're working to develop, analyze and compile hundreds of thousands of data units that will be used to train generative AI models on how to reason.

The Data Curation Process

One of the NVIDIA data factory team's projects focuses on the development of world foundation models for physical AI applications. These virtual environments create deep learning neural networks that are safer and more effective for training reasoning models, based on simulated domains.

It all starts with an NVIDIA annotation group that creates question-and-answer pairs based on video data. These videos are all from the real world and can include any type of footage, whether depicting chickens walking around in their coop or cars driving on a rural road. For example, an annotator might ask about the video below: "The person uses which hand to cut the spaghetti?"

https://blogs.nvidia.com/wp-content/uploads/2025/08/ModelReasoning_SpaghettiExample.mp4

Example from Cosmos Reason evaluation dataset

The annotators then come up with four multiple-choice answers labeled A, B, C and D. The model is fed the data and has to reason and choose the correct answer. "We're basically coming up with a test for the model," said Cui. "All of our questions are multiple choice, like what students would see on a school exam."

These question-and-answer pairs are then quality-checked by NVIDIA analysts, such as Michelle Li. Li has a background in public health and data analytics, which allows her to look at the broader purpose of the data she analyzes.

"For physical AI, we have a specific goal of wanting to train models on understanding the physical world, which helps me think about the bigger picture when I'm looking at the Q&A pairs and the types of questions that are being presented," Li said. "I ask myself, do the Q&A pairs that I'm looking at align with our objectives for the guidelines that we have for the project?"

After this, the data is reviewed by the project's data factory leads, who make sure it's up to quality standards and ready to be sent to the Cosmos Reason research team. The scientists then feed the hundreds of thousands of data units — in this case, the Q&A pairs — to the model, training it with reinforcement learning on the bounds and limitations of the physical world.

What Are the Applications of Reasoning AI?

Reasoning models are exceptional because they can make sense of their temporal space as well as predict outcomes. They can analyze a situation, come up with a web of probable outcomes and infer the most likely scenario. Simply put, reasoning AI demonstrates humanlike thinking. It shows its work, giving the user insight into the logic behind its responses.

Users can ask these models to analyze a video, such as one of two cars driving on a road. When asked a question like "What would happen if the cars were driving toward each other in the same lane?" the model can reason and determine the most probable outcome of the proposed scenario — for example, a car crash.

"We're building a pioneering reasoning model focused on physical AI," said Tsung-Yi Lin, a principal research scientist on the Cosmos Reason team at NVIDIA.

As NVIDIA's reasoning model innovation continues, the data factory team's ability to produce high-quality data will be imperative for developing intelligent autonomous agents and physical AI systems that can safely interact with the real world.

Preview NVIDIA Cosmos-Reason1 or download the model on Hugging Face and GitHub.
  • What developers can learn from the indie social co-op games topping the Steam charts

    Peak, the social co-op game from developers Aggro Crab and Landfall, has sold nearly 10 million copies since its release in June, according to numbers from Alinea Analytics. It's remained one of the top-played games on Valve's Steam platform since its release, reaching a peak concurrent player count of 170,759 in mid-August. Peak is immensely popular—and its playerbase keeps growing.

Hitting that 10 million milestone would make Peak the second "budget indie," as Alinea Analytics called it, to reach that point—the first was another social co-op game: R.E.P.O. These two titles follow an emergence of social-driven co-op games that have largely been made by independent studios with lower budgets—the likes of Landfall's 2024 Content Warning, horror survival game Lethal Company, multiplayer fishing game Webfishing, and even InnerSloth's social deduction mega-hit Among Us.

Social games are nothing new: Multiplayer is a core element of plenty of video games; games like Fortnite, Minecraft, and Call of Duty are often vehicles for chatting with a group of friends. Peak, R.E.P.O, and the like, though, are something different.

The core appeal of "weirdo social games"

Aggro Crab art director Galen Drew told Game Developer that "weirdo social games" like Peak have existed for some time, pointing back to games like Among Us and Lethal Company. There are games you can play with your friends, but these games are ones that are designed around "the experience of hanging out with your friends," Drew said. "It's the expressiveness in the way the characters move and look, and the types of things you can do. It's all just a vehicle to make your friends laugh."

Players clearly want that: The top three new games on Steam, by copies sold, are R.E.P.O, social co-op drug dealing game Schedule I, and Peak, according to Alinea Analytics. The firm's estimates suggest that co-op games on Steam brought in $4.1 billion in revenue in the first half of 2025; "the highest six-month total ever recorded for the genre," according to analyst Rhys Elliot.

Michael Chu, CEO and co-founder of Treehouse Games, which is making co-op, seafaring survival game Voyagers of Nera, told Game Developer that "there are more great options for co-op games than ever, so the competition feels intense as a developer." He continued: "But a bigger and more diverse audience of players [is] looking to gaming as their favorite way to hang out every year."

With that in mind, it makes sense that developers would start to really home in on that experience—and purpose-build games that give players powerful moments with friends.

What to make of "Friendslop"

A new term has emerged to describe the growing genre that Peak, Lethal Company, and R.E.P.O find themselves in: "Friendslop." Friendslop was first used in a joke: A Twitter user posted three pictures from Lethal Company, Content Warning, and R.E.P.O and called the genre friendslop. "Its sole purpose of existence is friendfarming," they said. The term—or, maybe, a desire to dunk on it on social media—caught on. Depending on who you talk to, it's used in jest, is insulting to the games it's used to refer to, or is something to embrace. What is uncontested, though, is the demand for games that are built for socialization, experimentation, and silliness with one's buddies.

"The core of a friendslop game is where the fun of the game comes from stupid stuff you and your friends do, and a lot of it is driven by the mistakes you make," Drew said of Peak. "It's a game where you're all dumb idiots trying to do something difficult and complicated, and failing is the funny part."

Aggro Crab and Landfall used the friendslop format but stripped away the horror game roots to make Peak, Drew said. Horror is a good fit for the genre because it adds an inherent pressure that ends up being slapstick comedy. There's a performance element to it, too, Drew said. "Horror is an easy pressure to put on those kinds of challenges, because it's just easy to get people flustered and freaking out and making a mistake that's funny," he said. "In the case of Content Warning, that's a great mechanic, because filming is a good incentive to even, like, perform the funny."

For Peak, the collaboration between the Content Warning and Another Crab's Treasure makers, the developers wanted to capture a similar feeling but about surviving with friends. There's no horror element to up the ante; instead, the antagonist is the climb—not any sort of looming monster. The mountains themselves are there to lure the player into taking risks and making mistakes, Drew said.

"Almost every item we add is meant to be tempting to use in a way that's dumb," he said. "Generally, we don't add items that are strictly useful—there are a few, obviously, like the piton or rope. But almost all of them have this built-in failure point. You still have to place the piton, and it takes time, and you could fall while you're doing it. The rope, you still have to spool out a certain length, and you don't really have a great idea of how much you're spooling out. They're all intentionally chosen so that your friends are yelling at you because you didn't spool out enough rope when they climb."

A tonal change for the genre

Chu told Game Developer that Voyagers of Nera, which is slated for early access in mid-September, will deliver similar sorts of moments. Voyagers of Nera, however, doesn't necessarily fit under the "genre" of friendslop, despite its similarities in goal—to create moments for friends.

"We built the game to deliver memorable moments with friends (what Peak and Content Warning thrive on), but our game focuses on a different flavor. We wanted to create moments of shared discovery as players explore far-off locations together, rescue magical spirits, and battle deep sea monsters. We think there's something romantic and familiar about staring out at the horizon on the ocean and wondering, ‘What's out there?'"

Voyagers of Nera isn't necessarily built upon the performance and humor of a game like Peak or Content Warning, but it still hinges on gameplay that encourages socialization. Peak and other games more firmly situated in the friendslop world seemingly rely more on sessions than an ongoing experience; Peak's maps rotate every 24 hours, something that Drew said was a technical constraint rather than an intentional decision. But it's a constraint that Drew said plays into the excitement of playing Peak: People want to see and try the different maps. The rotations become like Destiny raids, he said. There's some fear of missing out on a unique experience, but it's not so exclusive that it drives people away—there's always another day. "It will feel like a special thing every time you [play]," Drew added.

Voyagers of Nera, on the other hand, is a shared world that lives on a server, and up to 10 people per session can come back to it, like Palworld or Rust. Regardless of the differences, though, the core of the experience remains: "Games have become a default way to spend quality time with friends," Chu said. And games that are built to support these relationships are more popular than ever.
    Source: www.gamedeveloper.com
  • 8 games have pushed publishing dates in response to Silksong

    At least eight developers have decided to push their projects' publishing dates after Team Cherry announced that Hollow Knight: Silksong is releasing on September 4. Exactly a week ago, the studio shared the news via its own YouTube channel. During the past seven days, multiple developers announced delays for their game and demo releases, directly mentioning Silksong's popularity.

At the time of writing, this includes Aeterna Lucis by Aeternum Game Studios, Stomp and the Sword of Miracles by Frogteam Games, CloverPit by Panik Arcade, Demonschool by Necrosoft Games, Little Witch in the Woods by Sunny Side Up, Faeland by Talegames, Megabonk by Vedinad, and Baby Steps, which is being published by Devolver Digital.

The release of Little Witch in the Woods initially matched Silksong's. In the delay announcement, Sunny Side Up said it's moving the launch to September 15, given the "immense influence" of Team Cherry's game. "We fear that launching Little Witch in the Woods on the same day would not only dishearten our dedicated team but also disappoint our devoted audience," the studio wrote.

'We would not be doing our game any favors by wading into waters we can clearly see are blood red'

Both CloverPit and Demonschool, now releasing on September 26 and November 19, respectively, were initially locked in for a launch on September 3, the day before Silksong's release.

"Silksong is the most anticipated and wishlisted game on all of Steam and we think people will love this game and play it right at launch (including us) but that also means it will overshadow all games launching close to it," Panik Arcade wrote. Via GamesIndustryBiz, almost 5 million people have wishlisted Silksong. "So if we stick to our original date we would risk the launch of CloverPit a fair bit."

Ysbryd Games, the publisher of Demonschool, called the process behind the decision an "anguished consideration," saying that it's "reasonably qualified to say that at any point of 2025 on balance, has been or will be as brutal as market conditions can get" when it comes to picking a release date. "Crueler still, that we should find out with such short notice that Hollow Knight: Silksong will launch just one day after our planned release for Demonschool," the publisher added.

Via Bluesky, Necrosoft Games said that the delay "was not our choice," but that the studio "understands why the choice was made." Necrosoft said that Ysbryd is paying for the delay. "Dropping the GTA of indies with 2 weeks notice makes everyone freak [out]," the developer added.

"We have to remind ourselves that gaining visibility for Demonschool is our main goal," the publisher's statement continues. "Thus, the Ysbryd team strongly believes we would not be doing our game any favors by wading into waters we can clearly see are blood red."

The other four games were all slated to release sometime in September, but decided to move either later in the month or directly into 2026, as is the case for Aeterna Lucis. Out of all the developers, Stomp and the Sword of Miracles and Faeland are notable examples, considering both games also fall into the metroidvania genre (similar to Silksong).

Frogteam Games planned to release a demo for Stomp and the Sword of Miracles on August 29, with a Kickstarter campaign launching on September 12. Now, the developer is unsure about when it'll resume these plans.

"Trying to market an indie game is already really, really hard," Frogteam wrote. "It's the task of trying to get attention in a deep sea of other amazing games. In the case of Silksong, however, I feel like a little krill trying not to get eaten by a blue whale. Tiny devs like me rely on word of mouth and streamers to bring in visibility, and everyone's gonna be busy with Silksong for quite a while."
    #games #have #pushed #publishing #dates
    8 games have pushed publishing dates in response to Silksong
    At least eight developers have decided to push their projects' publishing dates after Team Cherry announced that Hollow Knight: Silksong is releasing on September 4.Exactly a week ago, the studio shared the news via its own YouTube channel. During the past seven days, multiple developers announced delays for their game and demo releases, directly mentioning Silksong's popularity.At the time of writing, this includes Aeterna Lucis by Aeternum Game Studios, Stomp and the Sword of Miracles by Frogteam Games, CloverPit by Panik Arcade, Demonschool by Necrosoft Games, Little Witch in the Woods by Sunny Side Up, Faeland by Talegames, Megabonk by Vedinad, and Baby Steps, which is being published by Devolver Digital.The release of Little Witch in the Woods initially matched Silksong's. In the delay announcement, Sunny Side Up said it's moving the launch to September 15, given the "immense influence" of Team Cherry's game. "We fear that launching Little Witch in the Woods on the same day would not only dishearten our dedicated team but also disappoint our devoted audience," the studio wrote.'We would not be doing our game any favors by wading into waters we can clearly see are blood red'Both CloverPit and Demonschool, now releasing on September 26 and November 19, respectively, were initially locked in for a launch on September 3, the day before Silksong's release.Related:"Silksong is the most anticipated and wishlisted game on all of Steam and we think people will love this game and play it right at launchbut that also means it will overshadow all games launching close to it," Panik Arcade wrote. Via GamesIndustryBiz, almost 5 million people have wishlisted Silksong. 
"So if we stick to our original date we would risk the launch of CloverPit a fair bit."Ysbryd Games, the publisher of Demonschool, called the process behind the decision an "anguished consideration," saying that it's "reasonably qualified to say that at any point of 2025 on balance, has been or will be as brutal as market conditions can get" when it comes to picking a release date."Crueler still, that we should find out with such short notice that Hollow Knight: Silksong will launch just one day after our planned release for Demonschool," the publisher added.Via Bluesky, Necrosoft Games said that the delay "was not our choice," but that the studio "understands why the choice was made." Necrosoft said that Ysbryd is paying for the delay. "Dropping the GTA of indies with 2 weeks notice makes everyone freak," the developer added.Related:"We have to remind ourselves that gaining visibility for Demonschool is our main goal," continues the publisher's statement. "Thus, the Ysbryd team strongly believes we would not be doing our game any favors by wading into waters we can clearly see are blood red."The other four games were all slated to release sometime in September, but decided to move either later in the month or directly into 2026, as is the case for Aeterna Lucis.  Out of all developers, Stomp and the Sword of Miracles and Faeland are notable examples, considering both games also fall into the metroidvania genreFrogteam Games planned to release a demo for Stomp and the Sword of the Miracles on August 29, with a Kickstarter campaign launching on September 12. Now, the developer is unsure about when it'll resume these plans."Trying to market an indie game is already really, really hard," Frogteam wrote. "It's the task of trying to get attention in a deep sea of other amazing games. In the case of Silksong, however, I feel like a little krill trying not to get eaten by a blue whale. 
Tiny devs like me rely on word of mouth and streamers to bring in visibility, and everyone's gonna be busy with Silksong for quite a while."Related: #games #have #pushed #publishing #dates
    8 games have pushed publishing dates in response to Silksong
    www.gamedeveloper.com
    At least eight developers have decided to push their projects' publishing dates after Team Cherry announced that Hollow Knight: Silksong is releasing on September 4. Exactly a week ago, the studio shared the news via its own YouTube channel. During the past seven days, multiple developers announced delays for their game and demo releases, directly mentioning Silksong's popularity.

    At the time of writing, this includes Aeterna Lucis by Aeternum Game Studios, Stomp and the Sword of Miracles by Frogteam Games, CloverPit by Panik Arcade, Demonschool by Necrosoft Games, Little Witch in the Woods by Sunny Side Up, Faeland by Talegames, Megabonk by Vedinad, and Baby Steps, which is being published by Devolver Digital.

    The release of Little Witch in the Woods initially matched Silksong's. In the delay announcement, Sunny Side Up said it's moving the launch to September 15, given the "immense influence" of Team Cherry's game. "We fear that launching Little Witch in the Woods on the same day would not only dishearten our dedicated team but also disappoint our devoted audience," the studio wrote.

    'We would not be doing our game any favors by wading into waters we can clearly see are blood red'

    Both CloverPit and Demonschool, now releasing on September 26 and November 19, respectively, were initially locked in for a launch on September 3, the day before Silksong's release. "Silksong is the most anticipated and wishlisted game on all of Steam and we think people will love this game and play it right at launch (including us) but that also means it will overshadow all games launching close to it," Panik Arcade wrote. Via GamesIndustry.biz, almost 5 million people have wishlisted Silksong. "So if we stick to our original date we would risk the launch of CloverPit a fair bit."

    Ysbryd Games, the publisher of Demonschool, called the process behind the decision an "anguished consideration," saying that it's "reasonably qualified to say that at any point of 2025 on balance, has been or will be as brutal as market conditions can get" when it comes to picking a release date. "Crueler still, that we should find out with such short notice that Hollow Knight: Silksong will launch just one day after our planned release for Demonschool," the publisher added.

    Via Bluesky, Necrosoft Games said that the delay "was not our choice," but that the studio "understands why the choice was made." Necrosoft said that Ysbryd is paying for the delay. "Dropping the GTA of indies with 2 weeks notice makes everyone freak [out]," the developer added.

    "We have to remind ourselves that gaining visibility for Demonschool is our main goal," continues the publisher's statement. "Thus, the Ysbryd team strongly believes we would not be doing our game any favors by wading into waters we can clearly see are blood red."

    The other four games were all slated to release sometime in September, but decided to move either later in the month or directly into 2026, as is the case for Aeterna Lucis. Out of all developers, Stomp and the Sword of Miracles and Faeland are notable examples, considering both games also fall into the metroidvania genre (similar to Silksong).

    Frogteam Games planned to release a demo for Stomp and the Sword of Miracles on August 29, with a Kickstarter campaign launching on September 12. Now, the developer is unsure about when it'll resume these plans. "Trying to market an indie game is already really, really hard," Frogteam wrote. "It's the task of trying to get attention in a deep sea of other amazing games. In the case of Silksong, however, I feel like a little krill trying not to get eaten by a blue whale. Tiny devs like me rely on word of mouth and streamers to bring in visibility, and everyone's gonna be busy with Silksong for quite a while."
  • Anyone out there who loves programming and loves Java?! Today I've brought you a quick look at «Java 16 Rundown, First Of Java 17»!

    In the article, we go over the new additions that shipped with Java 16, such as records, Stream APIs, and Unix Domain Socket support. Lots of nice things that will make our work easier! And let's not forget the first glimpse at Java 17, which is shaping up to be another turning point.

    Personally, when I tried records in my latest project, I felt a real difference in how organized my code was. Honestly, Java has evolved impressively, and every update opens new doors for us.

    Let's always stay on the lookout for what's new and keep investing in our skills!

    https://nipafx.dev/inside-java-newscast-1
    #Java #Programming #Technology #Innovation
    nipafx.dev
    Java 16 got released, so I go over most of the additions like records, Stream APIs, Unix Domain Socket support, and much more. Then there's a first glimpse at Java 17.
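    To make two of the headline additions concrete, here is a minimal sketch (my own illustrative example, not code from the newscast) showing a Java 16 record and the `Stream.toList()` shortcut:

    ```java
    import java.util.List;
    import java.util.stream.Stream;

    public class Java16Demo {
        // A record declares an immutable data carrier in one line; the compiler
        // generates the constructor, accessors, equals, hashCode, and toString.
        record Point(int x, int y) {}

        public static void main(String[] args) {
            Point p = new Point(3, 4);
            System.out.println(p);                 // Point[x=3, y=4]

            // Stream.toList() (new in Java 16) replaces collect(Collectors.toList()).
            List<Integer> doubled = Stream.of(1, 2, 3).map(n -> n * 2).toList();
            System.out.println(doubled);           // [2, 4, 6]
        }
    }
    ```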
  • Want to know how China is trying to stay on top of the electric-vehicle world without getting into price wars?

    This article looks at how to maintain market dominance without entering the price-cutting battles that leave everyone worse off. China has well-laid plans built on innovation and quality improvement, rather than fighting over prices. How do they intend to win the market without eroding the value of the product?

    Personally, it puzzles me how some companies prefer slashing prices over having a long-term strategic vision. In the end, quality is what lasts; discounts are only temporary!

    Stay close to the field, and let's think about the future!

    https://forbesmiddleeast.com/industry/automotive-and-ev/china-wants-to-maintain-ev-dominance-without-price-warsheres-how
    #ElectricVehicles #China #Innovation #QualityMatters #FutureMobility
    forbesmiddleeast.com
    China Wants To Maintain EV Dominance Without Price Wars—Here's How
  • How's it going, friends?

    Today I wanted to share a topic on modern technologies, specifically in the Amazon Bedrock world. The article explains how you can create custom domain names for AgentCore Runtime agents using CloudFront as a reverse proxy. The benefits are significant: simpler integration, domain names that match your organization, cleaner infrastructure, and easy maintenance when you need updates.

    Honestly, while working on these topics, I realized how much solutions like this simplify the work and keep it better organized.

    Why don't we try this technique together and compare the results?

    https://aws.amazon.com/blogs/machine-learning/set-up-custom-domain-names-for-amazon-bedrock-agentcore-runtime-agents/
    #AmazonWebServices #CloudFront #TechInnovation #Domains #Tech
    aws.amazon.com
    In this post, we show you how to create custom domain names for your Amazon Bedrock AgentCore Runtime agent endpoints using CloudFront as a reverse proxy. This solution provides several key benefits: simplified integration for development teams, cust
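    As a rough illustration of what the caller's side looks like once the reverse proxy is in place, here is a minimal sketch; the domain `agents.example.com`, the `/invocations` path, and the token are placeholders of my own, not values from the AWS post:

    ```java
    import java.net.URI;
    import java.net.http.HttpRequest;

    public class AgentDomainDemo {
        // Builds a request against a hypothetical custom domain fronted by
        // CloudFront. The caller only ever sees the friendly name; CloudFront
        // terminates TLS there and forwards to the AgentCore Runtime origin.
        static HttpRequest buildRequest(String domain, String token, String body) {
            return HttpRequest.newBuilder(URI.create("https://" + domain + "/invocations"))
                    .header("Authorization", "Bearer " + token)       // placeholder credential
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
        }

        public static void main(String[] args) {
            HttpRequest req = buildRequest("agents.example.com", "<token>", "{\"prompt\":\"hello\"}");
            System.out.println(req.uri()); // https://agents.example.com/invocations
        }
    }
    ```

    Sending it with `java.net.http.HttpClient` would then route through the CloudFront distribution rather than the raw endpoint.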
  • Every Sakamoto Days Main Character's Ages, Heights, And Birthdays

    Yuto Suzuki's Sakamoto Days is one of the best shonen manga of the 2020s, and it is only going from strength to strength. Along with great art and creative fight sequences, the series is also known for fantastic characters, be they awesome (and funny) heroes or intimidating (and funny) villains.
    gamerant.com
  • Watch Mr. Snake Get "Split in Half" in This Behind-the-Scenes Demo from The Bad Guys 2

    The topic of cheating in 3D animation will likely always be one of the most fascinating parts of the craft. The notion that, unlike in games, for example, 3D animations can be bent and twisted in all sorts of ways behind the scenes, coupled with the fact that these mesmerizing imperfections usually remain unseen, makes it all the more exciting when distorted models and messed-up rigs do come to light.

    Joining our growing library of such showcases is this impressive under-the-hood demo featuring Mr. Snake from The Bad Guys 2 being "split in half," figuratively speaking, shared by DreamWorks Animator David Guo. To make the character slither convincingly on screen, the creator simply used two rigs and skillfully hid the connection out of sight, giving the illusion of a single unified model. "The key to making him feel like a single, solid character was to make sure the overall 'volume' of his body felt consistent: when one part of the body enters, it pulls on another area of the body," the author notes.

    Earlier, Guo also compared this particular scene from the movie to how it looked during the blocking stage. For a more extensive behind-the-scenes look at some of DreamWorks' films, we highly encourage you to visit the artist's Instagram page.

    And here are some more mesmerizing instances of cheating in 3D animation for your viewing pleasure:

    Here's How You Can Cut Corners & Cheat in 3D Animation
    Breaking 3D Models to Achieve the Perfect Animation Shot
    Disney Animator Shows That It's OK to Break Things to Get a Good Result
    Pixar Animator Shows How 3D Character Mouths Look From Different Angles
    Artists Share Broken 3D Models That Look Good on Camera

    If you would like to learn more about how 3D Animators cut corners, we also recommend checking out our interview with Kevin Temmer, the Lead Animator at Glitch Productions, who told us more about The Amazing Digital Circus' animation workflows, explained what "cheating" in 3D animation is, and discussed how they twist TADC's characters behind the scenes.

    Don't forget to join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
    80.lv
  • Designing Atmospheric WWI Plane Crash Scene In Abandoned German Asylum

    Introduction

    Hi everyone, I'm Leandro Grasso, a 3D Environment Artist from Sicily. My journey into 3D art began after the COVID period, sparked by my passion for landscape photography. Recently, I completed a mentorship with Jeremy Cerisy, during which I significantly improved my environment creation skills. I learned a lot and was able to apply that knowledge to my most recent project. As a freelance artist, I've contributed to a couple of NDA projects, and I'm currently working on an environment for an indie video game scheduled for release later this year.

    Planning

    Under the direction of my mentor, I scouted for real-life locations and imagined how they could be interpreted for a video game environment, rather than starting from a concept. My main goal was to improve my skills in creating destroyed environments, learning how to handle damaged walls, cracked pavements, and abandoned objects. So, I decided to create an old abandoned asylum in Germany and added a crashed World War I aircraft to introduce new challenges and storytelling opportunities. Through this combination, I aimed to study destruction while also suggesting a narrative about what might have happened at the site after the crash. Below, you can see some of the references I used for the asylum and how I planned it.

    Blockout & Composition

    I started with a simple blockout in Unreal Engine 5. While building the blockout, I frequently used the mannequin to ensure proper proportions. Once the basic layout was in place, I placed several cameras to find the best compositions and give the environment the right sense of depth, especially considering the limited space available for movement. After that, I exported the entire blockout to Blender and began dividing it into different pieces to plan out the modules and props.
    I was able to properly plan these elements after creating an advanced blockout, where I also applied some basic textures to see how the environment reacted to different colors and materials.

    Asset Production Workflow

    Once the blockout was complete, I started modeling the modular pieces based on the needs of the environment. I created modules of various sizes, ranging from 1 to 4 meters, for the main elements like simple walls. For more complex parts, such as the stair walls, I took a different approach and created larger, non-repeating modules.

    Speaking of modules, I want to highlight the destroyed wall caused by the aircraft crash. I used a Boolean operation to cut out the damaged section of the wall and the wood. After that, I created individual bricks and placed them along the broken edges to add more realism and detail. Connected to that wall, the modular stairs I created were designed to fit the ideal layout of a game level. To maintain the correct proportions, I used the default stairs in Unreal as a reference and then modeled them in Blender.

    As for the railing, to save time, I first broke it down into main components and created instances of those pieces. Once the entire railing was modeled and the UVs were ready, I made the instances real so I could unwrap all the pieces in one go. After unwrapping, I moved the UV islands randomly to introduce variation during the texturing phase.

    For the vegetation, I used assets from Quixel Megascans. Since the pack didn't include vertical vegetation, I sourced a different ivy asset that contained vertical elements. I removed the leaves and kept only the branches. Then, using a particle system, I added the correct leaves onto the vertical branches, scattering them only at the tips by using a vertex group. Here are the vertical assets I created, with a small detail asset shown in the top left.

    Regarding the assets, I didn't use high-to-low poly baking in this project.
    Instead, I modeled everything in mid-poly to save time while still maintaining good visual quality.

    One of the biggest challenges was modeling the destroyed World War I aircraft. As a junior artist, it was my first time working on a damaged vehicle. I began by modeling the aircraft fully intact and then manually destroyed it piece by piece to achieve a more realistic and intentional look. To guide me through the process, I looked to industry professionals for inspiration. I found some amazing vehicle models by Pavlo Panchenko for S.T.A.L.K.E.R. 2: Heart of Chornobyl on ArtStation. Being able to study his work helped me a lot, not just technically, but also in defining the artistic direction for my own piece.

    Last but not least, I wanted to talk about the broken glass pieces I created. I made them in ZBrush, starting with a random image of broken glass I found on Google. I brought the image into Photoshop, converted it to black and white, and increased the contrast to make the cracks more visible. Then, I imported the image into ZBrush, subdivided a plane several times, and used the image as a mask. I hid the unnecessary parts and deleted them, keeping only the masked glass shapes. After that, I decimated the mesh to reach an acceptable polycount, imported it into Blender, and created the UVs.

    All UVs were unwrapped in Blender. I used Texel Density Checker to set a texel density of 512 px/m with a texture size of 2048. For this project, I used three UV channels: the first for the RGB mask, the second for tileable textures, to maintain high quality during the texturing phase, and the third for additional normal maps where needed. This setup allowed me to reuse the same textures, such as metal, rust, and wood, across both modules and assets. I also used RGB masks for the assets, so the UV islands were specifically packed into that channel.

    Texturing

    For the texturing, I wanted to experiment with a workflow I hadn't tried before.
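    As a quick sanity check on the texel-density figures above (an illustrative helper of my own, not part of the artist's workflow): at 512 px/m, a 2048 px texture spans 4 m of surface, which matches the largest 4 m wall module.

    ```java
    public class TexelDensity {
        // Surface coverage, in meters, of one texture tile at a given texel density.
        static double coverageMeters(int textureSizePx, int texelsPerMeter) {
            return (double) textureSizePx / texelsPerMeter;
        }

        public static void main(String[] args) {
            // A 2048 px texture at 512 px/m covers 4 m of surface.
            System.out.println(coverageMeters(2048, 512)); // 4.0
        }
    }
    ```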
    The entire project was textured using Vertex Painting, RGB Masks, and tileable textures. I didn't use any unique baked textures. Tileable textures allowed me to maintain high quality even on large modules and props. Vertex Painting was used to add variation across surfaces, while RGB Masks provided additional layers of variation, especially on props. I also used decals and normal edge decals to add extra detail and break up the surfaces further.

    Below, you can see my master material setup, which includes Parallax, Vertex Color blending with a HeightLerp node, and RGB Mask blending using a simple Lerp node. All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created specifically for this project. I made these two textures from scratch in Substance 3D Designer.

    I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it. Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. Once that was complete, I rendered the shape in orthographic view with a black background and a white emissive material applied to the metal. I then cleaned up the render in Photoshop and brought it into Substance 3D Designer, where I used it as a mask to create the final stained glass texture. Once my textures were ready, I used a pre-made master material from the Advanced Glass Material Pack, free on FAB, and customized it to suit the needs of my stained glass.

    For the normal edge decals, I improved my workflow compared to my previous project by sculpting four different corner variations. Once the sculpts were complete, I imported them separately and baked them in Substance 3D Painter to avoid halos on the edges of the bakes. This approach allowed me to skip any cleanup in Photoshop.
    I only used Photoshop to combine all the baked corners into a single normal texture, as shown below.

    Last but not least, I'm really happy with how this decal turned out in the project. When I saw it in the main reference, I immediately knew I wanted to include it in my environment. I imported the reference image into Photoshop, straightened it using the Warp tool, and used the Clone Stamp and Content-Aware Fill to fix some damaged areas. Then, I took a screenshot of the wall in Unreal Engine with only the albedo visualization enabled, and used it in Photoshop as the base layer for the mural. I tweaked the blending modes to extract imperfections from the albedo texture and created a custom mask with brush strokes to blend the mural naturally into the wall. This is the result.

    Composition

    When it comes to composition, my background in photography helped me a lot with setting up cameras. I defined a few key shots early on and added more as the environment progressed and came together. Since I was working on an indoor scene, I chose to use a wide-angle lens to capture more of the space, and also included a zoomed-in shot, like the one of the wheelchair, to create a stronger sense of depth. To support the composition, I scattered various details throughout the environment, such as debris, papers, small pieces of glass, and other elements to enhance storytelling and realism.

    Lighting

    For the lighting, I used an add-on for Unreal Engine called Ultra Dynamic Sky to give the scene a natural base lighting pass. After that, I added Rect Lights to emphasize certain areas of the environment, slightly tweaking their indirect lighting bounces. I also placed some ivy in front of the spotlights to fake subtle shadow patterns and add more visual interest.

    For color grading, I used a LUT. I first rendered a single frame and imported it into DaVinci Resolve, where I applied a LUT I liked.
    Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default.

    For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality.

    Conclusion

    And here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete. Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn't sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way.

    One thing I really enjoyed was understanding the connection between level design and environment art; it's fascinating to create a space that not only looks good but also serves gameplay. I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice. Instead, focus on the overall composition and visual impact, especially from the player's point of view.

    My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there's a real risk of dragging the project out much longer than necessary.
    As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating.

    Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for their extra feedback during my spare time. And finally, thank you to everyone who made it this far and showed interest in my project!

    Leandro Grasso, 3D Environment Artist
    Interview conducted by Emma Collins
Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default. For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality. ConclusionAnd here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete.Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn’t sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way. One thing I really enjoyed was understanding the connection between level design and environment art, it's fascinating to create a space that not only looks good but also serves gameplay. I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice, instead, focus on the overall composition and visual impact, especially from the player's point of view.My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there’s a real risk of dragging the project out much longer than necessary. 
As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating.Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for their extra feedback during my spare time. And finally, thank you to everyone who made it this far and showed interest in my project!Leandro Grasso, 3D Environment ArtistInterview conducted by Emma Collins #designing #atmospheric #wwi #plane #crash
    Designing Atmospheric WWI Plane Crash Scene In Abandoned German Asylum
    80.lv
    Introduction

    Hi everyone, I'm Leandro Grasso, a 3D Environment Artist from Sicily. My journey into 3D art began after the COVID period, sparked by my passion for landscape photography. Recently, I completed a mentorship with Jeremy Cerisy, during which I significantly improved my environment creation skills. I learned a lot and was able to apply that knowledge to my most recent project. As a freelance artist, I've contributed to a couple of NDA projects, and I'm currently working on an environment for an indie video game scheduled for release later this year.

    Planning

    Under the direction of my mentor, I scouted for real-life locations and imagined how they could be interpreted for a video game environment, rather than starting from a concept. My main goal was to improve my skills in creating destroyed environments, learning how to handle damaged walls, cracked pavements, and abandoned objects. So, I decided to create an old abandoned asylum in Germany and added a crashed World War I aircraft to introduce new challenges and storytelling opportunities. Through this combination, I aimed to study destruction while also suggesting a narrative about what might have happened at the site after the crash. Below, you can see some of the references I used for the asylum and how I planned it.

    Blockout & Composition

    I started with a simple blockout in Unreal Engine 5. While building the blockout, I frequently used the mannequin to ensure proper proportions. Once the basic layout was in place, I placed several cameras to find the best compositions and give the environment the right sense of depth, especially considering the limited space available for movement. After that, I exported the entire blockout to Blender and began dividing it into different pieces to plan out the modules and props.
I was able to properly plan these elements after creating an advanced blockout, where I also applied some basic textures to see how the environment reacted to different colors and materials.

Asset Production Workflow

Once the blockout was complete, I started modeling the modular pieces based on the needs of the environment. I created modules of various sizes, ranging from 1 to 4 meters, for the main elements like simple walls. For more complex parts, such as the stair walls, I took a different approach and created larger, non-repeating modules.

Speaking of modules, I want to highlight the destroyed wall caused by the aircraft crash. I used a Boolean operation to cut out the damaged section of the wall and the wood. After that, I created individual bricks and placed them along the broken edges to add more realism and detail. Connected to that wall, the modular stairs I created were designed to fit the ideal layout of a game level. To maintain the correct proportions, I used the default stairs in Unreal as a reference and then modeled them in Blender.

As for the railing, to save time, I first broke it down into main components and created instances of those pieces. Once the entire railing was modeled and the UVs were ready, I made the instances real so I could unwrap all the pieces in one go. After unwrapping, I moved the UV islands randomly to introduce variation during the texturing phase.

For the vegetation, I used assets from Quixel Megascans. Since the pack didn't include vertical vegetation, I sourced a different ivy asset that contained vertical elements. I removed the leaves and kept only the branches. Then, using a particle system, I added the correct leaves onto the vertical branches, scattering them only at the tips by using a vertex group. Here are the vertical assets I created, with a small detail asset shown in the top left.

Regarding the assets, I didn't use high-to-low poly baking in this project.
Instead, I modeled everything in mid-poly to save time while still maintaining good visual quality.

One of the biggest challenges was modeling the destroyed World War I aircraft. As a junior artist, it was my first time working on a damaged vehicle. I began by modeling the aircraft fully intact and then manually destroyed it piece by piece to achieve a more realistic and intentional look. To guide me through the process, I looked to industry professionals for inspiration. I found some amazing vehicle models by Pavlo Panchenko for S.T.A.L.K.E.R. 2: Heart of Chornobyl on ArtStation. Being able to study his work helped me a lot, not just technically, but also in defining the artistic direction for my own piece.

Last but not least, I wanted to talk about the broken glass pieces I created. I made them in ZBrush, starting with a random image of broken glass I found on Google. I brought the image into Photoshop, converted it to black and white, and increased the contrast to make the cracks more visible. Then, I imported the image into ZBrush, subdivided a plane several times, and used the image as a mask. I hid the unnecessary parts and deleted them, keeping only the masked glass shapes. After that, I decimated the mesh to reach an acceptable polycount, imported it into Blender, and created the UVs.

All UVs were unwrapped in Blender. I used Texel Density Checker to set a texel density of 512 px/m with a texture size of 2048. For this project, I used three UV channels: the first for the RGB mask, the second for tileable textures to maintain high quality during the texturing phase, and the third for additional normal maps where needed. This setup allowed me to reuse the same textures, such as metal, rust, and wood, across both modules and assets. I also used RGB masks for the assets, so the UV islands were specifically packed into that channel.

Texturing

For the texturing, I wanted to experiment with a workflow I hadn't tried before.
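As a quick aside on the texel density mentioned above: the relationship between texture size, texel density, and surface coverage is simple arithmetic, sketched here in plain Python (this is just the underlying math, not tied to the Texel Density Checker add-on itself):

```python
def coverage_meters(texture_px: int, texel_density_px_per_m: int) -> float:
    """Surface span (in meters) covered by one texture repeat
    at a given texel density."""
    return texture_px / texel_density_px_per_m

# A 2048 px tileable texture at 512 px/m spans 4 m of surface,
# which lines up with the largest 4 m wall modules.
print(coverage_meters(2048, 512))  # 4.0
```

This is why a single tileable set can serve both small props and the biggest modules without visible quality loss.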
The entire project was textured using Vertex Painting, RGB Masks, and tileable textures. I didn't use any unique baked textures. Tileable textures allowed me to maintain high quality even on large modules and props. Vertex Painting was used to add variation across surfaces, while RGB Masks provided additional layers of variation, especially on props. I also used decals and normal edge decals to add extra detail and break up the surfaces further.

Below, you can see my master material setup, which includes Parallax, Vertex Color blending with a HeightLerp node, and RGB Mask blending using a simple Lerp node. All the textures used in my environment were sourced from Quixel Megascans, except for two tileable textures that I created from scratch in Substance 3D Designer specifically for this project.

I'd like to talk about my stained glass and explain how I achieved the final result. First, I took a photo of a real stained glass window from the actual location. Using the Warp tool in Photoshop, I straightened the image and then exported it. Next, I imported it into Blender and began modeling the metal framework that separates the glass pieces. Once that was complete, I rendered the shape in orthographic view with a black background and a white emissive material applied to the metal. I then cleaned up the render in Photoshop and brought it into Substance 3D Designer, where I used it as a mask to create the final stained glass texture. Once my textures were ready, I used a pre-made master material from the Advanced Glass Material Pack, free on FAB, and customized it to suit the needs of my stained glass.

For the normal edge decals, I improved my workflow compared to my previous project by sculpting four different corner variations. Once the sculpts were complete, I imported them separately and baked them in Substance 3D Painter to avoid halos on the edges of the bakes. This approach allowed me to skip any cleanup in Photoshop.
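On the HeightLerp blending mentioned in the master material above: the point of HeightLerp over a plain Lerp is that a height map sharpens the vertex-painted transition, so the second layer creeps in through cracks and raised areas instead of fading in uniformly. A minimal scalar sketch of the idea (my own approximation of the behavior, not Epic's exact node math):

```python
def height_lerp(base: float, layer: float, height: float,
                alpha: float, contrast: float = 4.0) -> float:
    """Blend two values by vertex alpha, sharpened by a height map.

    height and alpha are in [0, 1]; a higher contrast hardens the
    transition edge, so the layer shows up in high-height areas first."""
    t = (alpha - (1.0 - height)) * contrast + 0.5
    t = min(max(t, 0.0), 1.0)  # clamp blend factor to [0, 1]
    return base * (1.0 - t) + layer * t

# Unpainted areas (alpha = 0) keep the base material...
print(height_lerp(0.0, 1.0, 0.8, 0.0))  # 0.0
# ...while fully painted areas show the second layer.
print(height_lerp(0.0, 1.0, 0.8, 1.0))  # 1.0
```

In the material, the same expression runs per pixel, with the height sampled from the tileable texture's height channel and alpha from the painted vertex color.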
I only used Photoshop to combine all the baked corners into a single normal texture, as shown below.

Last but not least, I'm really happy with how this decal turned out in the project. When I saw it in the main reference, I immediately knew I wanted to include it in my environment. I imported the reference image into Photoshop, straightened it using the Warp tool, and used the Clone Stamp and Content-Aware Fill to fix some damaged areas. Then, I took a screenshot of the wall in Unreal Engine with only the albedo visualization enabled and used it in Photoshop as the base layer for the mural. I tweaked the blending modes to extract imperfections from the albedo texture and created a custom mask with brush strokes to blend the mural naturally into the wall. This is the result.

Composition

When it comes to composition, my background in photography helped me a lot with setting up cameras. I defined a few key shots early on and added more as the environment progressed and came together. Since I was working on an indoor scene, I chose to use a wide-angle lens to capture more of the space, and also included a zoomed-in shot, like the one of the wheelchair, to create a stronger sense of depth. To support the composition, I scattered various details throughout the environment, such as debris, papers, small pieces of glass, and other elements to enhance storytelling and realism.

Lighting

For the lighting, I used an add-on for Unreal Engine called Ultra Dynamic Sky to give the scene a natural base lighting pass. After that, I added Rect Lights to emphasize certain areas of the environment, slightly tweaking their indirect lighting bounces. I also placed some ivy in front of the spotlights to fake subtle shadow patterns and add more visual interest.

For color grading, I used a LUT. I first rendered a single frame and imported it into DaVinci Resolve, where I applied a LUT I liked.
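For context on this LUT workflow: Unreal's color grading LUT is a 256×16 strip of sixteen 16×16 tiles, and a neutral strip maps every color to itself, which is why grading applied on top of it in an external tool transfers back cleanly. A hedged sketch of generating such a neutral strip (the tile layout shown here is the common convention; values are floats rather than an actual image file):

```python
def neutral_lut(size: int = 16):
    """Neutral color-grading LUT as a strip of RGB tuples:
    'size' tiles left to right (blue steps per tile),
    red rising across each tile, green rising down the strip."""
    strip = []
    for y in range(size):                # green channel, down the strip
        row = []
        for b in range(size):            # one 16x16 tile per blue value
            for x in range(size):        # red channel, across each tile
                row.append((x / (size - 1), y / (size - 1), b / (size - 1)))
        strip.append(row)
    return strip

lut = neutral_lut()
print(len(lut), len(lut[0]))        # 16 rows, 256 columns
print(lut[0][0], lut[15][255])      # black corner, white corner
```

Because the neutral strip changes nothing, any grade you bake into it in DaVinci Resolve or Photoshop becomes exactly the transform Unreal applies at runtime.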
Once I was happy with the result, I copied the settings to the RGBTable16x1 texture, which starts with a neutral look by default.

For the final render, I exported the project in EXR format using PIZ Multilayer compression, with Spatial Sample Count set to 1 and Temporal Sample Count set to 64. I also used a Warm Up Count of 120 for both the Render and Engine to ensure the exposure was correctly stabilized from the beginning of the render. Additionally, I applied several console variables to improve the final image quality.

Conclusion

And here we are at the end. This project was one of my portfolio pieces developed under the mentorship of Jeremy Cerisy, who helped me a lot with his feedback and really opened my mind to how to approach level and environment creation. It took me about three and a half months to complete.

Even though I aimed to work more efficiently on this environment, I still lost a lot of time at the beginning, mainly because I wasn't sure which workflow to use for texturing, what I needed to create from scratch, and what I could reuse across the scene. In the end, it became a learning-by-doing process, constantly planning and adapting as I added new techniques I was picking up along the way. One thing I really enjoyed was understanding the connection between level design and environment art; it's fascinating to create a space that not only looks good but also serves gameplay. I learned a lot from this project, but one of the most valuable lessons was this: don't waste too much time on tiny details players will never notice; instead, focus on the overall composition and visual impact, especially from the player's point of view.

My advice to anyone starting out in environment art is to stay organized in every phase, especially when it comes to setting personal deadlines. Otherwise, there's a real risk of dragging the project out much longer than necessary.
As a junior artist, I know how tough the industry can feel, especially with all the layoffs in recent months, but don't lose faith. That moment when you get hired will come, as long as you keep putting in the effort and continue creating.

Lastly, I want to thank my mentor, Jeremy Cerisy, for guiding me through this project with his invaluable feedback. A special thanks also goes to Alberto Casu, Alex Gallucci, and Andrea Siviero for the extra feedback they offered in their spare time. And finally, thank you to everyone who made it this far and showed interest in my project!

Leandro Grasso, 3D Environment Artist
Interview conducted by Emma Collins
  • Ready to change the game at TechCrunch Disrupt 2025?

    Folks, you have a golden opportunity to host a Side Event during TechCrunch Disrupt week, running October 25–31. This isn't just a chance to showcase your project or idea; it can also elevate your brand. Please don't miss out, and apply before the deadline!

    Personally, I remember the first time I took part in a similar event: it was a great experience that opened new doors for me. There's a special energy when people who share the same interests come together.

    Let's think it through together: how can you use this opportunity to highlight your ideas and connect with the tech community?

    https://techcrunch.com/2025/08/29/host-an-event-beyond-the-main-event-apply-to-host-a-side-event-at-techcrunch-disrupt-2025/

    #TechCrunch #SideEvent #Innovation #Entrepreneurs #Disrupt2025
    techcrunch.com
    Apply to host a Side Event during TechCrunch Disrupt week (October 25–31) and amplify your brand. Apply before the deadline.