• Hey everyone, I've got some great news to share with you!

    The new Mercury and Mercury Coder models from Inception Labs are now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. These ultra-fast models are built on diffusion-based technology and can generate up to 1,100 tokens per second on NVIDIA H100 GPUs! Take a look at how we can use them for code generation and for building new tools.

    Honestly, I've tried something similar, and we saw how this technology can solve big problems and speed up projects. Watching code being generated right in front of me was an amazing experience!

    Let's think about how to put these advances to work in our businesses and future projects.

    https://aws.amazon.com/blogs/machine-learning/mercury-foundation-models-from-inception-labs-are-now-available-in-amazon-bedrock-marketplace-and-amazon-sagemaker-jumpstart/
    #MercuryModels #InceptionLabs #AI #MachineLearning #AmazonSageMaker
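    To make this concrete, here is a minimal Python sketch of how a request to one of these models might be assembled for the boto3 bedrock-runtime client. The model ID and request body schema below are placeholder assumptions for illustration only; the actual values are listed on the model card in Bedrock Marketplace.

    ```python
    import json

    # Hypothetical model ID; the real identifier comes from the model card
    # in Amazon Bedrock Marketplace.
    MODEL_ID = "inception.mercury-coder"

    def build_invoke_request(prompt: str, max_tokens: int = 512) -> dict:
        """Assemble keyword arguments for bedrock-runtime's invoke_model call.

        The body schema here is an assumption for illustration; check the
        model card for the fields the model actually accepts.
        """
        body = {"prompt": prompt, "max_tokens": max_tokens}
        return {
            "modelId": MODEL_ID,
            "contentType": "application/json",
            "accept": "application/json",
            "body": json.dumps(body),
        }

    # With AWS credentials configured, the request would be sent like this:
    # import boto3
    # client = boto3.client("bedrock-runtime")
    # response = client.invoke_model(**build_invoke_request("Reverse a string in Python."))
    print(build_invoke_request("Reverse a string in Python.")["modelId"])
    ```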
  • Creating a Detailed Helmet Inspired by Fallout Using Substance 3D

    Introduction

    Hi! My name is Pavel Vorobyev, and I'm a 19-year-old 3D Artist specializing in texturing and weapon creation for video games. I've been working in the industry for about 3 years now. During this time, I've had the opportunity to contribute to several exciting projects, including Arma, DayZ, Ratten Reich, and a NEXT-GEN sci-fi shooter (currently under NDA). Here's my ArtStation portfolio.

    My journey into 3D art began in my early teens, around the age of 13 or 14. At some point, I got tired of just playing games and started wondering: "How are they actually made?" That question led me to explore game development. I tried everything – level design, programming, game design – but it was 3D art that truly captured me.

    I'm entirely self-taught. I learned everything from YouTube, tutorials, articles, and official documentation, gathering knowledge piece by piece. Breaking into the commercial side of the industry wasn't easy: there were a lot of failures, no opportunities, and no support. At one point, I even took a job at a metallurgical plant. But I kept pushing forward, kept learning and improving my skills in 3D. Eventually, I got my first industry offer – and that's when my real path began.

    Today, I continue to grow, constantly experimenting with new styles, tools, and techniques. For me, 3D isn't just a profession – it's a form of self-expression and a path toward my dream. My goal is to build a strong career in the game industry and eventually move into cinematic storytelling in the spirit of Love, Death & Robots.

    Astartes YouTube channel

    I also want to inspire younger artists and show how powerful texturing can be as a creative tool. To demonstrate that, I'd love to share my personal project PU – Part 1, which reflects my passion and approach to texture art.

    In this article, I'll be sharing my latest personal project – a semi-realistic sci-fi helmet that I created from scratch, experimenting with both form and style. It's a personal exploration where I aimed to step away from traditional hyperrealism and bring in a touch of artistic expression.

    Concept & Project Idea

    The idea behind this helmet project came from a very specific goal – to design a visually appealing asset with rich texture variation and achieve a balance between stylization and realism. I wanted to create something that looked believable, yet had an artistic flair. Since I couldn't find any fitting concepts online, I started building the design from scratch in my head. I eventually settled on creating a helmet as the main focus of the project. For visual direction, I drew inspiration from post-apocalyptic themes and the gritty aesthetics of Fallout and Warhammer 40,000.

    Software & Tools Used

    For this project, I used Blender, ZBrush, Substance 3D Painter, Marmoset Toolbag 5, Photoshop, and RizomUV. I created the low-poly mesh in Blender and developed the concept and high-poly sculpt in ZBrush. In Substance 3D Painter, I worked on the texture concept and final texturing. Baking and rendering were done in Marmoset Toolbag, and I used Photoshop for some adjustments to the bake. UV unwrapping was handled in RizomUV.

    Modeling & Retopology

    I began the development process by designing the concept based on my earlier references – Fallout and Warhammer 40,000. The initial blockout was done in ZBrush, and from there, I started refining the shapes and details to create something visually engaging and stylistically bold.

    After completing the high-poly model, I moved on to the long and challenging process of retopology. Since I originally came from a weapons-focused background, I applied the knowledge I gained from modeling firearms. I slightly increased the polycount to achieve a cleaner and more appealing look in the final render – reducing visible faceting. My goal was to strike a balance between visual quality and a game-ready asset.

    UV Mapping & Baking

    Next, I moved on to UV mapping. There's nothing too complex about this stage, but since my goal was to create a game-ready asset, I made extensive use of overlaps. I did the UVs in RizomUV. The most important part is to align the UV shells into clean strips and unwrap cylinders properly into straight lines.

    Once the UVs were done, I proceeded to bake the normal and ambient occlusion maps. At this stage, the key is having clean UVs and solid retopology – if those are in place, the bake goes smoothly.

    Texturing: Concept & Workflow

    Now we move on to the most challenging stage – texturing. I aimed to present the project in a hyperrealistic style with a touch of stylization. This turned out to be quite difficult, and I went through many iterations. The most important part of this phase was developing a solid texture concept: rough decals, color combinations, and overall material direction. Without that foundation, it makes no sense to move forward with the texturing. After a long process of trial and error, I finally arrived at results I was satisfied with.

    Then I followed my pipeline:
    1. Working on the base materials
    2. Storytelling and damage
    3. Decals
    4. Spraying, dust, and dirt

    Working on the Base Materials

    When working on the base materials, the main thing is to work with the physical properties and texture. You need to extract the maximum quality from the generators before manual processing. The idea was to create the feeling of an old, heavy helmet that had lived its life and had previously been painted a different color. To make it battered and, in a sense, rotten.

    It is important to pay attention to noise maps – Dirt 3, Dirt 6, White Noise, Flakes – and add the feel of old metal with custom Normal Maps. I also mixed in photo textures for a special charm.

    [Images: Phototexture; Custom Normal Map Texture]

    Storytelling & Damage

    Gradients play an important role in the storytelling stage. They make the object artistically dynamic and beautiful, adding individual shades that bring the helmet to life.

    Everything else is done manually. I found a bunch of old helmets from World War II and took alpha damage shots of them using Photoshop. I drew the damage with alphas, trying to clearly separate the material into old paint, new paint, rust, and bare metal.

    I did the rust using MatFX Rust from the standard Substance 3D Painter library. I drew beautiful patterns using paint in multiply mode – this quickly helped to recreate the rust effect. Metal damage and old paint were more difficult: due to the large number of overlaps in the helmet, I had to carefully draw patterns, minimizing the visibility of overlaps.

    Decals

    I drew the decals carefully, sticking to the concept, which added richness to the texture.

    Spray Paint & Dirt

    For spray paint and dirt, I used a long-established weapon template consisting of dust particles, sand particles, and spray paint. I analyzed references and applied them to crevices and logical places where dirt could accumulate.

    Rendering & Post-Processing

    I rendered in Marmoset Toolbag 5 using a new rendering format that I developed together with the team. The essence of the method is to simulate "RAW frames." Since Marmoset does not have such functions, I worked with the EXR 32-bit format, which significantly improves the quality of the render: the shadows are smooth, without artifacts or broken gradients. I assembled the scene using Quixel Megascans. After rendering, I did post-processing in Photoshop using the Camera Raw filter.

    Conclusion & Advice for Beginners

    That's all. For beginners or those who have been unsuccessful in the industry for a long time, I advise you to follow your dream and not listen to anyone else. Success is a matter of time and skill! Talent is not something you are born with; it is something you develop. Work on yourself and your work, put your heart into it, and you will succeed!

    Pavel Vorobiev, Texture Artist
    Interview conducted by Gloria Levine
    #creating #detailed #helmet #inspired #fallout
    80.lv
  • Tesla Quotes Cybertruck Trade-In Values Above the Purchase Price Due to a Surprise Glitch

    Over the weekend, Tesla began quoting Cybertruck trade-in values higher than the original purchase price, and the cause appears to be a glitch in its system.

    The company normally provides online trade-in estimates for people looking to buy new cars, but some Cybertruck owners were surprised to find the estimates implausibly high, sometimes exceeding what they had originally paid. For example, Tesla offered $79,200 for a 2025 AWD with 18,000 miles, $118,800 for a 2024 tri-motor "Cyberbeast," and in one case quoted the owner of a 2024 truck $11,000 more than the original price.

    These estimates sparked speculation that the system was broken or that the company was recalling some early models, but Tesla confirmed the cause was a system glitch, that the quoted prices will not be honored, and that the usual order fees will be refunded.

    The glitch most likely arose while Tesla was adjusting how trade-in values are calculated, particularly given the differing tax credits between the 2024 and 2025 models, as well as the feature differences between editions such as the Foundation Series. The incident highlighted how complicated valuing a Cybertruck is compared with the new models being offered with 0% financing terms.

    Source
    #tesla #tradein #offers #cybertruck
    www.unlimit-tech.com
  • Hey everyone, I'd like to share something new from the world of AI with you!

    Today we're talking about an important article on how to choose foundation models for generative AI. With the range of options growing, we as organizations face real challenges in making that choice. The article presents a comprehensive evaluation methodology for Amazon Bedrock users, combining theory with hands-on practice. It will help us, as data scientists and ML engineers, pick the model that best fits our needs.

    Personally, I struggled with model selection before I learned the right approach. AI is a world full of opportunities, and we need to be ready to make the most of them.

    AI is right in front of us, and it deserves some careful thought about the choices we make!

    https://aws.amazon.com/blogs/machine-learning/beyond-the-basics-a-comprehensive-foundation-model-selection-framework-for-generative-ai/

    #ذكاء_اصطناعي #AI #MachineLearning #AmazonBed
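    As a toy illustration of the kind of structured evaluation the article describes (not the article's actual framework), here is a small Python sketch that scores candidate models against weighted criteria and ranks them. The criteria, weights, model names, and scores are all invented for the example.

    ```python
    # Toy structured model selection: score candidates against weighted
    # criteria and rank them. All names and numbers below are invented.

    WEIGHTS = {"quality": 0.4, "latency": 0.2, "cost": 0.2, "context_window": 0.2}

    # Hypothetical 0-10 scores for each candidate on each criterion.
    CANDIDATES = {
        "model-a": {"quality": 9, "latency": 5, "cost": 4, "context_window": 8},
        "model-b": {"quality": 7, "latency": 8, "cost": 8, "context_window": 6},
        "model-c": {"quality": 8, "latency": 7, "cost": 6, "context_window": 8},
    }

    def weighted_score(scores: dict, weights: dict) -> float:
        """Weighted sum of one candidate's criterion scores."""
        return sum(weights[c] * scores[c] for c in weights)

    def rank_models(candidates: dict, weights: dict) -> list:
        """Return (name, score) pairs, best first."""
        ranked = [(name, weighted_score(s, weights)) for name, s in candidates.items()]
        return sorted(ranked, key=lambda pair: pair[1], reverse=True)

    for name, score in rank_models(CANDIDATES, WEIGHTS):
        print(f"{name}: {score:.2f}")
    ```

    Changing the weights to reflect what actually matters for the workload (e.g. weighting cost more heavily for a batch pipeline) can flip the ranking, which is exactly why an explicit framework beats gut feeling.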
  • RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer

    Japan is once again building a landmark high-performance computing system — not simply by chasing speed, but by rethinking how technology can best serve the nation’s most urgent scientific needs.
    At the FugakuNEXT International Initiative Launch Ceremony held in Tokyo on Aug. 22, leaders from RIKEN, Japan’s top research institute, announced the start of an international collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the world-renowned supercomputer, Fugaku.
    Awarded early in the process, the contract enables the partners to work side by side in shaping the system’s architecture to address Japan’s most critical research priorities — from earth systems modeling and disaster resilience to drug discovery and advanced manufacturing.
    More than an upgrade, the effort will highlight Japan’s embrace of modern AI and showcase Japanese innovations that can be harnessed by researchers and enterprises across the globe.
    The ceremony featured remarks from the initiative’s leaders, RIKEN President Makoto Gonokami and Satoshi Matsuoka, director of the RIKEN Center for Computational Science and one of Japan’s most respected high-performance computing architects.
    Fujitsu Chief Technology Officer Vivek Mahajan attended, emphasizing the company’s role in advancing Japan’s computing capabilities.
    Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, attended in person as well to discuss the collaborative design approach and how the resulting platform will serve as a foundation for innovation well into the next decade.
    Momentum has been building. When NVIDIA founder and CEO Jensen Huang touched down in Tokyo last year, he called on Japan to seize the moment — to put NVIDIA’s latest technologies to work building its own AI, on its own soil, with its own infrastructure.
    FugakuNEXT answers that call, drawing on NVIDIA’s whole software stack —  from NVIDIA CUDA-X libraries such as NVIDIA cuQuantum for quantum simulation, RAPIDS for data science, NVIDIA TensorRT for high-performance inference and NVIDIA NeMo for large language model development, to other domain-specific software development kits tailored for science and industry.
    Innovations pioneered on FugakuNEXT could become blueprints for the world.
    What’s Inside
    FugakuNEXT will be a hybrid AI-HPC system, combining simulation and AI workloads.
    It will feature FUJITSU-MONAKA-X CPUs, which can be paired with NVIDIA technologies using NVLink Fusion, new silicon enabling high-bandwidth connections between Fujitsu’s CPUs and NVIDIA’s architecture.
    The system will be built for speed, scale and efficiency.
    What It Will Do
    FugakuNEXT will support a wide range of applications — such as automating hypothesis generation, code creation and experiment simulation.

    Scientific research: Accelerating simulations with surrogate models and physics-informed neural networks.
    Manufacturing: Using AI to learn from simulations to generate efficient and aesthetically pleasing designs faster than ever before.
    Earth systems modeling: Aiding disaster preparedness and prediction for earthquakes and severe weather, and more.

    RIKEN, Fujitsu and NVIDIA will collaborate on software developments, including tools for mixed-precision computing, continuous benchmarking, and performance optimization.
    FugakuNEXT isn’t just a technical upgrade — it’s a strategic investment in Japan’s future.
    Backed by Japan’s MEXT (Ministry of Education, Culture, Sports, Science and Technology), it will serve universities, government agencies, and industry partners nationwide.
    It marks the start of a new era in Japanese supercomputing — one built on sovereign infrastructure, global collaboration, and a commitment to scientific leadership.
    Image courtesy of RIKEN
    #riken #japans #leading #science #institute
    blogs.nvidia.com
  • Payments in the Americas

    The Americas, led by the United States, Canada, and Brazil, now account for more than $100 billion USD in annual video game revenue. This is one of the most valuable and competitive regions in global gaming, where success depends not just on great content, but on delivering seamless, localized checkout experiences. As players across North and South America demand more control, flexibility, and speed when making purchases, the payment methods developers offer can directly impact revenue, retention, and market expansion. Meeting gamers wherever they want to pay is no longer optional.

    United States: Faster and installment options gain steam
    In the U.S., traditional credit and debit card dominance is waning. Players are adopting faster, bank-linked payment options and installment-based methods that offer both security and flexibility. Pay by Bank has rapidly grown in popularity, especially among mobile and younger users who prioritize speed and security. As of early 2025, nearly 9 million Americans use Pay by Bank each month. With major retailers backing the method, transaction volume is expected to surpass $100 billion this year. In parallel, Affirm has emerged as a top Buy Now, Pay Later (BNPL) provider in the U.S. and beyond. Affirm has processed over $75 billion in transactions over the past five years and now serves nearly 17 million active users. By enabling purchases in manageable installments, BNPL increases accessibility and boosts average transaction size, especially for higher-value bundles, subscriptions, and digital add-ons.

    Canada: Flexibility drives adoption
    The Canadian market shares many consumer behaviors with its U.S. counterpart, but it has its own unique payment dynamics. Canadian gamers, particularly younger ones, are showing strong demand for installment-based options that give them more control over spending. Affirm’s footprint in Canada is expanding fast. As of February 2025, Affirm is integrated at checkout across more than 279,000 retailers. This early adoption wave shows how developers can gain an edge by offering localized, flexible payments tuned to consumer expectations.

    Brazil: Mobile-first, subscription-ready
    Brazil stands out as one of the most mobile-driven gaming economies globally. Recurring payments and digital wallets are the default here, not the exception. Mercado Pago is one of the most widely adopted payment platforms in Brazil. As of Q1 2025, it reported 64 million monthly active users—a 31% year-over-year increase. For game developers, this platform isn’t just another option—it’s critical infrastructure. Its recurring billing features make it especially well-suited to live service games, battle passes, and subscription models. By adding support for Mercado Pago, Xsolla helps developers enable long-term monetization and retention while removing friction for a massive mobile audience that expects fast, familiar, and reliable payment flows.

    One infrastructure, multiple markets
    These regional payment trends aren’t just interesting; they’re actionable. Developers who integrate local methods can significantly increase conversion rates, reduce purchase drop-offs, and build trust in highly competitive markets. Platforms like Xsolla Pay Station now provide integrated support for these local options: Pay by Bank, coming soon in the U.S., Affirm in both the U.S. and (soon) Canada, and recurring billing via Mercado Pago in Brazil. With a single implementation, developers can reach more players using payment tools they already trust.

    Why it matters
    The stakes are high. Without support for local payment methods, developers risk underperforming in key markets. A generic global checkout can’t match the expectations of users who are used to specific transaction styles, like fast confirmation via Pay by Bank or the flexibility of BNPL. And when users don’t see familiar, trusted options, they abandon their carts. That’s not just a missed opportunity, it’s a loss of lifetime value. Retention and loyalty begin at the first purchase. Providing secure, localized checkout experiences lays the foundation for long-term engagement.

    What developers should do now
    To grow in the Americas, it’s no longer enough to simply localize game content. Payment localization is equally vital. Developers expanding into the U.S., Canada, or Brazil should evaluate whether their current checkout options match how players in those countries actually prefer to pay. Supporting Pay by Bank and Affirm in the U.S. opens the door to millions of gamers who want speed and flexibility. Adding Affirm in Canada addresses a growing demand among younger users. Enabling recurring billing through Mercado Pago in Brazil unlocks subscription revenue in a market that’s mobile-first by default. As the competitive landscape shifts, aligning payment infrastructure with regional preferences isn’t just smart, it’s essential. Game developers who do it well will not only unlock more revenue but also build stronger, more loyal player communities in the process. Read the original article here
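The "one infrastructure, multiple markets" idea reduces to a lookup from a player's region to the payment methods surfaced at checkout. A rough sketch, assuming a simple country-code mapping (this is illustrative and hypothetical, not Xsolla Pay Station's actual API; the method names echo the article):

```python
# Illustrative region-to-payment-method lookup for a localized checkout.
# The mapping and function are a sketch, not a real payment platform's API;
# the method names come from the article above.

PAYMENT_METHODS = {
    "US": ["card", "pay_by_bank", "affirm_bnpl"],
    "CA": ["card", "affirm_bnpl"],
    "BR": ["card", "mercado_pago_recurring"],
}

def checkout_options(country_code: str) -> list[str]:
    """Return localized payment options, falling back to cards only."""
    return PAYMENT_METHODS.get(country_code.upper(), ["card"])

print(checkout_options("br"))  # Mercado Pago recurring billing surfaces for Brazil
```

A single lookup like this is what lets one integration present different, locally trusted options per market.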
    #payments #americas
    80.lv
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
    The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and if there isn’t rethinking on how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

    Training the modern large language models (LLMs) behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
    These systems rely on distributed computing, splitting massive calculations across nodes, where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce” and “all-to-all”.
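The "all-reduce" collective leaves every rank holding the element-wise sum of all ranks' inputs. A single-process sketch of the semantics (real systems such as NCCL or MPI perform this over the network with ring or tree schedules; this only shows the result every rank must end up with):

```python
# Minimal simulation of the "all-reduce" collective: every rank ends up
# with the element-wise sum of all ranks' vectors. Real frameworks (NCCL,
# MPI) do this over the network; this is a local sketch of the semantics.

def all_reduce(rank_vectors: list[list[float]]) -> list[list[float]]:
    reduced = [sum(vals) for vals in zip(*rank_vectors)]  # element-wise sum
    return [list(reduced) for _ in rank_vectors]          # every rank gets a copy

# Four "ranks", each holding a slice of gradients.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(all_reduce(grads)[0])  # every rank sees [16.0, 20.0]
```

During training this merge happens once per step, which is why its latency sits directly on the critical path.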
    These processes are sensitive to the speed and responsiveness of the network — what engineers call latency and bandwidth — and any slowdown there stalls training.
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Tolerating jitter and inconsistent delivery was once acceptable; now it’s a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years.
    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gbps connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s largest AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain with an aggregate bandwidth of 130 TB/s — enabling clusters to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU.
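    A quick back-of-the-envelope check of the aggregate figure above (assuming the 130 TB/s is shared evenly across the 72-GPU domain):

```python
# Per-GPU share of the NVL72 NVLink domain's aggregate bandwidth.
aggregate_tb_s = 130
gpus = 72
per_gpu = aggregate_tb_s / gpus
print(f"~{per_gpu:.2f} TB/s of NVLink bandwidth per GPU")  # prints ~1.81 TB/s
```

    That works out to roughly 1.8 TB/s per GPU, consistent with fifth-generation NVLink per-GPU rates.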
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.
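    The port and bandwidth figures above are self-consistent, as a quick sanity check shows (assumption: total bandwidth = port count × per-port rate, counted in one direction):

```python
# Sanity-check the co-packaged photonics switch figures: 128-512 ports at 800 Gb/s.
for ports in (128, 512):
    total_tb_s = ports * 800 / 1000  # 800 Gb/s per port, converted to Tb/s
    print(f"{ports} ports -> {total_tb_s:.1f} Tb/s")
# prints: 128 ports -> 102.4 Tb/s (~100 Tb/s), 512 ports -> 409.6 Tb/s (~400 Tb/s)
```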

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows extensive customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
    #gearing #gigawatt #data #center #age
    Gearing Up for the Gigawatt Data Center Age
    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction. The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and without a rethinking of how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get them right, and gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed. This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer
    Training the modern large language models (LLMs) behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation. These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload.
    In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce” (which combines data from all nodes and redistributes the result) and “all-to-all” (where each node exchanges data with every other node). These processes are sensitive to the speed and responsiveness of the network — what engineers call latency (delay) and bandwidth (data capacity) — and slowdowns there cause stalls in training.
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Tolerating jitter and inconsistent delivery was once acceptable. Now, it’s a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
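    The two collectives described above can be simulated in a few lines of plain Python — a toy, in-memory sketch with no real network, where each "node" is just a list of numbers:

```python
# Toy simulation of the collective operations used in distributed training.

def all_reduce(node_data):
    """Every node ends up with the element-wise sum of all nodes' data."""
    summed = [sum(vals) for vals in zip(*node_data)]  # combine across nodes
    return [summed[:] for _ in node_data]             # redistribute the result

def all_to_all(node_data):
    """Node i's j-th chunk goes to node j, so the result is a transpose."""
    return [list(chunks) for chunks in zip(*node_data)]

grads = [[1, 2], [3, 4], [5, 6]]  # 3 nodes, each holding a 2-element gradient
print(all_reduce(grads))          # [[9, 12], [9, 12], [9, 12]]
print(all_to_all([[0, 1, 2], [3, 4, 5], [6, 7, 8]]))  # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```

    In a real cluster, libraries such as NCCL perform these exchanges over the fabric, which is why collective performance is bounded by network latency and bandwidth.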
    Gearing Up for the Gigawatt Data Center Age
    blogs.nvidia.com
  • Hey everyone, I've got great news for all the real estate agencies out there!

    Lone Wolf Technologies has launched its new "Foundation Dashboard," which is set to change the game in the world of Business Intelligence. The new dashboard gives you real-time insights, helps you track your goals, and offers a complete view of business performance from start to finish. How could you not say "wow" to features like these?

    In my experience, digital tools have always opened new doors for me, and once I embraced new technologies, I started reaching my goals faster.

    Stay up to date with the latest releases, and let technology light your way!

    https://www.globenewswire.com/news-release/2025/08/21/3137205/0/en/Lone-Wolf-Technologies-Launches-Foundation-Dashboard-Transforming-Business-Intelligence-for-Real-Estate-Agents.html

    #Technology #RealEstate #BusinessIntelligence #Innovation #LoneWolf
    www.globenewswire.com
    New homepage experience delivers real-time insights, goal tracking, and a unified view of business performance from lead to close.
  • New Lightweight AI Model for Project G-Assist Brings Support for 6GB NVIDIA GeForce RTX and RTX PRO GPUs

    At Gamescom, NVIDIA is releasing its first major update to Project G‑Assist — an experimental on-device AI assistant that allows users to tune their NVIDIA RTX systems with voice and text commands.
    The update brings a new AI model that uses 40% less VRAM, improves tool-calling intelligence and extends G-Assist support to all RTX GPUs with 6GB or more VRAM, including laptops. Plus, a new G-Assist Plug-In Hub enables users to easily discover and download plug-ins to enable more G-Assist features.
    NVIDIA also announced a new path-traced particle system, coming in September to the NVIDIA RTX Remix modding platform, that brings fully simulated physics, dynamic shadows and realistic reflections to visual effects.
    In addition, NVIDIA named the winners of the NVIDIA and ModDB RTX Remix Mod Contest. Check out the winners and finalist RTX mods in the RTX Remix GeForce article.
    G-Assist Gets Smarter, Expands to More RTX PCs
    The modern PC is a powerhouse, but unlocking its full potential means navigating a complex maze of settings across system software, GPU and peripheral utilities, control panels and more.
    Project G-Assist is a free, on-device AI assistant built to cut through that complexity. It acts as a central command center, providing easy access to functions previously buried in menus through voice or text commands. Users can ask the assistant to:

    Run diagnostics to optimize game performance
    Display or chart frame rates, latency and GPU temperatures
    Adjust GPU or even peripheral settings, such as keyboard lighting

    The G-Assist update also introduces a new, significantly more efficient AI model that’s faster and uses 40% less memory while maintaining response accuracy. The more efficient model means that G-Assist can now run on all RTX GPUs with 6GB or more VRAM, including laptops.
    Getting started is simple: install the NVIDIA app and the latest Game Ready Driver, available Aug. 19, then download the G-Assist update from the app’s home screen and press Alt+G to activate.
    Another G-Assist update coming in September will introduce support for laptop-specific commands for features like NVIDIA BatteryBoost and Battery OPS.
    Introducing the G-Assist Plug-In Hub With Mod.io
    NVIDIA is collaborating with mod.io to launch the G-Assist Plug-In Hub, which allows users to easily access G-Assist plug-ins, as well as discover and download community-created ones.
    With the latest update, users can directly ask G-Assist what new plug-ins are available in the hub and install them using natural language, thanks to a mod.io plug-in.
    The recent G-Assist Plug-In Hackathon showcased the incredible creativity of the G-Assist community. Here’s a sneak peek of what they came up with:

    Some finalists include:

    Omniplay — allows gamers to use G-Assist to research lore from online wikis or take notes in real time while gaming
    Launchpad — lets gamers set, launch and toggle custom app groups on the fly to boost productivity
    Flux NIM Microservice for G-Assist — allows gamers to easily generate AI images from within G-Assist, using on-device NVIDIA NIM microservices

    The winners of the hackathon will be announced on Wednesday, Aug. 20.
    Building custom plug-ins is simple. They’re based on a foundation of JSON and Python scripts — and the Project G-Assist Plug-In Builder helps further simplify development by enabling users to code plug-ins with natural language.
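    As a hedged illustration of that JSON-plus-Python structure, here is a minimal sketch of a plug-in's two pieces: a manifest declaring a command, and a handler that services it. The field names, handler signature and dispatch loop below are illustrative assumptions, not the actual G-Assist plug-in schema.

```python
# Hypothetical plug-in sketch: a JSON manifest plus a Python command handler.
import json

# Assumed manifest shape (illustrative only).
manifest = json.loads("""
{
  "name": "hello-plugin",
  "description": "Replies with a greeting",
  "functions": [{"name": "say_hello", "description": "Greet the user"}]
}
""")

def say_hello(params: dict) -> dict:
    # A real handler would read parameters and return a structured response.
    user = params.get("user", "gamer")
    return {"success": True, "message": f"Hello, {user}!"}

# Dispatch a command the way a host runtime might:
handlers = {"say_hello": say_hello}
fn = manifest["functions"][0]["name"]
print(handlers[fn]({"user": "Alex"}))  # {'success': True, 'message': 'Hello, Alex!'}
```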
    Mod It Like It’s Hot With RTX Remix 
    Classic PC games remain beloved for their unforgettable stories, characters and gameplay — but their dated graphics can be a barrier for new and longtime players.
    NVIDIA RTX Remix enables modders to revitalize these timeless titles with the latest NVIDIA gaming technologies — bridging nostalgic gameplay with modern visuals.
    Since the platform’s release, the RTX Remix modding community has grown with over 350 active projects and over 100 mods released. The mods span a catalog of beloved games like Half-Life 2, Need for Speed: Underground, Portal 2 and Deus Ex — and have amassed over 2 million downloads.

    In May, NVIDIA invited modders to participate in the NVIDIA and ModDB RTX Remix Mod Contest for a chance to win cash prizes. At Gamescom, NVIDIA announced the winners:

    Best Overall RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams
    Best Use of RTX in a Mod Winner: Painkiller RTX Remix, by Binq_Adams

    Runner-Up: Vampire: The Masquerade – Bloodlines – RTX Remaster, by Safemilk

    Most Complete RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams

    Runner-Up: I-Ninja Remixed, by g.i.george333

    Community Choice RTX Mod Winner: Call of Duty 2 RTX Remix of Carentan, by tadpole3159

    These modders tapped RTX Remix and generative AI to bring their creations to life — from enhancing textures to quickly creating images and 3D assets.
    For example, the Merry Pencil Studios modder team used a workflow that seamlessly connected RTX Remix and ComfyUI, allowing them to simply select textures in the RTX Remix viewport and, with a single click in ComfyUI, restore them.
    The results are stunning, with each texture meticulously recreated with physically based materials layered with grime and rust. With a fully path-traced lighting system, the game’s gothic horror atmosphere has never felt more immersive to play through.
    All mods submitted to the RTX Remix Modding Contest, as well as 100 more Remix mods, are available to download from ModDB. For a sneak peek at RTX Remix projects under active development, check out the RTX Remix Showcase Discord server.
    Another RTX Remix update coming in September will allow modders to create new particles that match the look of those found in modern titles. This opens the door for over 165 RTX Remix-compatible games to have particles for the first time.
    To get started creating RTX mods, download NVIDIA RTX Remix from the home screen of the NVIDIA app. Read the RTX Remix article to learn more about the contest and winners.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Join NVIDIA’s Discord server to connect with community developers and AI enthusiasts for discussions on what’s possible with RTX AI.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    New Lightweight AI Model for Project G-Assist Brings Support for 6GB NVIDIA GeForce RTX and RTX PRO GPUs
    blogs.nvidia.com
  • Magic: The Gathering May Have to Bite The Bullet It's Long Dodged in November 2025

    Magic: The Gathering has undergone a lot of changes in the last year, starting with the addition of a semi-perpetual core set for Standard with Foundations, and then changes to the legality of Universes Beyond sets. Since UB sets became Standard-legal, the release schedule has shifted: there have already been four sets this year with two more on the way, and 2026 looks set to keep the same approach based on WotC's comments. MTG's Final Fantasy set sold $200 million worth of products in a single day, which shows that these UB sets are here to stay. However, not all their cards should get the same treatment.
    gamerant.com
  • Embracer will deploy 'targeted cost initiatives' and AI tech to unlock more value

    Chris Kerr, Senior Editor, News, GameDeveloper.com · August 14, 2025 · 3 Min Read
    Logo via Embracer Group / Kingdom Come Deliverance screenshot via Warhorse Studios

    Embracer Group—which is in the process of splitting into three standalone companies following an era of mass layoffs, project cancellations, and divestments—has confirmed it will explore "targeted cost initiatives" and look to streamline processes with the help of AI technology during what CEO Phil Rogers described as a "transition year" for the Swedish conglomerate.

    Addressing investors in the company's latest fiscal report, Rogers said Embracer's performance during the first quarter of the current financial year was "quiet" and that the company must now focus on "operational and strategic execution" to position itself for long-term growth.

    Consolidated net sales decreased by 31 percent to SEK 3,355 million ($350.5 million) during Q1. Breaking that total down by operating segment, PC/Console Games decreased by 38 percent to SEK 1,641 million; Mobile Games decreased by 63 percent to SEK 520 million; and Entertainment & Services increased by 41 percent to SEK 1,193 million.

    "As we move forward, we are taking a conservative approach for this current year, reflecting a measured view on the timing and performance of our PC/Console release schedule in addition to potential continued softness in our catalog following Q1," said Rogers, who officially stepped up as CEO on August 1, 2025, to allow outgoing chief exec Lars Wingefors to take on the mantle of executive chair.

    "This year is a transition period as we lay the foundations of Fellowship Entertainment and focus on building a business led by key IP and empowered teams, in a structure enabling focus and operational discipline. It is paramount that we concentrate on the quality and long-term value of our releases rather than chasing short-term gains."

    What does that mean for Embracer employees? According to Rogers, the company will implement "targeted cost initiatives" relating to underperforming businesses. Those initiatives could potentially result in more divestments. Game Developer has reached out to Embracer to clarify whether those plans could include layoffs.

    Embracer CEO believes AI will become an "increasingly supportive force"

    Rogers claims Embracer is facing a "pivotal moment" and must double down on its biggest franchises. He explained the company has increased capital allocation to its core IPs, which include The Lord of the Rings, Tomb Raider, Kingdom Come Deliverance, Metro, Dead Island, Darksiders, and Remnant. He believes those franchises represent "one of the most exciting IP portfolios in the industry" but said Embracer must now "sharpen" its focus. The company currently has nine triple-A titles slated for release, excluding projects being financed by external partners.

    "As previously noted, one or a couple of these games will most likely slip into FY 2028/29, but we do see a clear increase in release cadence as compared to our average of just over 1 AAA game per year in the past five years," said Rogers, discussing that release slate. "We expect the increased release pipeline in combination with lower fixed costs will notably improve free cash flow from FY 2026/27 onwards."

    As Embracer prepares to evolve into Fellowship Entertainment, Rogers said the company must significantly rewire its business to create a "powerhouse unit" within its PC and console division. According to Rogers, leveraging AI technologies will be an integral part of that process. His predecessor had already suggested that ignoring AI tools could lead to the company being "outrun" by its competitors. "This comes through smarter collaboration, increased streamlining, shared services and with AI as an increasingly supportive force," Rogers continued. "These factors will be key to unlocking value and expanding margins."

    As the table below shows, Embracer has already significantly reduced its workforce following a number of layoffs and key divestments. Its entire workforce totaled 7,228 people (including 5,452 game developers) as of June 2025. That's a notable decrease from the 13,712 workers (and 10,713 game developers) it employed at the end of June 2024. The company currently has 116 video games in development—down on the 127 projects it had in the pipeline this time last year, but actually up on the 108 titles it showcased in March.
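    As a quick sanity check on the figures above (a back-of-the-envelope sketch, not something stated in the article): the 31 percent decline implies prior-year Q1 net sales of roughly SEK 4,862 million, and the two headcount figures correspond to a year-over-year reduction of about 47 percent.

```python
# Back-of-the-envelope checks on the reported Embracer figures.
q1_sales_sek_m = 3355          # Q1 net sales, SEK millions
decline = 0.31                 # reported 31% year-over-year decrease

# Implied prior-year Q1 sales: current = prior * (1 - decline)
prior_sales = q1_sales_sek_m / (1 - decline)
print(f"Implied prior-year Q1 sales: SEK {prior_sales:,.0f} million")

workforce_2025 = 7228          # June 2025 headcount
workforce_2024 = 13712         # June 2024 headcount
cut = 1 - workforce_2025 / workforce_2024
print(f"Workforce reduction: {cut:.1%}")
```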
    www.gamedeveloper.com
    Chris Kerr, Senior Editor, News, GameDeveloper.comAugust 14, 20253 Min ReadLogo via Embracer Group / Kingdom Come Deliverance screenshot via Warhorse StudiosEmbracer Group—which is in the process of splitting into three standalone companies following an era of mass layoffs, project cancellations, and divestments—has confirmed it will explore "targeted cost initiatives" and look to streamline processes with the help of AI technology during what CEO Phil Rogers described as a "transition year" for the Swedish conglomerate.Addressing investors in the company's latest fiscal report, Rogers said Embracer's performance during the first quarter of the current financial year was "quiet" and said the company must now focus on "operational and strategic execution" to position itself for long-term growth.Consolidated net sales decreased by 31 percent to SEK 3,355 million ($350.5 million) during Q1. Breaking that total down by operating segment, PC/Console Games decreased by 38 percent to SEK 1,641 million; Mobile Games decreased by 63 percent to SEK 520 million; and Entertainment & Services increased by 41 percent to SEK 1,193 million."As we move forward, we are taking a conservative approach for this current year, reflecting a measured view on the timing and performance of our PC/Console release schedule in addition to potential continued softness in our catalog following Q1," said Rogers, who officially stepped up as CEO on August 1, 2025, to allow outgoing chief exec Lars Wingefors to take on the mantle of executive chair. Related:"This year is a transition period as we lay the foundations of Fellowship Entertainment and focus on building a business led by key IP and empowered teams, in a structure enabling focus and operational discipline. It is paramount that we concentrate on the quality and long-term value of our releases rather than chasing short-term gains."What does that mean for Embracer employees? 
According to Rogers, the company will implement "targeted cost initiatives" relating to underperforming business. Those initiatives could potentially result in more divestments. Game Developer has reached out to Embracer to clarify whether those plans could potentially include layoffs.Embracer CEO believes AI will become an "increasingly supportive force"Rogers claims Embracer is facing a "pivotal moment" and must double down on its biggest franchises. He explained the company has increased capital allocation to its core IPs, which include The Lord of the Rings, Tomb Raider, Kingdom Come Deliverance, Metro, Dead Island, Darksiders, and Remnant. He believes those franchises represent "one of the most exciting IP portfolios in the industry" but said Embracer must now "sharpen" its focus. The company currently has nine triple-A titles slated for release, excluding projects being financed by external partners. Related:"As previously noted, one or a couple of these games will most likely slip into FY 2028/29, but we do see a clear increase in release cadence as compared to our average of just over 1 AAA game per year in the past five years," said Rogers, discussing that release slate. "We expect the increased released pipeline in combination with lower fixed costs will notably improve free cashflow FY 2026/27 onwards."As Embracer prepares to evolve into Fellowship Entertainment, Rogers said the company must significantly rewire its business to create a "powerhouse unit" within its PC and console division. According to Rogers, leveraging AI technologies will be  integral part of that process. His predecessor had already suggested that ignoring AI tools could lead to it being "outrun" by its competitors. "This comes through smarter collaboration, increased streamlining, shared services and with AI as an increasingly supportive force," Rogers continued. "These factors will be key to unlocking value and expanding margins." 
Embracer has already significantly reduced its workforce following a number of layoffs and key divestments.

Its entire workforce totaled 7,228 people (including 5,452 game developers) as of June 2025. That is a notable decrease on the 13,712 workers (and 10,713 game developers) it employed at the end of June 2024. The company currently has 116 video games in development, down on the 127 projects it had in the pipeline this time last year, but up on the 108 titles it showcased in March.
  • Setting Up an Explorable Desert Environment With Emperia's Creator Tools Plug-in

Introduction

Hi, I'm Berkay Dobrucali, Co-Founder of JustBStudios. I also work full-time as the Unity Team Lead at Hivemind Studios. Since my last interview with 80 Level, we won a Unity Award in the Best Artistic Content category with our Stylized Nature pack on behalf of Hivemind. These days, we've been focusing on both Unreal and Unity environments, working on some exciting new projects. I'm also continuing to bridge the gap between platforms by converting Unreal content into Unity.
Becoming an Environment Artist

Since I studied interior architecture, I started modeling houses and furniture using 3ds Max during my early university years. After graduation, I worked professionally in the field for a while. Later on, I joined an asset creator studio, where I began learning Unity. Over time, I found myself becoming more interested in environment design, lighting, and optimization rather than just modeling.

The biggest challenge was starting from scratch and having to learn so many different aspects at once. But as time passed, I improved steadily and began to develop a broader understanding across various areas of environment art.

I initially started with 3ds Max and became quite proficient in it. Later on, I added Photoshop to my workflow. As I specialized in both Unreal and Unity, I also found myself using Photoshop and Premiere Pro occasionally for tasks like texture editing and video editing when needed.

In the beginning, I benefited a lot from Udemy and free YouTube courses. Watching a lot of tutorials and repeating the process over and over was key to improving my skills.

Environment art has always fascinated me because it has the power to directly convey atmosphere to the player. As an interior architect, I'm especially drawn to how a space can contribute to storytelling and immerse the player in a narrative. I often find inspiration from modern games and artistic platforms like ArtStation. Additionally, abandoned real-world structures and nature serve as important sources of inspiration for me.
Art-to-Experience

I was involved in the project as the Art Director. The project was completed by three people. One of my teammates, who is also my Co-Founder, Begüm Dobrucali, was responsible for modeling and texturing the assets in the pack. She first created high-poly meshes in Blender, then baked them down to low-poly versions. For texturing, she used Substance 3D Painter.

Once the asset and texture creation were complete, we asked another team member, Sude Kömür, to create the level design using the assets, based on the references we gathered. We stayed in close communication throughout this process and managed to create a compelling level. Afterward, we moved on to lighting. Since it was a desert environment, we used warm-toned directional lighting to illuminate the scene.

The main challenge was placing the level elements correctly and ensuring everything was scaled properly to human proportions. Otherwise, the pack would have looked inconsistent.

Using Emperia's Creator Tools plug-in was a smooth and enjoyable experience for me. The plug-in came with clear and detailed guidelines, which allowed me to follow each step carefully and complete the necessary tasks with ease. This comprehensive documentation made the entire process much faster and more efficient on my end. Overall, thanks to its user-friendly interface and logical workflow, working with the plug-in was straightforward and significantly simplified my work.

Thoughts on the Digital Art Industry

One of the major issues in the industry is how quickly content is consumed and how often the artist's effort goes unrecognized. Additionally, the uncontrolled use of AI-generated content poses a threat to many artists. I believe we need more respect and recognition for the craft and the people behind it.

Advice for Beginners

Beginners should focus on building a solid foundation and maintaining patience throughout the learning process. They should pay special attention to asset creation, level design, and lighting.
I recommend starting with free resources on YouTube, and once they reach a certain level, diving deeper into paid courses for more advanced learning. Additionally, working on small but achievable projects is very important for growth.

JustBStudios Team

Interview conducted by Theodore McKenzie