• Honestly, there is so much that makes us proud of our country! A new article talks about how Algeria today is moving decisively toward Africa. The President of the Republic said clearly that we have big ambitions and an unshakable belief in our own capabilities.

    This reflects how Algeria can be a key player on the African stage, and how we can work with other countries to build a better future. Personally, I am very optimistic about this direction, and I am fully confident that, with the efforts being made, we will achieve great things.

    We should keep up with the latest developments, because every step we take toward Africa will be a step toward hope and development.

    https://www.elbilad.net/videos/رئيس-الجمهورية-الجزائر-اليوم-متوجه-بقوة-وصراحة-ويقين-نحو-افريقيا
    #Algeria #Africa #Leadership #Future #Collaboration
    President of the Republic: Algeria today is moving toward Africa with strength, candor and certainty
    www.elbilad.net
  • How are you doing? Today I want to share a topic that matters to everyone working in construction and development.

    The article discusses the Design-Build delivery method and how this model brings major benefits, such as reduced risk, greater control over the project and, of course, faster timelines. The core idea is collaboration among all parties, instead of traditional approaches that rely on separate design phases and fixed bidding.

    Personally, my experience with construction teams has taught me the value of communication and of keeping the door open to new solutions. The more we collaborated, the better the results.

    Do you see the projects you have worked on the same way? Read the article and let it refresh your thinking.

    https://www.archdaily.com/1033390/exploring-the-advantages-of-the-design-build-method-in-real-estate-development

    #Property #DesignBuild #Collaboration #RealEstate #Innovation
    www.archdaily.com
    The Design-Build model is an increasingly attractive project delivery method, offering benefits such as enhanced control, reduced risks, cost efficiencies, and quicker completion times. Central to this approach is teamwork and collaboration,
  • Hey everyone, anyone here into artificial intelligence?

    In recent news, an OpenAI co-founder called on all AI labs to safety-test each other's models. The idea is to establish a new industry standard, with OpenAI and Anthropic putting their models under the microscope and opening the door to collaboration.

    Personally, I love this idea, because it helps ensure that the technologies we use are safe and don't expose us to risk. But honestly, I'd like to see every company operate this way.

    We need to stay aware of the technology we use; safety is the priority.

    https://techcrunch.com/2025/08/27/openai-co-founder-calls-for-ai-labs-to-safety-test-rival-models/
    #ArtificialIntelligence #OpenAI #Safety #AI #Collaboration
    techcrunch.com
    In an effort to set a new industry standard, OpenAI and Anthropic opened up their AI models for cross-lab safety testing.
  • Hey everyone, I have some fresh news from the tech world!

    Google has launched a new image-editing model called Gemini 2.5 Flash Image (previously known as nanobanana). The model promises better precision and control in image editing, especially for large enterprises. But, as usual, it isn't perfect and still has some shortcomings.
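
    For anyone curious what an edit call could look like in practice, here is a minimal sketch using Google's google-genai Python SDK. The model id, file names and prompt are assumptions for illustration only and are not taken from the post or the linked article.

    # Hypothetical sketch: one image-editing request against Gemini 2.5 Flash Image
    # via the google-genai SDK. The model id below is an assumption; check Google's docs.
    from io import BytesIO

    from google import genai
    from PIL import Image

    client = genai.Client()  # expects an API key such as GEMINI_API_KEY in the environment

    source = Image.open("product_photo.png")  # hypothetical input file

    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # assumed identifier
        contents=[
            "Replace the background with a plain white studio backdrop; keep the product unchanged.",
            source,
        ],
    )

    # Edited images come back as inline binary data in the response parts.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save("edited_product_photo.png")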

    Personally, I've tried a few image-editing tools and always struggled with consistency and speed. With Gemini, things look like they'll get easier, but we need to stay aware of its flaws.

    We always love discovering what's new, right? But we should be careful and think things through before relying on any new technology.

    https://venturebeat.com/ai/gemini-expands-image-editing-for-enterprises-consistency-collaboration-and-control-at-scale/
    #Technology #Design #ImageEditing #Innovation #Gemini
    venturebeat.com
    The long awaited image editing model nanobanana from Google, now renamed Gemini 2.5 Flash Image, has finally released to the public.
  • Halo's Assault Rifle Is Causing Helldivers 2 To Crash

    Helldivers 2 is putting up huge numbers at the moment, partially thanks to a major collaboration with Halo--but one of the iconic Halo weapons in the ODST Warbond is causing the game to crash. Here's what we know about the bug and how to avoid it. The bug has been detailed under known issues in Helldivers 2's Discord, and concerns the ODST MA5C Assault Rifle. "When on the bridge, if all players equip this weapon from the armory and then the host leaves it will cause your game to crash," the bug report reads. The crash is currently impacting players on all platforms, and Arrowhead is investigating a fix. For now, thankfully, the crash is not too difficult to get around. To avoid any possibility of this crash occurring, Arrowhead recommends equipping the ODST from the loadout instead of the armory. Continue Reading at GameSpot.
    #halos #assault #rifle #causing #helldivers
    Halo's Assault Rifle Is Causing Helldivers 2 To Crash
    www.gamespot.com
  • XPPen Quiz — Winners Revealed!

    80 Level Community · Published 26 August 2025 · Tags: Art-To-Experience Contest: A Creative Challenge by Emperia and 80 Level
    We’re thrilled to announce the results of our quiz in collaboration with XPPen! All participants who submitted the correct answers were entered into a random prize draw. The lucky winners: SunAngel, Mrzs, koi.arkestr, Kayaes, ACKLEY, Elinn_or. They will get Deco 01 V3 tablets offering broader compatibility, enhanced performance, richer colors, and even more brilliance! A big congratulations to our winners! Stay tuned, more exciting 80 Level contests and events are on the way.
    #xppen #quiz #winners #revealed
    XPPen Quiz — Winners Revealed!
    80.lv
  • Hey everyone, here's some exciting news!

    Did you see that Joker from Persona 5 is coming to Overwatch 2?! Nobody saw this partnership coming! The new Season 18 trailer is out, and it closes beautifully with the Persona 5 theme and Joker's silhouette. Atlus confirmed that this collaboration goes live on August 26, 2025.

    Honestly, Joker has always been a favorite of mine, and seeing him in a game the size of Overwatch 2 opens a new door for gameplay experiences. Let's see who else shows up to give this game a boost!

    It makes you think about how games can bring different worlds together and create new experiences that let us live unique adventures.

    https://www.nintendolife.com/news/2025/08/its-almost-showtime-for-overwatch-2s-new-persona-collab

    #Overwatch2 #Persona5 #GamingCommunity #Collaboration #Joker
    www.nintendolife.com
    No joke.Persona 5's Joker has made surprise appearances in games like Super Smash Bros. Ultimate and now he's making his way to Blizzard's free-to-play hero shooter, Overwatch 2.The Season 18 trailer has gone live and it finishes with the Persona 5 t
  • Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale — All Built on NVIDIA

    AI reasoning, inference and networking will be top of mind for attendees of next week’s Hot Chips conference.
    A key forum for processor and system architects from industry and academia, Hot Chips — running Aug. 24-26 at Stanford University — showcases the latest innovations poised to advance AI factories and drive revenue for the trillion-dollar data center computing market.
    At the conference, NVIDIA will join industry leaders including Google and Microsoft in a “tutorial” session — taking place on Sunday, Aug. 24 — that discusses designing rack-scale architecture for data centers.
    In addition, NVIDIA experts will present at four sessions and one tutorial detailing how:

    NVIDIA networking, including the NVIDIA ConnectX-8 SuperNIC, delivers AI reasoning at rack- and data-center scale. (Featuring Idan Burstein, principal architect of network adapters and systems-on-a-chip at NVIDIA)
    Neural rendering advancements and massive leaps in inference — powered by the NVIDIA Blackwell architecture, including the NVIDIA GeForce RTX 5090 GPU — provide next-level graphics and simulation capabilities. (Featuring Marc Blackstein, senior director of architecture at NVIDIA)
    Co-packaged optics (CPO) switches with integrated silicon photonics — built with light-speed fiber rather than copper wiring to send information quicker and using less power — enable efficient, high-performance, gigawatt-scale AI factories. The talk will also highlight NVIDIA Spectrum-XGS Ethernet, a new scale-across technology for unifying distributed data centers into AI super-factories. (Featuring Gilad Shainer, senior vice president of networking at NVIDIA)
    The NVIDIA GB10 Superchip serves as the engine within the NVIDIA DGX Spark desktop supercomputer. (Featuring Andi Skende, senior distinguished engineer at NVIDIA)

    It’s all part of how NVIDIA’s latest technologies are accelerating inference to drive AI innovation everywhere, at every scale.
    NVIDIA Networking Fosters AI Innovation at Scale
    AI reasoning — when artificial intelligence systems can analyze and solve complex problems through multiple AI inference passes — requires rack-scale performance to deliver optimal user experiences efficiently.
    In data centers powering today’s AI workloads, networking acts as the central nervous system, connecting all the components — servers, storage devices and other hardware — into a single, cohesive, powerful computing unit.
    NVIDIA ConnectX-8 SuperNIC
    Burstein’s Hot Chips session will dive into how NVIDIA networking technologies — particularly NVIDIA ConnectX-8 SuperNICs — enable high-speed, low-latency, multi-GPU communication to deliver market-leading AI reasoning performance at scale.
    As part of the NVIDIA networking platform, NVIDIA NVLink, NVLink Switch and NVLink Fusion deliver scale-up connectivity — linking GPUs and compute elements within and across servers for ultra low-latency, high-bandwidth data exchange.
    NVIDIA Spectrum-X Ethernet provides the scale-out fabric to connect entire clusters, rapidly streaming massive datasets into AI models and orchestrating GPU-to-GPU communication across the data center. Spectrum-XGS Ethernet scale-across technology extends the extreme performance and scale of Spectrum-X Ethernet to interconnect multiple, distributed data centers to form AI super-factories capable of giga-scale intelligence.
    Connecting distributed AI data centers with NVIDIA Spectrum-XGS Ethernet.
    At the heart of Spectrum-X Ethernet, CPO switches push the limits of performance and efficiency for AI infrastructure at scale, and will be covered in detail by Shainer in his talk.
    NVIDIA GB200 NVL72 — an exascale computer in a single rack — features 36 NVIDIA GB200 Superchips, each containing two NVIDIA B200 GPUs and an NVIDIA Grace CPU, interconnected by the largest NVLink domain ever offered, with NVLink Switch providing 130 terabytes per second of low-latency GPU communications for AI and high-performance computing workloads.
    An NVIDIA rack-scale system.
    Built with the NVIDIA Blackwell architecture, GB200 NVL72 systems deliver massive leaps in reasoning inference performance.
    NVIDIA Blackwell and CUDA Bring AI to Millions of Developers
    The NVIDIA GeForce RTX 5090 GPU — also powered by Blackwell and to be covered in Blackstein’s talk — doubles performance in today’s games with NVIDIA DLSS 4 technology.
    NVIDIA GeForce RTX 5090 GPU
    It can also add neural rendering features for games to deliver up to 10x performance, 10x footprint amplification and a 10x reduction in design cycles,  helping enhance realism in computer graphics and simulation. This offers smooth, responsive visual experiences at low energy consumption and improves the lifelike simulation of characters and effects.
    NVIDIA CUDA, the world’s most widely available computing infrastructure, lets users deploy and run AI models using NVIDIA Blackwell anywhere.
    Hundreds of millions of GPUs run CUDA across the globe, from NVIDIA GB200 NVL72 rack-scale systems to GeForce RTX– and NVIDIA RTX PRO-powered PCs and workstations, with NVIDIA DGX Spark powered by NVIDIA GB10 — discussed in Skende’s session — coming soon.
    From Algorithms to AI Supercomputers — Optimized for LLMs
    NVIDIA DGX Spark
    Delivering powerful performance and capabilities in a compact package, DGX Spark lets developers, researchers, data scientists and students push the boundaries of generative AI right at their desktops, and accelerate workloads across industries.
    As part of the NVIDIA Blackwell platform, DGX Spark brings support for NVFP4, a low-precision numerical format to enable efficient agentic AI inference, particularly of large language models. Learn more about NVFP4 in this NVIDIA Technical Blog.
    Open-Source Collaborations Propel Inference Innovation
    NVIDIA accelerates and optimizes AI workloads for LLMs and distributed inference through several open-source libraries and frameworks. These include NVIDIA TensorRT-LLM, NVIDIA Dynamo, TileIR, Cutlass, the NVIDIA Collective Communication Library and NIX — which are integrated into millions of workflows.
    Allowing developers to build with their framework of choice, NVIDIA has collaborated with top open framework providers to offer model optimizations for FlashInfer, PyTorch, SGLang, vLLM and others.
    Plus, NVIDIA NIM microservices are available for popular open models like OpenAI’s gpt-oss and Llama 4,  making it easy for developers to operate managed application programming interfaces with the flexibility and security of self-hosting models on their preferred infrastructure.
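    As a rough illustration of the self-hosting path this paragraph describes, here is a minimal sketch using the open-source vLLM library (one of the frameworks named above). The model id is an assumption for illustration, and this is plain vLLM rather than NVIDIA's NIM packaging.

    # Minimal sketch: offline inference on an open model with vLLM (illustration only).
    from vllm import LLM, SamplingParams

    llm = LLM(model="openai/gpt-oss-20b")  # assumed model id; requires a recent vLLM build

    params = SamplingParams(temperature=0.7, max_tokens=128)
    prompts = ["Summarize why rack-scale networking matters for AI inference."]

    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)

    In practice many teams run the equivalent OpenAI-compatible server (vllm serve <model>) on their own infrastructure, which is the kind of self-hosting flexibility described above.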
    Learn more about the latest advancements in inference and accelerated computing by joining NVIDIA at Hot Chips.
     
    #hot #topics #chips #inference #networking
    Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale — All Built on NVIDIA
    blogs.nvidia.com
  • Hey everyone! I've got news that will warm your heart!

    TWOPAGES, the brand you know from the custom-curtain world, has decided to expand its Collaboration Program and bring creators into product development. The idea is that creators will work with TWOPAGES to launch a new collection of co-designed curtains. Is there anything better than teamwork and shared creativity?

    In my opinion, this kind of collaboration opens new doors for innovation and gives people a chance to express their ideas. I've always loved taking part in creative projects, and whenever I hear about artists teaming up with brands, I feel inspired!

    It makes me think about how we can each be part of positive change in our own fields.

    https://www.globenewswire.com/news-release/2025/08/22/3137670/0/en/TWOPAGES-Expands-Collaboration-Program-Launches-New-Co-Design-Collection.html
    www.globenewswire.com
    TWOPAGES, a custom window-treatment brand, today announced expanding its Collaboration Program to integrate creators into product development.
  • RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer

    Japan is once again building a landmark high-performance computing system — not simply by chasing speed, but by rethinking how technology can best serve the nation’s most urgent scientific needs.
    At the FugakuNEXT International Initiative Launch Ceremony held in Tokyo on Aug. 22, leaders from RIKEN, Japan’s top research institute, announced the start of an international collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the world-renowned supercomputer, Fugaku.
    Awarded early in the process, the contract enables the partners to work side by side in shaping the system’s architecture to address Japan’s most critical research priorities — from earth systems modeling and disaster resilience to drug discovery and advanced manufacturing.
    More than an upgrade, the effort will highlight Japan’s embrace of modern AI and showcase Japanese innovations that can be harnessed by researchers and enterprises across the globe.
    The ceremony featured remarks from the initiative’s leaders, RIKEN President Makoto Gonokami and Satoshi Matsuoka, director of the RIKEN Center for Computational Science and one of Japan’s most respected high-performance computing architects.
    Fujitsu Chief Technology Officer Vivek Mahajan attended, emphasizing the company’s role in advancing Japan’s computing capabilities.
    Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, attended in person as well to discuss the collaborative design approach and how the resulting platform will serve as a foundation for innovation well into the next decade.
    Momentum has been building. When NVIDIA founder and CEO Jensen Huang touched down in Tokyo last year, he called on Japan to seize the moment — to put NVIDIA’s latest technologies to work building its own AI, on its own soil, with its own infrastructure.
    FugakuNEXT answers that call, drawing on NVIDIA’s whole software stack —  from NVIDIA CUDA-X libraries such as NVIDIA cuQuantum for quantum simulation, RAPIDS for data science, NVIDIA TensorRT for high-performance inference and NVIDIA NeMo for large language model development, to other domain-specific software development kits tailored for science and industry.
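    To make one item on that list concrete, here is a tiny sketch of the kind of GPU dataframe work RAPIDS (cuDF) enables. The file and column names are hypothetical and unrelated to FugakuNEXT; it is only meant to illustrate what "RAPIDS for data science" refers to.

    # Illustration only: a pandas-style aggregation running on the GPU with RAPIDS cuDF.
    import cudf

    df = cudf.read_csv("sensor_readings.csv")  # hypothetical data, loaded straight into GPU memory
    summary = (
        df.groupby("station_id")["temperature"]
          .mean()
          .sort_values(ascending=False)
    )
    print(summary.head(10))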
    Innovations pioneered on FugakuNEXT could become blueprints for the world.
    What’s Inside
    FugakuNEXT will be a hybrid AI-HPC system, combining simulation and AI workloads.
    It will feature FUJITSU-MONAKA-X CPUs, which can be paired with NVIDIA technologies using NVLink Fusion, new silicon enabling high-bandwidth connections between Fujitsu’s CPUs and NVIDIA’s architecture.
    The system will be built for speed, scale and efficiency.
    What It Will Do
    FugakuNEXT will support a wide range of applications — such as automating hypothesis generation, code creation and experiment simulation.

    Scientific research: Accelerating simulations with surrogate models and physics-informed neural networks.
    Manufacturing: Using AI to learn from simulations to generate efficient and aesthetically pleasing designs faster than ever before.
    Earth systems modeling: Aiding disaster preparedness and prediction for earthquakes and severe weather, and more.

    RIKEN, Fujitsu and NVIDIA will collaborate on software developments, including tools for mixed-precision computing, continuous benchmarking, and performance optimization.
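    Mixed-precision computing is easiest to see with a generic example. The sketch below uses PyTorch's automatic mixed precision on an NVIDIA GPU; it is a stock illustration of the technique, not the tooling the three partners will co-develop.

    # Generic mixed-precision training step with PyTorch AMP (illustration only).
    import torch
    from torch import nn

    model = nn.Linear(1024, 1024).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()  # rescales gradients to avoid fp16 underflow

    x = torch.randn(64, 1024, device="cuda")
    target = torch.randn(64, 1024, device="cuda")

    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.mse_loss(model(x), target)  # forward pass runs largely in fp16

    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()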
    FugakuNEXT isn’t just a technical upgrade — it’s a strategic investment in Japan’s future.
    Backed by Japan’s MEXT (Ministry of Education, Culture, Sports, Science and Technology), it will serve universities, government agencies, and industry partners nationwide.
    It marks the start of a new era in Japanese supercomputing — one built on sovereign infrastructure, global collaboration, and a commitment to scientific leadership.
    Image courtesy of RIKEN
    #riken #japans #leading #science #institute
    RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer
    blogs.nvidia.com
  • Hey everyone, how many times have we sat around imagining which game worlds might cross over? Today we got a heavyweight surprise!

    Overwatch 2 is getting a collaboration with Persona, and believe me, I'm not joking! Everyone knows these two games are completely different, but with this new crossover we'll get new characters and skins that breathe new life into the game, and that's going to be something really special!

    Honestly, I'm a fan of both games, and I can't wait to see how the characters interact! When I played Persona, I felt the depth of the characters and their stories, and Overwatch has always been fun and kept us hooked.

    The gaming world is full of surprises, and the more open the ideas, the deeper and richer the experience becomes.

    https://kotaku.com/overwatch-2-season-18-persona-collaboration-skins-joker-2000619195
    #Overwatch2 #Persona #GamingCommunity
    Overwatch 2 Is Getting A Persona Crossover, And No, I’m Not Joking
    kotaku.com
    ‘You’ll never see it coming,’ - me, telling people about the next Overwatch 2 collaboration.
  • Hey everyone, check out this news that made my day!

    Blizzard and Atlus have teamed up to bring Persona characters to Overwatch 2 in Season 18, starting August 26. That means a brand-new experience playing alongside our favorite characters from the Persona universe. How could we not be excited?

    I'm a big fan of these games, and honestly I was thrilled when I heard the news. Imagine playing as characters like Joker and Ann right in the middle of the action! It's going to be a one-of-a-kind experience.

    In the gaming world we discover new things every day. Let's keep our imaginations running; this collaboration might be the start of something even bigger.

    https://www.polygon.com/overwatch-2-persona-5-season-18-trailer/

    #Overwatch2 #Persona5 #Gaming #GamerAlgeria #Collaboration
    Overwatch 2 is getting a Persona 5 crossover
    www.polygon.com
    Blizzard and Atlus are teaming up to bring Persona characters to Overwatch 2 in season 18, starting Aug. 26.