• 👋 Hey everyone! I want to share an exciting physics topic with you: "A quantum annealing protocol to solve the nuclear shell model". The paper, written by a team of researchers, lays out new ideas on how quantum techniques can be used to tackle nuclear models.

    As we know, nucleons in an atom are like customers in a café: each one has an assigned seat, and the trick is how to arrange them all correctly. The paper builds on this idea and shows how quantum annealing can serve as a solution to the hard problems in this field.
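
    To make the annealing idea concrete, here is a minimal sketch of the generic scheme (my toy illustration, not the authors' protocol; the two-spin problem Hamiltonian is invented): interpolate H(s) = (1-s)·H_driver + s·H_problem and watch the ground state and the spectral gap along the schedule.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Invented 2-spin "problem" Hamiltonian; its ground state encodes the answer
H_problem = -np.kron(Z, Z) + 0.5 * np.kron(Z, I2)
# Driver: uniform transverse field, whose ground state is easy to prepare
H_driver = -(np.kron(X, I2) + np.kron(I2, X))

# Anneal schedule H(s) = (1 - s) H_driver + s H_problem, s: 0 -> 1
for s in np.linspace(0.0, 1.0, 5):
    H = (1 - s) * H_driver + s * H_problem
    evals, evecs = np.linalg.eigh(H)
    gap = evals[1] - evals[0]   # the gap sets how slowly we must anneal
    print(f"s={s:.2f}  E0={evals[0]:+.3f}  gap={gap:.3f}")

# At s=1 the ground state is (close to) a basis state: read off the answer
print("final ground-state weights:", np.round(np.abs(evecs[:, 0])**2, 3))
```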

    Honestly, the topic reminds me of the times I've looked for a way to organize my own work in daily life: how to use the tools at hand to improve performance and get better results.

    This paper opens new horizons for thinking about scientific challenges, and it gets us thinking about using modern technology to solve old problems.

    https://scipost.org/SciPostPhys.19.2.062
    #Physics #QuantumAnnealing #Model
    SciPost Phys. 19, 062 (2025)
  • Hey everyone, I've got a topic that really caught my attention! 💡

    The new paper "A perturbation theory for multi-time correlation functions in open quantum systems" by Piotr Szańkowski opens new horizons in understanding open quantum systems and how to study the interactions within them. It presents theory showing how we can compute complicated multi-time correlations, which matters a great deal for researchers in physics.
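
    The paper's perturbative machinery is beyond a social post, but here is a brute-force baseline showing what a multi-time correlation is in the first place (a generic sketch with invented parameters, not Szańkowski's method): the two-time correlator <σ₊(t+τ)σ₋(t)> of a decaying qubit, computed from the Lindblad equation in superoperator form via the quantum regression theorem.

```python
import numpy as np
from scipy.linalg import expm

# Qubit operators (|0> = ground, |1> = excited)
sm = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma_minus
sp = sm.conj().T                                  # sigma_plus
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

omega, gamma = 1.0, 0.2       # invented qubit frequency and decay rate
H, L = 0.5 * omega * sz, sm

# Lindblad generator as a 4x4 superoperator on row-major vec(rho),
# using vec(A @ rho @ B) = kron(A, B.T) @ vec(rho)
LdL = L.conj().T @ L
Lsup = (-1j * (np.kron(H, I2) - np.kron(I2, H.T))
        + gamma * np.kron(L, L.conj())
        - 0.5 * gamma * (np.kron(LdL, I2) + np.kron(I2, LdL.T)))

# Quantum regression theorem: C(t,tau) = Tr[sp . exp(Lsup*tau)(sm rho(t))]
rho0 = np.diag([0.0, 1.0]).astype(complex)        # start excited
t = 2.0
rho_t = (expm(Lsup * t) @ rho0.reshape(-1)).reshape(2, 2)

for tau in (0.0, 1.0, 2.0):
    seed = (sm @ rho_t).reshape(-1)               # "B rho(t)" initial condition
    evolved = (expm(Lsup * tau) @ seed).reshape(2, 2)
    print(f"tau={tau:.1f}  C={np.trace(sp @ evolved):.4f}")
```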

    Personally, so much in this beautiful quantum world has surprised me; it keeps handing us new ideas and a deep view of how things work.

    🤔 Living in a world full of mysterious phenomena, we need to always stay open to what's new and dig ever deeper into this field.

    https://scipost.org/SciPostPhys.19.3.066

    #Physics #QuantumSystem #ScientificResearch #OpenQuantumSystems #CorrelationFunctions
    SciPost Phys. 19, 066 (2025)
  • Honestly folks, with everything going on in the tech world, I wanted to talk to you about a really important topic. 🤖✨

    The new article, "Creating a qubit fit for a quantum future", walks through how we can develop the qubit, the basic building block of quantum computing. The idea is how to strike a balance between performance and stability, so that we end up with a technology capable of changing the world.

    Honestly, when I read about these topics I feel curious and motivated. Quantum technology will open new horizons and let us build a better future. Can you imagine a world where we solve the big problems in record time?

    The article is worth reading, and my takeaway is to always keep looking toward tomorrow. 🌟

    https://www.technologyreview.com/2025/08/28/1121890/creating-a-qubit-fit-for-a-quantum-future/

    #Quantum #Technology #QuantumComputing #Innovation #Future
  • 🔥 Have you heard that IBM and AMD are getting ready to enter the quantum world after falling behind in AI? 🤯 Today's article covers how these companies want to reclaim their standing and become key players in the infrastructure of the AI-and-quantum future.

    Personally, I see it as a bold move, especially after the big leaps we've witnessed in AI. Let's wait and see how this shift will affect the industry and what it could offer us down the road.

    These developments definitely open new horizons and a deeper understanding of the technologies that will make our lives and our innovations easier.

    https://techcrunch.com/2025/08/26/after-falling-behind-in-generative-ai-ibm-and-amd-look-to-quantum-for-an-edge/

    #Quantum #ArtificialIntelligence #Technology #Innovation #AI
    As IBM and AMD look to regain ground after falling behind on the generative AI boom, the move could position them as key infrastructure players in a future where quantum and AI converge.
  • 🌌 Hey everyone! You know how science never stops amazing us? Today I wanted to talk about a fascinating topic in physics: "Visions in quantum gravity". The article is written by a group of scientists with long experience in the field, and as usual, they are trying to understand the nature of gravity in the quantum world.

    The abstract discusses how gravity may be tied to the smallest scales, and how that could change our view of the universe. As we've seen in recent years, the more we try to understand, the more secrets we discover.

    Personally, I'm excited about this idea, because it makes us think about the limits of science and what lies beyond them, and brings us closer to understanding the vast universe we live in.

    Mark my words: the future is full of surprises!

    https://scipost.org/SciPostPhysCommRep.11
    #Physics #QuantumGravity #Cosmology #Research #Scientists
    SciPost Phys. Comm. Rep. 11 (2025)
  • 🌟 How is everyone, friends! Today I wanted to share a great idea from a new paper, about a new experiment in physics.

    The paper discusses the "direct production of fermionic superfluids in an enhanced optical trap". In short: how we can obtain new states of matter, all under very particular conditions. The idea is that with modern techniques, such as optical traps, we can reach striking results in our understanding of quantum interactions.

    Personally, I love following the latest developments in science and watching how new ideas can change the way we think about the world. Discoveries like these open new horizons and let us imagine a new future.

    Think about how science can change our lives, and don't forget to check out the details in the paper!

    https://scipost.org/SciPostPhys.18.4.133
    #Physics #QuantumPhysics #Superfluidity #Innovation #Science
    SciPost Phys. 18, 133 (2025)
  • Picture this: I was sitting in a café with friends, trading ideas, when one of them suddenly said: "Hey, have you seen this new paper on Physics-informed neural networks?". Honestly, the topic was a bit involved, but I was curious to learn how this technology can solve the Dyson-Schwinger equations in quantum theory.

    The paper by Rodrigue Carmo Terin gives us a new view of how we can use neural networks to understand quantum phenomena more simply and more accurately. It's an approach that brings physics and AI together, like new friends!
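
    To get a feel for how a physics-informed neural network works, here is a generic toy (not the paper's Dyson-Schwinger setup; the equation and all hyperparameters are invented): train a small network whose loss is the differential-equation residual itself, here f'(x) = -f(x) with f(0) = 1, whose exact solution is exp(-x).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small network approximating f(x); the "physics" enters via the loss
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = 4.0 * torch.rand(64, 1)          # collocation points on [0, 4]
    x.requires_grad_(True)
    f = net(x)
    # df/dx by autodiff: this is what makes the network "physics-informed"
    df = torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                             create_graph=True)[0]
    pde_loss = ((df + f) ** 2).mean()                        # enforce f' = -f
    bc_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # enforce f(0) = 1
    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare against the exact solution f(x) = exp(-x)
xs = torch.linspace(0, 4, 5).unsqueeze(1)
print(torch.cat([xs, net(xs).detach(), torch.exp(-xs)], dim=1))
```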

    Personally, there was a time when I tried to understand some quantum phenomena and got nowhere, but after reading the paper, I felt there is hope.

    Thinking about the future: what other technology could solve challenges like these for us?

    https://scipost.org/SciPostPhysCore.8.3.054

    #Physics #ArtificialIntelligence #QuantumPhysics #Neural
    SciPost Phys. Core 8, 054 (2025)
  • RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer

    Japan is once again building a landmark high-performance computing system — not simply by chasing speed, but by rethinking how technology can best serve the nation’s most urgent scientific needs.
    At the FugakuNEXT International Initiative Launch Ceremony held in Tokyo on Aug. 22, leaders from RIKEN, Japan’s top research institute, announced the start of an international collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the world-renowned supercomputer, Fugaku.
    Awarded early in the process, the contract enables the partners to work side by side in shaping the system’s architecture to address Japan’s most critical research priorities — from earth systems modeling and disaster resilience to drug discovery and advanced manufacturing.
    More than an upgrade, the effort will highlight Japan’s embrace of modern AI and showcase Japanese innovations that can be harnessed by researchers and enterprises across the globe.
    The ceremony featured remarks from the initiative’s leaders, RIKEN President Makoto Gonokami and Satoshi Matsuoka, director of the RIKEN Center for Computational Science and one of Japan’s most respected high-performance computing architects.
    Fujitsu Chief Technology Officer Vivek Mahajan attended, emphasizing the company’s role in advancing Japan’s computing capabilities.
    Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, attended in person as well to discuss the collaborative design approach and how the resulting platform will serve as a foundation for innovation well into the next decade.
    Momentum has been building. When NVIDIA founder and CEO Jensen Huang touched down in Tokyo last year, he called on Japan to seize the moment — to put NVIDIA’s latest technologies to work building its own AI, on its own soil, with its own infrastructure.
    FugakuNEXT answers that call, drawing on NVIDIA’s whole software stack —  from NVIDIA CUDA-X libraries such as NVIDIA cuQuantum for quantum simulation, RAPIDS for data science, NVIDIA TensorRT for high-performance inference and NVIDIA NeMo for large language model development, to other domain-specific software development kits tailored for science and industry.
    Innovations pioneered on FugakuNEXT could become blueprints for the world.
    What’s Inside
    FugakuNEXT will be a hybrid AI-HPC system, combining simulation and AI workloads.
    It will feature FUJITSU-MONAKA-X CPUs, which can be paired with NVIDIA technologies using NVLink Fusion, new silicon enabling high-bandwidth connections between Fujitsu’s CPUs and NVIDIA’s architecture.
    The system will be built for speed, scale and efficiency.
    What It Will Do
    FugakuNEXT will support a wide range of applications — such as automating hypothesis generation, code creation and experiment simulation.

    Scientific research: Accelerating simulations with surrogate models and physics-informed neural networks.
    Manufacturing: Using AI to learn from simulations to generate efficient and aesthetically pleasing designs faster than ever before.
    Earth systems modeling: Aiding disaster preparedness and prediction for earthquakes, severe weather and more.

    RIKEN, Fujitsu and NVIDIA will collaborate on software developments, including tools for mixed-precision computing, continuous benchmarking, and performance optimization.
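
    As a tiny illustration of why mixed-precision tooling matters on machines like this (my generic example, not FugakuNEXT software): accumulate many small values entirely in float16 and the running sum stalls once the increments fall below the representable spacing; keep the data in low precision but accumulate in float32, the usual mixed-precision trick, and the answer stays sane.

```python
import numpy as np

# Ten thousand increments of ~0.001; the exact running sum would be 10.0
x = np.full(10_000, 0.001, dtype=np.float16)

# Pure float16 accumulation: once the total reaches 4.0, the float16 spacing
# (~0.0039) exceeds the increment, every addition rounds away, and the sum stalls
acc = np.float16(0.0)
for v in x:
    acc = np.float16(acc + v)
print("float16 accumulator:", acc, "(exact would be 10.0)")

# Mixed precision: keep data in float16, accumulate in float32
print("float32 accumulation:", np.sum(x, dtype=np.float32), "(~10.0)")
```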
    FugakuNEXT isn’t just a technical upgrade — it’s a strategic investment in Japan’s future.
    Backed by Japan’s MEXT (Ministry of Education, Culture, Sports, Science and Technology), it will serve universities, government agencies, and industry partners nationwide.
    It marks the start of a new era in Japanese supercomputing — one built on sovereign infrastructure, global collaboration, and a commitment to scientific leadership.
    Image courtesy of RIKEN
    #riken #japans #leading #science #institute
  • Hey everyone, have you ever heard of "lattice random walks"? 🤔

    The new paper by Li Gan takes on a deep and intriguing topic around "lattice random walks" and the "quantum A-period conjecture". In short: how we can understand the behavior of particles in quantum space, and how mathematics offers a fresh view of the world around us.
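
    The quantum A-period machinery is beyond a post, but the underlying combinatorial objects are easy to play with. A small sketch of my own (not from the paper): count closed walks on the square lattice Z² by brute-force recursion and check the classic closed form, C(2n, n)² returning walks of length 2n.

```python
from functools import lru_cache
from math import comb

# Number of n-step nearest-neighbour walks on Z^2 from (x, y) back to the origin
@lru_cache(maxsize=None)
def walks(n, x, y):
    if n == 0:
        return 1 if (x, y) == (0, 0) else 0
    return (walks(n - 1, x + 1, y) + walks(n - 1, x - 1, y) +
            walks(n - 1, x, y + 1) + walks(n - 1, x, y - 1))

# Closed walks of even length, checked against the closed form C(2n, n)^2
for steps in (2, 4, 6, 8):
    print(steps, walks(steps, 0, 0), comb(steps, steps // 2) ** 2)
```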

    I've always said, "The mind is like an umbrella: it works best when it's open." And this topic opens new horizons in the world of physics. You can see how complex concepts can be beautifully interconnected!

    It's worth reflecting on how mathematics makes natural phenomena and their challenges easier for us to understand.

    The paper is here:
    https://scipost.org/SciPostPhys.19.2.053

    #Physics #Mathematics #QuantumMechanics #Research #Science
    SciPost Phys. 19, 053 (2025)
  • 🤩 How is everyone, friends! Today I wanted to share a somewhat odd but really fascinating topic: "Entanglement asymmetry in periodically driven quantum systems". The paper is by Tista Banerjee, Suchetan Das and Krishnendu Sengupta.

    The paper discusses how we can understand quantum phenomena, especially in systems that change periodically. Imagine how light can move from here to there in a completely different way depending on the conditions, like a concert where everyone dances in their own style! 🎶
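
    The entanglement-asymmetry analysis itself is well beyond a post, but the standard first step for any periodically driven system is easy to show. A generic toy with invented parameters (not the paper's model): build the one-period Floquet propagator of a square-wave-driven qubit, then read off its quasi-energies and stroboscopic dynamics.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)

# Invented square-wave drive: H1 for the first half-period, H2 for the second
T = 1.0
H1 = 1.0 * sz + 0.8 * sx
H2 = 1.0 * sz - 0.8 * sx

# One-period (Floquet) propagator U(T) = exp(-i H2 T/2) exp(-i H1 T/2)
U_T = expm(-1j * H2 * T / 2) @ expm(-1j * H1 * T / 2)

# Quasi-energies: eigenphases of U(T), defined modulo 2*pi/T
quasi = -np.angle(np.linalg.eigvals(U_T)) / T
print("quasi-energies:", np.round(quasi, 4))

# Stroboscopic evolution: the state after n full periods is U(T)^n |psi0>
psi = np.array([1.0, 0.0], dtype=complex)
for n in range(1, 4):
    psi = U_T @ psi
    print(f"after {n} periods: P(|0>) = {abs(psi[0])**2:.4f}")
```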

    Personally, I've always been into physics, and from working with friends on science projects I've felt how hard these concepts can be. But the deeper I dive, the more I discover a new world of creativity and innovation.

    Stay with the topic and enjoy reading the paper; it genuinely broadens the horizon.

    https://scipost.org/SciPostPhys.19.2.051

    #Quantum #
    SciPost Phys. 19, 051 (2025)
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
    The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and if there isn’t rethinking on how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

    Training the modern large language models (LLMs) behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
    These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce” (which combines data from all nodes and redistributes the result) and “all-to-all” (where each node exchanges data with every other node).
    These processes are highly sensitive to the speed and responsiveness of the network — what engineers call latency (delay) and bandwidth (data capacity) — and a slow network shows up directly as stalls in training.
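
    To see why collective operations stress the network, here is a pure-Python simulation of ring all-reduce, the classic pattern behind the "all-reduce" step (a schematic sketch of the algorithm, not any vendor's implementation): each node sends and receives only about 2(n-1)/n of its data, yet everyone ends up with the full sum.

```python
import numpy as np

def ring_all_reduce(vectors):
    """Every node ends with the elementwise sum of all vectors.

    Each of n nodes splits its vector into n chunks; reduce-scatter and
    all-gather each take n - 1 ring steps, so per-node traffic is about
    2 * (n-1)/n of the data regardless of n, which is why the pattern scales.
    """
    n = len(vectors)
    data = [list(np.array_split(v.astype(float), n)) for v in vectors]

    # Phase 1, reduce-scatter: at step s, node i sends chunk (i - s) % n to
    # its ring neighbour, which accumulates it. Afterwards node i holds the
    # complete sum of chunk (i + 1) % n.
    for s in range(n - 1):
        for i in range(n):
            c = (i - s) % n
            data[(i + 1) % n][c] = data[(i + 1) % n][c] + data[i][c]

    # Phase 2, all-gather: circulate the completed chunks around the ring
    for s in range(n - 1):
        for i in range(n):
            c = (i + 1 - s) % n
            data[(i + 1) % n][c] = data[i][c]

    return [np.concatenate(chunks) for chunks in data]

# Four "nodes", each holding a gradient shard to be summed
grads = [np.arange(8) * (k + 1) for k in range(4)]
out = ring_all_reduce(grads)
print(np.allclose(out[0], sum(g.astype(float) for g in grads)), out[0])
```
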
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable; now they are a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years.
    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gbps connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain with an aggregate bandwidth of 130 TB/s, putting 9x the GPU count of a single 8‑GPU server on one fabric. With NVLink, the entire rack becomes one large GPU.
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows intense customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
    #gearing #gigawatt #data #center #age
  • How is everyone? Today I wanted to share a rather exciting topic in physics, specifically in the "Quantum" field!

    The new paper discusses "Quantum wavefront shaping" using a 48-element programmable phase plate, a device that lets us control electron waves in new and inventive ways. I mean, imagine: we can sculpt new wave shapes and exploit them in all sorts of applications!
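
    A toy picture of what a programmable phase plate does (the numbers and the steering pattern are invented, and the paper's electron-optics specifics are not modeled): imprint 48 piecewise-constant phases on an incoming wave and look at the far-field intensity with an FFT. A linear ramp quantized over the 48 elements acts like a blazed grating and steers the beam.

```python
import numpy as np

# 48 programmable elements, each imprinting its own constant phase
n_el, per_el = 48, 16
N = n_el * per_el

# Example program: a linear phase ramp quantized over the 48 elements
phases = np.repeat(np.linspace(0, 6 * np.pi, n_el), per_el)

# Incoming wave with a smooth envelope, then the phase plate
xgrid = np.linspace(-1, 1, N)
field = np.exp(-xgrid**2 / 0.3) * np.exp(1j * phases)

# Far field (Fraunhofer regime) is the Fourier transform of the exit wave
intensity = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
peak = int(np.argmax(intensity)) - N // 2
print(f"beam steered to frequency bin {peak:+d} (0 = undeflected spot)")
```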

    Honestly, the idea of controlling wavefronts opens new horizons, and it makes you think about how we might use this technology in daily life. As they say, "Imagination is the first step toward creativity".

    I just want to say that the world of physics is full of wonders, and every one of us can be part of this discovery.

    https://scipost.org/SciPostPhys.15.6.223
    #Physics #Quantum #Technology #Innovation #Waves
    SciPost Phys. 15, 223 (2023)