• When I was young, I always used to hear about CCNP and how much attention people who hold this certification get. I kept telling myself: "Why not give it a try?" And today, I'm bringing you a new video about "CCNP ENCOR 350-401 | Curriculum Introduction + Certification Agenda".

    In the video, we'll explore the curriculum together and see how you can start preparing for the exam. If you love networking the way I do, this video will go a long way toward helping you understand the steps to follow.

    True, the start of the road is never easy, but once you can see the goals ahead of you, everything becomes possible. I encourage you to watch the video and share your thoughts with us.

    Don't forget to share the video with friends who love to learn!

    https://www.youtube.com/watch?v=1_srk1DN0a4
    #CCNP #الشبكات #Networking #Cisco #تعليم
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
    The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and without a rethink of how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get them right, and the payoff is extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

    Training the modern large language models (LLMs) behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
    These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce” (which combines data from all nodes and redistributes the result) and “all-to-all” (where each node exchanges data with every other node).
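    To make those collectives concrete, here is a minimal sketch in plain Python that simulates what "all-reduce" computes across a handful of hypothetical nodes. There is no real networking here, only the semantics; the node count and values are invented for illustration:

```python
# Toy all-reduce: every node contributes a slice of gradients; afterwards
# every node holds the element-wise sum. Real systems (e.g. NCCL, MPI) run
# ring or tree algorithms over the network; this only shows the end result.
def all_reduce(node_values):
    total = [sum(column) for column in zip(*node_values)]
    return [list(total) for _ in node_values]  # every node gets the sum

nodes = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]  # 4 fake nodes
print(all_reduce(nodes))  # each node now holds [16.0, 20.0]
```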
    These processes are sensitive to the speed and responsiveness of the network — what engineers call latency (delay) and bandwidth (data capacity). When either falls short, communication stalls and training slows with it.
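    One common way to reason about that sensitivity is the classic alpha-beta cost model, where moving n bytes costs roughly the latency plus n divided by the bandwidth. A quick sketch with illustrative numbers (assumptions for the example, not vendor measurements):

```python
# Alpha-beta model: time = alpha (latency) + n / beta (bandwidth).
# The default constants below are illustrative assumptions only.
def transfer_time(n_bytes, alpha_s=2e-6, beta_bytes_per_s=100e9):
    return alpha_s + n_bytes / beta_bytes_per_s

print(f"1 GB shard:  {transfer_time(1e9):.4f} s")   # bandwidth-dominated
print(f"4 KB update: {transfer_time(4096):.2e} s")  # latency-dominated
```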
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
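    As a rough sketch of the retrieval half of such a system, the snippet below picks the stored passage whose embedding best matches a query vector. The corpus, vectors and dimensionality are all toy stand-ins; real deployments use learned embeddings and a vector database:

```python
import math

# Toy retrieval step for a retrieval-augmented generation system: return
# the stored passage most similar to the query, then hand it to the LLM
# as context. Vectors here are fabricated 3-d stand-ins for embeddings.
corpus = {
    "InfiniBand offers low-latency RDMA.":  [0.9, 0.1, 0.0],
    "Spectrum-X is Ethernet built for AI.": [0.2, 0.8, 0.1],
    "NVLink links GPUs inside a rack.":     [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec):
    return max(corpus, key=lambda passage: cosine(corpus[passage], query_vec))

print(retrieve([0.85, 0.15, 0.05]))  # -> the InfiniBand passage
```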
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable. Now, they’re a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years.
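    The bandwidth-doubling claim follows from communication volume. In a textbook ring all-reduce, each endpoint sends roughly 2(p-1)/p times the data size per direction; with an in-network reduction, each endpoint sends its data up once and receives one result. A back-of-the-envelope comparison (standard textbook formulas, not NVIDIA-published figures):

```python
# Per-node traffic (each direction) for an all-reduce of n bytes over p nodes.
def ring_all_reduce_bytes(n, p):
    # Ring algorithm: reduce-scatter then all-gather, each moving (p-1)/p * n.
    return 2 * (p - 1) / p * n

def in_network_reduce_bytes(n):
    # Switch tree does the summing: send data up once, get one result back.
    return n

n, p = 1e9, 512  # 1 GB of gradients across 512 nodes (illustrative)
print(ring_all_reduce_bytes(n, p) / in_network_reduce_bytes(n))
# -> ~2.0: roughly half the wire traffic per direction, i.e. the
#    "doubled" effective bandwidth for reduction operations
```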
    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
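    For scale, those headline figures multiply out directly (simple arithmetic on the numbers above):

```python
ports, gbs_per_port = 144, 800
print(ports * gbs_per_port / 1000, "Tb/s")  # 115.2 Tb/s aggregate per switch
```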
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
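    Applied to the 800 Gb/s port speeds quoted above, those throughput percentages translate into per-link numbers like these (arithmetic only):

```python
link_gbs = 800  # per-port speed of the SN5610, per the text above
for fabric, efficiency in [("Spectrum-X", 0.95), ("standard Ethernet", 0.60)]:
    print(f"{fabric}: {efficiency * link_gbs:.0f} Gb/s effective")
# Spectrum-X: 760 Gb/s effective
# standard Ethernet: 480 Gb/s effective -> ~1.6x less usable bandwidth
```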
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU.
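    Dividing the stated aggregate evenly across the NVLink domain gives a feel for the per-GPU figure (a simplification, but consistent with the numbers above):

```python
aggregate_tb_s, gpus = 130, 72   # GB300 NVL72 figures from the text
print(f"{aggregate_tb_s / gpus:.2f} TB/s per GPU")  # ~1.81 TB/s each
print(f"{gpus / 8:.0f}x an 8-GPU server")           # the 9x claim above
```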
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.
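    The port counts and the quoted bandwidth range are consistent with each other, as a quick check shows:

```python
for ports in (128, 512):
    print(f"{ports} ports x 800 Gb/s = {ports * 800 / 1000} Tb/s")
# 128 ports x 800 Gb/s = 102.4 Tb/s
# 512 ports x 800 Gb/s = 409.6 Tb/s  (the 100-400 Tb/s range quoted above)
```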

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows intense customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
• Hey friends! Some great news reached me today and I wanted to share it with you! Pokémon, as always, has brought us something new with the big "PokémonXP" event taking place in "downtown San Francisco" from August 28 to 30, 2026. Imagine it with me: an epic arena experience on Championship Sunday!

    The event came with a surprise announcement at the World Championships, and passes go on sale starting September 17, 2025. Let me tell you, the atmosphere is going to be extraordinary!

    On a personal note, I have wonderful memories with my friends at Pokémon tournaments, and I always feel excited and joyful when we play together. It's not just a game; it's a breathtaking experience!

    There's still a long way to go, but I can't wait to see what the atmosphere will be like!

    https://www.nintendolife.com/news/2025/08/pokemon-teases-new-pokemonxp-event
    #Pokémon #SanFrancisco
• The phishing crews are striking again, and this time even Cisco wasn't out of their reach!

    The article discusses how people still fall into phishing traps, and how organizations can protect themselves. Despite all the efforts, these attacks keep increasing, especially as the techniques evolve.

    In my opinion, we need to stay vigilant at all times and help each other stay protected. Personally, whenever I sit down to chat with my friends, I tell them about ways to guard against these techniques so we can avoid falling into the trap.

    We have to be careful, because digital safety concerns us all.

    https://arstechnica.com/security/2025/08/attackers-who-phished-cisco-downloaded-user-data-from-third-party-crm/

    #Phishing #CyberSecurity #PhishingAwareness #DigitalSafety #DigitalProtection
  • MARVEL Tōkon: Fighting Souls — 30 minutes with the 4v4 tag-fighter

    First revealed in State of Play, and recently playable at Evo Las Vegas 2025, MARVEL Tōkon: Fighting Souls brings a fundamental shift to the tag-fighting genre. Arc System Works, Marvel Games, and PlayStation Studios have assembled to create a vibrant, stylized world, and after an intense 30-minute play session this past weekend, I’m counting down the days until I’m a Tōkon fighter again.


    Building a 4v4 team

    “The reason we went with 4v4 is actually because it’s something that’s never been done before in fighting games where players can switch characters,” says Kazuto Sekine, Game Director and Lead Battle Designer, Arc System Works. “We wanted to challenge ourselves to create a new tag fighter.”


    During my session, I had access to a set of all-star Heroes to create my team of four:

    ● Doctor Doom is slow but hits hard with magical and tricky range attacks. 

    ● Ms. Marvel is quick with high-risk, high-reward attacks.

    ● Storm is an aerial threat with deadly crossovers.

    ● Iron Man keeps enemies at bay with anti-air and other punishing moves. 

    ● Star-Lord is the most technical, with his ability to switch between firearms and insane juggles.

    ● Captain America is a versatile all-rounder and a great entry point for new players.

    After some experimentation, I prioritized playing Storm and Star-Lord. I loved how their combos, personality, and flair were true to their characters (at one point, Storm sternly refers to Star-Lord as “Quill” when he’s goofing off, which I adored). Storm’s light and medium attacks are beginner-friendly and combo well into her Quick Skill, which is a character’s unique attack you activate by pressing R2. I also found success rushing in with Star-Lord, using quick blaster and melee combos right into his Ultimate, activated simply by pressing R1.

    The control scheme is pretty straightforward. Square, Triangle, and Circle are your light, medium, and heavy attacks, respectively, while X is the assemble button. All of your special attacks and skills are reserved for the triggers. L1 enables a quick dash, L2 is a quick Assemble ability, and R1 and R2 provide your unique attack and quick skills.
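    For reference, the layout reads like a simple mapping; the snippet below is just a plain-Python summary of the scheme described above, not anything from the game itself:

```python
# Summary of the described PS5 control scheme (illustrative only).
controls = {
    "Square":   "light attack",
    "Triangle": "medium attack",
    "Circle":   "heavy attack",
    "X":        "Assemble (call in an assist)",
    "L1":       "quick dash",
    "L2":       "quick Assemble ability",
    "R1":       "Ultimate",
    "R2":       "Quick Skill (character's unique attack)",
}
for button, action in controls.items():
    print(f"{button:>8} -> {action}")
```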

    How swapping between characters works

    Traditionally, in a tag fighter, you have to rotate through your entire team before the match is over, but in Tōkon your team shares one health bar. This means you don’t have to master the whole roster to be effective, and in that sense, you can approach the game like a more traditional fighter if you choose to. At the start of the match, you can only control your lead character. As the skirmish progresses, you gain the ability to switch to your assist characters when you lose a round, perform a throw, or knock your opponent into another section of the stage. It creates an interesting dance: being careful not to give your opponents more options to use against you, while making sure you access your own extended roster first and deciding who you want as your first backup option.

    “Previously for all [tag fighters], in order to play them, you had to be able to control multiple characters,” says Sekine, “However, for our game, it was important for us to design it so that you would only actually need to be able to take control of one character. You only need to learn to play as one character in order to enjoy the game, and you can still see your other teammates coming in and out of the battlefield.”

    Where you would traditionally have a dedicated button to swap between your party, here you do it during assists. Once you successfully call in an assist you have a brief window to swap to them. This exchange creates a natural swap out in the chaos of battle and some stylish moments between characters. Tōkon cares about what is happening on screen at all times, so switching between characters in the middle of a combo, standing still, or even in the air creates unique animations, such as characters giving each other daps or quipping about needing to step in. 

    Accommodating different players’ fighting styles

    As I was studying my opponent’s moves, they took a different approach, focusing on supers and trying to bring out their team for full-screen spectacles, where each character performs a quick combo, sending your opponent airborne while smashing them to the ground, ending in what I can only describe as a superhero pose-a-thon. The methodical-versus-manic matchup created a fun back-and-forth between us, but the game accommodated both approaches, each with its own sense of satisfaction.

    “When it comes to the game’s design, it was very important for us to make this something that’s easy to get into, but has depth beyond that initial entry,” says Sekine. “One thing that we were very careful about when designing the game was to ensure that there is not any kind of mechanic or attack that someone who’s just getting started would not be able to perform. It would impede on the experience of new players.” 

    “When you press the Assemble button, depending on the situation and what’s going on in the match, the Assist will come out and perform a different action that’s suitable for that particular moment,” says Sekine. “By designing it in that way that we’re able to clearly communicate to the player when they should be calling in their assists, and make it easier for them to play.”

    Anime-inspired Heroes in action

    “At Marvel games, it’s really important for us to allow developers to put their own unique stamp on the Marvel Universe,” says Michael Francisco, Sr. Product Development Manager, Marvel Games. “In the case of Arc, it’s that fusion of Marvel and American comics with Japanese anime and manga, and you can see that reflected in the art style and the character designs.” 

    All the characters ooze charm, but the backgrounds also pack a lot of exciting details. Eagle-eyed fans should keep an eye out for interesting signage referencing heroes and events, pedestrians reacting to on-screen action, and easter eggs scattered throughout. It’s obvious a lot of care was put into building this world. 

    “It was very important for us to be able to create the visual excitement that should be entailed with [tag fighters],” says Takeshi Yamanaka, Producer, Arc System Works. “Since this is a 4v4 game, that means that we can have up to eight characters out on the screen at one time altogether, so we were careful when creating the visual composition of the screen to ensure that we convey that excitement.”

    The 4v4 fights begin next year

    MARVEL Tōkon: Fighting Souls is set to release in 2026, and while I’m excited to see all the heroes, combinations, and worlds the game will take us to, I asked the team how they felt about creating something that has never been done before in the fighting genre.

    “It’s both scary and exciting, exhilarating and terrifying, at the same time,” says Francisco. “From the beginning, we all want to honor and respect the rich history and legacy of Marvel, while also forging our own path forward to create something new and innovative. So, we just hope fans are excited to see what we’ve come up with as a collaboration between all three parties.” 