• Hey everyone, did you see what happened in Seoul? The artist Adrián Villar Rojas has transformed the Art Sonje Center into a decomposing ecosystem! This is no ordinary show: every corner of the museum is immersed in "the language of the enemy," even the corridors and restrooms.

    The artist brings a fresh idea here, one that makes us reflect on the relationship between a place and how we interact with it. It really pulls you into thinking about how an environment shapes thought and art. Personally, when I saw this work I felt the weight of its ideas, and it left me questioning how we live in this era.

    Honestly, art projects like this make us more aware of, and more thoughtful about, our relationship with our surroundings. Who knows, maybe we'll discover the impact art has on our daily lives!

    https://www.designboom.com/art/adrian-villar-rojas-seoul-art-sonje-center-decomposing-ecosystem-language-enemy-09-06-2025/

    #Art #Ecosystem #Renewal #Seoul
    adrián villar rojas transforms seoul’s art sonje center into a decomposing ecosystem
    www.designboom.com
    the language of the enemy overtakes all four floors of the museum, including corridors, stairwells, restrooms, and peripheral spaces.
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
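    At its core, a visual AI agent of the kind described above reduces to code that watches incoming frames for events. The sketch below is purely illustrative and is not NVIDIA's Isaac/Metropolis/Holoscan stack: it assumes frames arrive as NumPy grayscale arrays and uses simple frame differencing to flag activity inside a monitored zone (all names and thresholds are made up for the example).

```python
import numpy as np

def motion_flags(frames, zone, threshold=25.0):
    """Flag frames where mean pixel change inside a region exceeds a threshold.

    frames: iterable of 2D grayscale arrays (H x W), e.g. decoded camera frames
    zone:   (row_slice, col_slice) region of interest, e.g. a restricted area
    """
    flags = []
    prev = None
    for frame in frames:
        if prev is not None:
            # mean absolute difference inside the zone between consecutive frames
            diff = np.abs(frame[zone].astype(float) - prev[zone].astype(float))
            flags.append(bool(diff.mean() > threshold))
        prev = frame
    return flags

# synthetic example: a static scene, then an "intrusion" into the zone
h = w = 64
zone = (slice(0, 32), slice(0, 32))
static = np.zeros((h, w), dtype=np.uint8)
intrusion = static.copy()
intrusion[:32, :32] = 200  # bright object enters the monitored region
print(motion_flags([static, static, intrusion], zone))  # [False, True]
```

    A production system would replace the differencing step with a detection or vision-language model running on the module, but the surrounding loop structure is the same.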
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they’ll improve the performance of AI models including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
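    "Sensor fusion" here means combining complementary sensors' strengths. The sketch below is not MAC-VO or anything from the AirLab; it is the textbook complementary filter, shown only to illustrate the idea of fusing a fast-but-drifting rate sensor with a slow-but-absolute one (all names and values are illustrative).

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse a fast-but-drifting rate sensor with a slow-but-absolute one.

    gyro_rates:   angular velocity samples (rad/s); integrating them is smooth
                  but accumulates drift
    accel_angles: absolute angle estimates (e.g. from an accelerometer); no
                  drift, but noisy
    alpha:        trust placed in the integrated gyro path each step
    """
    angle = accel_angles[0]  # initialize from the absolute sensor
    estimates = []
    for rate, accel in zip(gyro_rates, accel_angles):
        # blend the integrated gyro path with the absolute measurement
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel
        estimates.append(angle)
    return estimates

# zero rotation with a clean absolute sensor: the estimate stays put
print(complementary_filter([0.0] * 3, [1.0] * 3))  # stays at (essentially) 1.0
```

    Real perception stacks fuse far richer signals (camera features, IMU, lidar), but the pattern of weighting sources by their error characteristics carries over.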
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc. (ADI), e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
    #nvidia #jetson #thor #unlocks #realtime
    NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI
    blogs.nvidia.com
  • Gaming Meets Streaming: Inside the Shift

    After a long, busy day, you boot up your gaming device but don’t quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?

    This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, or Kick have transformed the gaming experience — turning gameplay into shared moments where gamers broadcast in real-time while viewers watch, chat, learn, and discover new titles.

    What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming – and what does it mean for players, creators, and the industry alike? Let’s find out!

    Why Do Gamers Love Streaming?
    So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn’t just one thing – it’s a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let’s break it down.

    Entertainment at Your Own Pace
    Sometimes, you just want to relax. Maybe you’re too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative – the fun of gaming without needing to press a single button. Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.

    Learning and Skill Development
    Streaming isn’t just for laughs – it’s also one of the best ways to level up your own gameplay. Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game’s mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, builds, or even to understand if a game suits their playstyle before buying it. Think of it as education, but way more fun.

    Social Connection and Community
    One of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn’t like watching TV – it’s like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real-time, tossing emotes into the chaos, and getting shoutouts from the streamer – it all creates a sense of belonging. For many, it’s a go-to social space where friendships, inside jokes, and even fandoms grow.

    Discovery of New Games and Trends
    Ever found a game you now love just because you saw a streamer play it? You’re not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams.

    Together, these draws have sparked a whole new kind of culture – gaming communities with their own languages, celebrities, and shared rituals.

    Inside Streaming Culture
    Streaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch – they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever.
    Anyone with personality and dedication can build a loyal following and become a cultural influencer.

    If you've ever watched a Twitch stream, you've witnessed chat culture in action – a chaotic river of emotes, inside jokes, and reactions that somehow make perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.

    For many viewers, streams have become part of their daily routine – tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences — they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.

    How Streamers Are Reshaping the Gaming Industry
    While players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.

    The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence. A single popular creator playing your game can generate millions of views and drive massive sales overnight – just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review. Day-one streaming success has become make-or-break for many titles.

    Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres — party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.

    The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers – drawing massive online audiences and blurring the line between competition and content.

    Video Game Streaming in Numbers
    While it’s easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story. From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here’s a look at the data driving the movement.

    Market Size & Growth
    In 2025, the global Games Live Streaming market is projected to generate billions in revenue. By 2030, that figure is expected to keep climbing at an annual growth rate of 4.32%. The average revenue per user in 2025 shows consistent monetization across platforms. China remains the single largest market, expected to bring in billions this year alone.
    Gaming Meets Streaming: Inside the Shift
    80.lv
    After a long, busy day, you boot up your gaming device but don’t quite feel like diving into an intense session. Instead, you open a broadcast of one of your favorite streamers and spend the evening laughing at commentary, reacting to unexpected moments, and just enjoying your time with fellow gamers. Sounds familiar?
    This everyday scenario perfectly captures the way live streaming platforms like Twitch, YouTube Gaming, or Kick have transformed the gaming experience — turning gameplay into shared moments where gamers broadcast in real-time while viewers watch, chat, learn, and discover new titles.
    What started as friends sharing gameplay clips has exploded into a multi-billion-dollar ecosystem where streamers are popular creators, viewers build communities around shared experiences, and watching games has become as popular as playing them. But how did streaming become such a powerful force in gaming – and what does it mean for players, creators, and the industry alike? Let’s find out!
    Why Do Gamers Love Streaming?
    So why are millions of gamers spending hours every week watching others play instead of jumping into a game themselves? The answer isn’t just one thing – it’s a mix of entertainment, learning, connection, and discovery that makes live streaming uniquely compelling. Let’s break it down.
    Entertainment at Your Own Pace
    Sometimes, you just want to relax. Maybe you’re too mentally drained to queue up for ranked matches or start that complex RPG quest. Streaming offers the perfect low-effort alternative – the fun of gaming without needing to press a single button. Whether it's high-stakes gameplay, hilarious commentary, or unpredictable in-game chaos, streams let you enjoy all the excitement while kicking back on the couch, grabbing a snack, or chatting in the background.
    Learning and Skill Development
    Streaming isn’t just for laughs – it’s also one of the best ways to level up your own gameplay. Watching a skilled streamer handle a tricky boss fight, execute high-level strategies, or master a game’s mechanics can teach you far more than a dry tutorial ever could. Many gamers tune in specifically to study routes, tactics, builds, or even to understand if a game suits their playstyle before buying it. Think of it as education, but way more fun.
    Social Connection and Community
    One of the most powerful draws of live streaming is the sense of community. Jumping into a stream isn’t like watching TV – it’s like entering a room full of people who love the same games you do. Chatting with fellow viewers, sharing reactions in real-time, tossing emotes into the chaos, and getting shoutouts from the streamer – it all creates a sense of belonging. For many, it’s a go-to social space where friendships, inside jokes, and even fandoms grow.
    Discovery of New Games and Trends
    Ever found a game you now love just because you saw a streamer play it? You’re not alone. Streaming has become a major discovery engine in gaming. Watching creators try new releases, revisit cult classics, or spotlight lesser-known indies helps players find titles they might never encounter on their own. Sometimes, entire genres or games blow up because of a few well-timed streams (Among Us, Vampire Survivors, Only Up! – all made big by streamers).
    Together, these draws have sparked a whole new kind of culture – gaming communities with their own languages, celebrities, and shared rituals.
    Inside Streaming Culture
    Streaming has created something unique in gaming: genuine relationships between creators and audiences who've never met. When Asmongold reacts to the latest releases or penguinz0 delivers his signature deadpan commentary, millions of viewers don't just watch – they feel like they're hanging out with a friend. These streamers have become trusted voices whose opinions carry real weight, making gaming fame more accessible than ever. Anyone with personality and dedication can build a loyal following and become a cultural influencer.
    If you've ever watched a Twitch stream, you've witnessed chat culture in action – a chaotic river of emotes, inside jokes, and reactions that somehow make perfect sense to regulars. "KEKW" expresses laughter, "Poggers" shows excitement, and memes spread like wildfire across communities. The chat itself becomes entertainment, with viewers competing to land the perfect reaction at just the right moment. These expressions often escape their stream origins, becoming part of the broader gaming vocabulary.
    For many viewers, streams have become part of their daily routine – tuning in at the same time, celebrating milestones, or witnessing historic gaming moments together. When a streamer finally beats that impossible boss, the entire community shares in the victory. These aren't just individual entertainment experiences — they're collective memories where thousands can say "I was there when it happened," creating communities that extend far beyond gaming itself.
    How Streamers Are Reshaping the Gaming Industry
    While players tune in for fun and connection, behind the scenes, streaming is quietly reshaping how the gaming industry approaches everything from marketing to game design. What started as casual gameplay broadcasts is now influencing major decisions across studios and publishers.
    The New Marketing Powerhouse. Traditional game reviews and advertising have taken a backseat to streamer influence. A single popular creator playing your game can generate millions of views and drive massive sales overnight – just look at how Among Us exploded after a few key streamers discovered it, or how Fall Guys became a phenomenon through streaming momentum. Publishers now prioritize getting their games into the hands of influential streamers on launch day, knowing that authentic gameplay footage and reactions carry more weight than any trailer or review. Day-one streaming success has become make-or-break for many titles.
    Designing for the Stream. Developers are now creating games with streaming in mind. Modern titles include built-in streaming tools, spectator-friendly interfaces, and features that encourage viewer interaction like chat integration and voting systems. Games are designed to be visually clear and exciting to watch, not just play. Some developers even create "streamer modes" that remove copyrighted music or add special features for streamers. The rise of streaming has birthed entirely new genres — party games, reaction-heavy horror titles, and social deduction games all thrive because they're inherently entertaining to watch.
    The Creator Economy Boom. Streaming has created entirely new career paths and revenue streams within gaming. Successful streamers earn through donations, subscriptions, brand partnerships, and revenue sharing from platform-specific features like Twitch bits or YouTube Super Chat. This has spawned a massive creator economy where top streamers command six-figure sponsorship deals, while publishers allocate significant budgets to influencer partnerships rather than traditional advertising. The rise of streaming has also fueled the growth of esports, where pro players double as entertainers – drawing massive online audiences and blurring the line between competition and content.
    Video Game Streaming in Numbers
    While it’s easy to feel the impact of streaming in daily gaming life, the numbers behind the trend tell an even more powerful story. From billions in revenue to global shifts in viewer behavior, game streaming has grown into a massive industry reshaping how we play, watch, and connect. Here’s a look at the data driving the movement.
    Market Size & Growth
    In 2025, the global Games Live Streaming market is projected to generate $15.32 billion in revenue. By 2030, that figure is expected to reach $18.92 billion, growing at an annual rate of 4.32%. The average revenue per user (ARPU) in 2025 stands at $10.51, showing consistent monetization across platforms. China remains the single largest market, expected to bring in $2.92 billion this year alone.
    Source: Statista Market Insights, 2025
    Viewership & Daily Habits
    The number of users in the live game streaming market is forecast to hit 1.8 billion by 2030, with user penetration rising from 18.6% in 2025 to 22.6% by the end of the decade. In 2023, average daily time spent watching game streams rose to 2.5 hours per user, up 12% year-over-year — a clear sign of streaming becoming part of gamers’ daily routines.
    Sources: Statista Market Insights, 2025; SNS Insider, 2024
    What People Are Watching
    The most-watched games on Twitch include League of Legends, GTA V, and Counter-Strike — all regularly topping charts for both viewers and streamers. When it comes to creators, the most-streamed games are Fortnite, Valorant, and Call of Duty: Warzone, showing a strong overlap between what streamers love to broadcast and what audiences enjoy watching. In Q1 2024, Twitch users spent over 249 million hours watching new game releases, while total gaming-related content reached around 3.3 billion hours.
    Sources: SullyGnome, 2025; Statista, 2025
    Global Trends & Regional Platforms
    China’s local platforms like Huya (31M MAU) and Douyu (26.6M MAU) remain key players in the domestic market. In South Korea, following Twitch’s 2023 exit, local services like AfreecaTV and newcomer Chzzk have positioned themselves as alternatives. Meanwhile, Japan and Europe continue to see steady engagement driven by strong gaming scenes and dedicated fan communities.
    Source: Statista, 2025
    Event Livestreaming Hits New Highs
    Nintendo Direct was the most-watched gaming showcase in 2024, with an average minute audience of 2.6 million. The 2024 Streamer Awards drew over 645,000 peak viewers, highlighting how creator-focused events now rival traditional game showcases.
    Source: Statista, 2025
    As game streaming continues to evolve, its role in the broader gaming ecosystem is becoming clearer. It hasn’t replaced traditional gameplay – instead, it’s added a new dimension to how people engage with games, offering a space for connection, discovery, and commentary. For players, creators, and industry leaders alike, streaming now sits alongside playing as a core part of the modern gaming experience – one that continues to grow and shift with the industry itself.
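    The market-size figures above are internally consistent: compounding the 2025 base at the stated growth rate lands on the 2030 projection. A quick sanity check:

```python
# Compound $15.32B forward five years at 4.32% per year; the result
# should land near the quoted $18.92B projection for 2030.
def project(revenue_bn: float, cagr: float, years: int) -> float:
    """Compound a starting revenue forward by `years` at rate `cagr`."""
    return revenue_bn * (1 + cagr) ** years

print(round(project(15.32, 0.0432, 5), 2))  # ≈ 18.93, close to the quoted $18.92B
```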
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
    The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and unless the way the pipes connect is rethought, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and you gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

    Training the modern large language models (LLMs) behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
    These systems rely on distributed computing, splitting massive calculations across nodes, where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce” and “all-to-all”.
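    To make the merge step concrete, here is a minimal single-process sketch of what an all-reduce computes: every worker contributes its local gradients, and every worker ends up holding the element-wise sum. (Real systems do this across machines with libraries like NCCL or MPI; the function below is purely illustrative.)

```python
# Toy model of the "all-reduce" collective: each worker holds a local
# gradient vector; afterwards, every worker holds the element-wise sum.
def all_reduce(worker_grads: list[list[float]]) -> list[list[float]]:
    summed = [sum(vals) for vals in zip(*worker_grads)]
    return [summed[:] for _ in worker_grads]  # same result on every worker

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 workers, 2 parameters each
print(all_reduce(grads))  # every worker now holds [9.0, 12.0]
```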
    These processes are sensitive to the speed and responsiveness of the network — what engineers call latency and bandwidth — and when either falls short, training stalls.
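    A standard way to reason about that sensitivity is the alpha-beta cost model: moving an n-byte message costs roughly a fixed latency term plus size divided by bandwidth. Small, frequent collectives are latency-bound; huge gradient exchanges are bandwidth-bound. A sketch with illustrative (not measured) numbers:

```python
def transfer_time_us(n_bytes: int, alpha_us: float, bytes_per_us: float) -> float:
    """Alpha-beta model: fixed latency plus payload size over bandwidth."""
    return alpha_us + n_bytes / bytes_per_us

# Hypothetical link: 2 us latency, 50 GB/s = 50,000 bytes per microsecond.
small = transfer_time_us(4_096, 2.0, 50_000)          # latency-dominated
large = transfer_time_us(1_000_000_000, 2.0, 50_000)  # bandwidth-dominated
print(f"4 KB: {small:.2f} us, 1 GB: {large:.0f} us")
```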
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable; now they’re a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years.
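    A back-of-envelope way to see where that doubling comes from (a simplification of ours, not NVIDIA's own accounting): in a host-based ring all-reduce, each GPU must inject nearly twice the buffer size onto the wire, while with switch-side aggregation it injects the buffer once and receives the reduced result.

```python
# Bytes each GPU sends for an all-reduce over an n-byte buffer.
def ring_bytes_sent(p: int, n: int) -> float:
    # Ring all-reduce = reduce-scatter + all-gather; each phase
    # forwards (p - 1) / p of the buffer.
    return 2 * (p - 1) / p * n

def in_network_bytes_sent(n: int) -> float:
    # Switch aggregates in-network: each GPU sends its buffer once.
    return float(n)

p, n = 1024, 1_000_000_000  # 1024 GPUs, 1 GB gradient buffer
print(round(ring_bytes_sent(p, n) / in_network_bytes_sent(n), 3))  # 1.998
```

The ratio approaches 2x as the GPU count grows, which is the intuition behind the "doubling data bandwidth for reductions" claim.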
    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s largest AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
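    One crude way to read those throughput figures — treating delivered throughput as effective bandwidth on a communication-bound phase — is as a direct speedup ratio:

```python
# Ratio of effective bandwidth at the quoted throughput levels.
spectrum_x_eff = 0.95  # Spectrum-X delivered throughput (from the text)
standard_eff = 0.60    # standard Ethernet fabric (from the text)
print(round(spectrum_x_eff / standard_eff, 2))  # ~1.58x the delivered bandwidth
```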
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU.
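    The aggregate figure lines up with per-GPU NVLink bandwidth on Blackwell — roughly 1.8 TB/s per GPU (an assumption here, consistent with the totals in the text):

```python
# 72 GPUs x ~1.8 TB/s NVLink bandwidth per GPU ≈ the quoted 130 TB/s.
gpus = 72
per_gpu_tb_s = 1.8  # assumed NVLink bandwidth per Blackwell GPU, in TB/s
print(round(gpus * per_gpu_tb_s, 1))  # 129.6, i.e. ~130 TB/s aggregate
```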
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.
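    The quoted totals are straightforward port arithmetic — port count times 800 Gb/s per port, converted to Tb/s:

```python
# Aggregate switch bandwidth in Tb/s from port count x per-port rate.
def aggregate_tb_s(ports: int, gb_s_per_port: int = 800) -> float:
    return ports * gb_s_per_port / 1000

print(aggregate_tb_s(128), aggregate_tb_s(512))  # 102.4 409.6
```

These match the stated range of "100 Tb/s to 400 Tb/s" for the 128- and 512-port configurations.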

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernetspecifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing fees and vendor lock‑in and allows deep customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
     
     

     
    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations. Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game. This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction. The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and if there isn’t rethinking on how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and gain extraordinary performance. With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out. 
The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed. This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack. The Data Center Is the Computer Training the modern large language modelsbehind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation. These systems rely on distributed computing, splitting massive calculations across nodes, where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce”and “all-to-all”. These processes are susceptible to the speed and responsiveness of the network — what engineers call latencyand bandwidth— causing stalls in training. For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users. Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Tolerating jitter and inconsistent delivery were once acceptable. Now, it’s a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations. 
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.

    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years.

    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.

    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.

    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation.
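An aside on the in-network reductions mentioned above: a back-of-the-envelope comparison of per-node wire traffic shows why aggregating in the switch roughly doubles effective bandwidth. This is a simplified sketch (uniform links, no headers or overlap; the function names and workload numbers are illustrative, not NVIDIA's model of SHARP):

```python
# Per-node bytes transmitted for an all-reduce of one gradient buffer.
# Simplified model: a host-based ring versus an idealized in-network
# reduction tree that aggregates in the switch and multicasts the sum.

def ring_bytes_sent(grad_bytes, n):
    # reduce-scatter + all-gather each move (n - 1) / n of the buffer
    return 2 * (n - 1) / n * grad_bytes

def in_network_bytes_sent(grad_bytes):
    # each node transmits its buffer exactly once; the switch does the rest
    return grad_bytes

GIB = 1024 ** 3
ring = ring_bytes_sent(4 * GIB, 1024)      # 4 GiB of gradients, 1,024 nodes
tree = in_network_bytes_sent(4 * GIB)
print(f"ring: {ring / GIB:.2f} GiB/node, in-network: {tree / GIB:.2f} GiB/node")
# For large n the ring approaches 2x the buffer size per node, so removing
# that factor is what "doubling data bandwidth for reductions" refers to.
```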
    The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale. Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.

    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.

    A Portfolio for Scale‑Up and Scale‑Out

    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.

    NVLink: Scale Up Inside the Rack

    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU.

    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics.
    NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows deep customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories

    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
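The headline bandwidth figures quoted throughout are internally consistent, which a few lines of arithmetic confirm (treating the published aggregates as rounded, and assuming the 130 TB/s is the total across the 72-GPU NVLink domain):

```python
# Sanity-check the quoted figures with simple arithmetic.

# NVLink: 130 TB/s aggregate across a 72-GPU GB300 NVL72 domain.
per_gpu_tbps = 130 / 72
print(f"~{per_gpu_tbps:.2f} TB/s of NVLink bandwidth per GPU")  # ~1.81

# Photonics switches: 128 to 512 ports at 800 Gb/s each.
for ports in (128, 512):
    total_tbps = ports * 800 / 1000  # Gb/s -> Tb/s
    print(f"{ports} ports x 800 Gb/s = {total_tbps:.1f} Tb/s")
# 102.4 and 409.6 Tb/s, matching the quoted ~100-400 Tb/s range.
```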
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
    Gearing Up for the Gigawatt Data Center Age
    blogs.nvidia.com
  • Hey everyone, welcome to a new edition of Clojure Deref!

    Today I wanted to share a quick summary of a new post covering the latest news and updates in the Clojure world. It gathers links, videos, and interesting topics from across the ecosystem. From podcasts to new tools, there is plenty of exciting content!

    From my own experience with Clojure, I have seen how this language can open new doors in software development. Every time I discover a new tool, I feel an even stronger drive to learn and build.

    Hopefully you will find it useful and get as excited as we did. Stay tuned!

    https://clojure.org/news/2023/09/22/deref

    #Clojure #Development #Programming #Tech #Innovation
    clojure.org
    Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS). Thanks to Anton Fonarev for link aggregation. Podcasts and videos 48: Biff with Jacob O’Bryant - The REPL Joyful Mobile Devel
  • Hey everyone! Today I have special news for Clojure fans!

    The post, titled "Clojure Deref", walks you through the latest news and updates in the Clojure world. It features a collection of podcasts, videos, and fresh articles for every developer interested in this ecosystem. From new experiments in physics simulation to updates on libraries and tools, it is all there!

    Personally, I always enjoy trying new things in Clojure; last time I tried one of the new tools and it was great! If you are interested, do not miss the chance to enjoy learning and building!

    This kind of content opens new doors in programming and pushes you to stay on the cutting edge.

    https://clojure.org/news/2023/07/07/deref
    #Clojure #Programming #Innovation #TechNews #Development
    clojure.org
    Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS) Podcasts and videos Live-editable Physics Simulation - Sam Ritchie FlowStorm searching and following values - Juan Monetta
  • Hi everyone!

    Today I wanted to share the new "Clojure Deref", covering the Clojure ecosystem. It is a weekly roundup of all the important news and links: tutorials, podcasts, and new tools.

    There are plenty of interesting topics here, from learning recursion to updates on long-running projects. Among the new tools is muuntaja, a library that makes working with HTTP APIs easier.

    Personally, one thing I like to say: when you learn something new in programming, always try to apply it in small projects! That is how the knowledge really sticks.

    Go check out the article and dig in; new ideas are waiting!

    https://clojure.org/news/2024/03/22/deref

    #Clojure #Development #Programming #TechNews #Computing
    clojure.org
    Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem (feed: RSS). Thanks to Anton Fonarev for link aggregation. Podcasts and videos Bringing Real-Time AI to Phone Calls using core.async (by Ovi Stoic
  • Life is full of small details, and if we live them deeply, we discover whole new worlds!

    Today I am bringing you "Clojure Deref", a weekly roundup of everything Clojure. The post features a collection of podcasts, videos, and assorted news, like a tour of the latest developments in the ecosystem: from the debut of Nubank's standout podcast to workshops on immutable data structures.

    Personally, I am always eager for the latest innovations in this space. Every time we discover a new tool or library, it feels like another step toward a deeper understanding of the language. The hope for innovation continues, and from here we can begin our journey of growth.

    This content encourages us to think about how we can get more out of new technologies and apply them in our projects.

    https://clojure.org/news/2023/04/10/deref
    #Clojure #Programming #Technology #Innovation #Development
    clojure.org
    Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS) Podcasts and videos A stylish debut: Nubank’s new tech podcast invites Vitor Olivier, the company CTO - The Hammock by Buildi
  • Hey everyone, who has heard of Faraday Future? They just dropped some truly out-of-the-ordinary news!

    The American company has launched a new strategy called “EAI + Crypto”, a revolutionary system that merges artificial intelligence with Web3. Simply put, we will have smart cars that handle cryptocurrencies, like something out of a science-fiction film!

    Personally, I see these developments as an important step toward a better future of mobility: connecting our daily lives to modern technology, the way we already use smartphones for everything.

    So imagine having cars that interact with our digital communities. That would open up whole new horizons!

    https://www.globenewswire.com/news-release/2025/08/18/3134677/0/en/Faraday-Future-Launches-its-EAI-Crypto-Dual-Flywheel-Dual-Bridge-Ecosystem-Strategy-as-a-Pioneer-in-AI-mobility
    www.globenewswire.com
    PEBBLE BEACH, Calif., Aug. 17, 2025 (GLOBE NEWSWIRE) -- Faraday Future Intelligent Electric Inc. (NASDAQ: FFAI) (“Faraday Future”, “FF” or “Company”), a California-based global shared intelligent electric mobility ecosystem company, announced that
  • As usual, there is news in Clojure!

    Today's article covers "Clojure Deref", the weekly news and updates roundup for the Clojure ecosystem. Interestingly, people often complain that Clojure jobs are hard to find, but the truth is that opportunities exist; sometimes there is just a mismatch in experience or geography.

    Personally, my experience with Clojure has been great, but job opportunities really do need more visibility. If you love programming, this is the time to step up the search!

    Stay tuned, updates are coming every day!

    https://clojure.org/news/2021/06/25/deref
    #Clojure #Programming #Development #TechNews #Jobs
    clojure.org
    Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS) Highlights It is common to see complaints that both Clojure jobs and Clojure developers are hard to find. The real truth is: bo
  • Hey friends! Has anyone heard of Amperity? They have achieved great success thanks to Clojure and ClojureScript, and their valuation has now reached a billion dollars! News like this gives some hope to anyone looking to build their own project.

    This week's Clojure Deref brings a collection of exciting links and news from the Clojure ecosystem: from podcasts on the power of data to new tools that will make our work easier. Honestly, there is a lot of creativity in this community; hats off to them!

    Check out the article; you might find new ideas that get you thinking about your own projects.

    https://clojure.org/news/2021/07/16/deref
    #Clojure #TechSuccess #Innovation #Development #Community
    clojure.org
    Welcome to the Clojure Deref! This is a weekly link/news roundup for the Clojure ecosystem. (@ClojureDeref RSS) Highlights Big congrats to Amperity on their Series D financing and valuation of $1B, making them another "unicorn" built subst
  • NVIDIA, National Science Foundation Support Ai2 Development of Open AI Models to Drive U.S. Scientific Leadership

    NVIDIA is partnering with the U.S. National Science Foundation (NSF) to create an AI system that supports the development of multimodal language models for advancing scientific research in the United States.
    The partnership supports the NSF Mid-Scale Research Infrastructure project, called Open Multimodal AI Infrastructure to Accelerate Science (OMAI).
    “Bringing AI into scientific research has been a game changer,” said Brian Stone, performing the duties of the NSF director. “NSF is proud to partner with NVIDIA to equip America’s scientists with the tools to accelerate breakthroughs. These investments are not just about enabling innovation; they are about securing U.S. global leadership in science and technology and tackling challenges once thought impossible.”
    OMAI, part of the work of the Allen Institute for AI, or Ai2, aims to build a national fully open AI ecosystem to drive scientific discovery through AI, while also advancing the science of AI itself.
    NVIDIA’s support of OMAI includes providing NVIDIA HGX B300 systems — state-of-the-art AI infrastructure built to accelerate model training and inference with exceptional efficiency — along with the NVIDIA AI Enterprise software platform, empowering OMAI to transform massive datasets into actionable intelligence and breakthrough innovations.
    NVIDIA HGX B300 systems are built with NVIDIA Blackwell Ultra GPUs and feature industry-leading high-bandwidth memory and interconnect technologies to deliver groundbreaking acceleration, scalability and efficiency to run the world’s largest models and most demanding workloads.
    “AI is the engine of modern science — and large, open models for America’s researchers will ignite the next industrial revolution,” said Jensen Huang, founder and CEO of NVIDIA. “In collaboration with NSF and Ai2, we’re accelerating innovation with state-of-the-art infrastructure that empowers U.S. scientists to generate limitless intelligence, making it America’s most powerful and renewable resource.”
    The contributions will support research teams from the University of Washington, the University of Hawaii at Hilo, the University of New Hampshire and the University of New Mexico. The public-private partnership investment in U.S. technology aligns with recent initiatives outlined by the White House AI Action Plan, which supports America’s global AI leadership.
    “The models are part of the national research infrastructure — but we can’t build the models without compute, and that’s why NVIDIA is so important to this project,” said Noah Smith, senior director of natural language processing research at Ai2.
    Opening Language Models to Advance American Researchers 
    Driving some of the fastest-growing applications in history, today’s large language models (LLMs) have many billions of parameters, or internal weights and biases learned in training. LLMs are trained on trillions of words, and multimodal LLMs can ingest images, graphs, tables and more.
    But the power of these so-called frontier models can sometimes be out of reach for scientific research when the parameters, training data, code and documentation are not openly available.
    “With the model training data in hand, you have the opportunity to trace back to particular training instances similar to a response, and also more systematically study how emerging behaviors relate to the training data,” said Smith.
    NVIDIA’s partnership with NSF to support Ai2’s OMAI initiative provides fully open model access to data, open-source data interrogation tools to help refine datasets, as well as documentation and training for early-career researchers — advancing U.S. global leadership in science and engineering.
    The Ai2 project — supported by NVIDIA technologies — pledges to make the software and models available at low or zero cost to researchers, similar to open-source code repositories and science-oriented digital libraries. It’s in line with Ai2’s previous work in creating fully open language models and multimodal models, maximizing access.
    Driving U.S. Global Leadership in Science and Engineering 
    “Winning the AI Race: America’s AI Action Plan” was announced in July by the White House, supported with executive orders to accelerate federal permitting of data center infrastructure and promote exportation of the American AI technology stack.
    The OMAI initiative aligns with White House AI Action Plan priorities, emphasizing the acceleration of AI-enabled science and supporting the creation of leading open models to enhance America’s global AI leadership in academic research and education.