• NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvements are expected from FP4 precision and speculative decoding optimizations.
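    Speculative decoding, mentioned above as a future optimization, can be illustrated with a toy sketch: a fast draft model proposes several tokens, and the larger target model verifies them in one pass, keeping the longest agreeing prefix plus one corrected token. The two "models" below are deterministic lookup tables standing in for real networks; nothing here reflects an actual Jetson API.

```python
def draft_model(token):
    # Fast, approximate next-token predictor (toy lookup table).
    table = {"the": "robot", "robot": "picks", "picks": "up", "up": "a", "a": "box"}
    return table.get(token, "<eos>")

def target_model(token):
    # Slower, authoritative predictor; disagrees with the draft at "up".
    table = {"the": "robot", "robot": "picks", "picks": "up", "up": "the"}
    return table.get(token, "<eos>")

def speculative_step(prompt_token, k=4):
    """Draft k tokens cheaply, then verify them with the target model.

    Returns the longest prefix of the draft that the target agrees with,
    plus one corrected token from the target at the first disagreement."""
    # 1) Draft phase: propose k tokens autoregressively with the cheap model.
    drafted, tok = [], prompt_token
    for _ in range(k):
        tok = draft_model(tok)
        drafted.append(tok)
    # 2) Verify phase: the target checks each drafted token in turn.
    accepted, tok = [], prompt_token
    for d in drafted:
        t = target_model(tok)
        if t != d:                 # first disagreement: keep the target's token
            accepted.append(t)
            return accepted
        accepted.append(d)
        tok = d
    return accepted

print(speculative_step("the"))     # ['robot', 'picks', 'up', 'the']
```

    The payoff is that several tokens are accepted per invocation of the expensive model, which is why the technique improves generation throughput on compute-bound edge devices.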
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
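    At its core, a visual safety agent like the one described above reduces to a loop that pulls frames and runs a detector over a region of interest. The sketch below uses synthetic frames and a stub intensity-threshold detector in place of a real camera stream and accelerated model; the function names and zone format are illustrative only.

```python
import numpy as np

def detect_intrusion(frame, danger_zone):
    """Stub detector: flags any bright blob inside the danger zone.

    A real deployment would run an accelerated detection model here;
    this stand-in simply thresholds pixel intensity."""
    x0, y0, x1, y1 = danger_zone
    region = frame[y0:y1, x0:x1]
    return bool((region > 200).any())

def monitor(frames, danger_zone):
    """Core loop of a visual safety agent: check each frame, record alerts."""
    alerts = []
    for i, frame in enumerate(frames):
        if detect_intrusion(frame, danger_zone):
            alerts.append(i)       # frame index where an intrusion was seen
    return alerts

# Two synthetic 64x64 grayscale frames: the second contains a bright blob
# inside the danger zone (columns 10..20, rows 10..20).
empty = np.zeros((64, 64), dtype=np.uint8)
intrusion = empty.copy()
intrusion[12:18, 12:18] = 255

print(monitor([empty, intrusion], danger_zone=(10, 10, 20, 20)))   # [1]
```

    The latency of the detector call inside this loop is what bounds the usable camera frame rate, which is why on-device inference performance matters for these applications.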
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they’ll improve the performance of AI models, including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
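    The sensor fusion mentioned above can be illustrated with a classic complementary filter, which blends a drift-free but noisy accelerometer angle with a smooth but drifting gyroscope integration. The data below is synthetic and the filter is a textbook technique, not the AirLab's actual pipeline.

```python
import math

def complementary_filter(accel_angles, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer angle estimates with gyroscope rate readings.

    alpha weights the gyro integration (smooth, but drifts over time); the
    remaining (1 - alpha) continually pulls the estimate toward the
    accelerometer angle (noisy, but drift-free)."""
    angle = accel_angles[0]
    fused = [angle]
    for acc, rate in zip(accel_angles[1:], gyro_rates[1:]):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        fused.append(angle)
    return fused

# Synthetic data: the true angle is constant at 0.5 rad. The gyro reads a
# small constant bias (pure drift); the accelerometer is unbiased but noisy.
n, dt = 200, 0.01
accel = [0.5 + 0.05 * math.sin(3.7 * i) for i in range(n)]   # noisy, no drift
gyro = [0.02] * n                                            # bias-only drift

fused = complementary_filter(accel, gyro, dt)
print(f"final fused angle: {fused[-1]:.3f}")   # stays near the true 0.5 rad
```

    Integrating the gyro alone would drift without bound, while the accelerometer alone would be too noisy to use directly; the filter keeps the best property of each sensor.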
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc. (ADI), e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
  • RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer

    Japan is once again building a landmark high-performance computing system — not simply by chasing speed, but by rethinking how technology can best serve the nation’s most urgent scientific needs.
    At the FugakuNEXT International Initiative Launch Ceremony held in Tokyo on Aug. 22, leaders from RIKEN, Japan’s top research institute, announced the start of an international collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the world-renowned supercomputer, Fugaku.
    Awarded early in the process, the contract enables the partners to work side by side in shaping the system’s architecture to address Japan’s most critical research priorities — from earth systems modeling and disaster resilience to drug discovery and advanced manufacturing.
    More than an upgrade, the effort will highlight Japan’s embrace of modern AI and showcase Japanese innovations that can be harnessed by researchers and enterprises across the globe.
    The ceremony featured remarks from the initiative’s leaders, RIKEN President Makoto Gonokami and Satoshi Matsuoka, director of the RIKEN Center for Computational Science and one of Japan’s most respected high-performance computing architects.
    Fujitsu Chief Technology Officer Vivek Mahajan attended, emphasizing the company’s role in advancing Japan’s computing capabilities.
    Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, attended in person as well to discuss the collaborative design approach and how the resulting platform will serve as a foundation for innovation well into the next decade.
    Momentum has been building. When NVIDIA founder and CEO Jensen Huang touched down in Tokyo last year, he called on Japan to seize the moment — to put NVIDIA’s latest technologies to work building its own AI, on its own soil, with its own infrastructure.
    FugakuNEXT answers that call, drawing on NVIDIA’s full software stack — from NVIDIA CUDA-X libraries such as NVIDIA cuQuantum for quantum simulation, RAPIDS for data science, NVIDIA TensorRT for high-performance inference and NVIDIA NeMo for large language model development, to other domain-specific software development kits tailored for science and industry.
    Innovations pioneered on FugakuNEXT could become blueprints for the world.
    What’s Inside
    FugakuNEXT will be a hybrid AI-HPC system, combining simulation and AI workloads.
    It will feature FUJITSU-MONAKA-X CPUs, which can be paired with NVIDIA technologies using NVLink Fusion, new silicon enabling high-bandwidth connections between Fujitsu’s CPUs and NVIDIA’s architecture.
    The system will be built for speed, scale and efficiency.
    What It Will Do
    FugakuNEXT will support a wide range of applications — such as automating hypothesis generation, code creation and experiment simulation.

    Scientific research: Accelerating simulations with surrogate models and physics-informed neural networks.
    Manufacturing: Using AI to learn from simulations to generate efficient and aesthetically pleasing designs faster than ever before.
    Earth systems modeling: Aiding disaster preparedness and prediction for earthquakes, severe weather and more.
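    The surrogate-model approach in the scientific research item above can be sketched in a few lines: run an expensive simulator at a handful of design points, fit a cheap model to those samples, then query the cheap model thereafter. The "simulator" here is a toy function and the surrogate a polynomial fit, standing in for real HPC codes and neural surrogates.

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly HPC simulation (imagine one solver run per call)."""
    return np.sin(x) + 0.1 * x**2

# 1) Sample the expensive simulator at a small number of design points.
train_x = np.linspace(0.0, 3.0, 8)
train_y = expensive_simulation(train_x)

# 2) Fit a cheap surrogate (here, a cubic polynomial) to those samples.
coeffs = np.polyfit(train_x, train_y, deg=3)
surrogate = np.poly1d(coeffs)

# 3) Query the surrogate instead of the simulator, and check its accuracy.
test_x = np.linspace(0.0, 3.0, 100)
max_err = np.max(np.abs(surrogate(test_x) - expensive_simulation(test_x)))
print(f"max surrogate error on [0, 3]: {max_err:.4f}")
```

    Once fitted, the surrogate answers queries at negligible cost, which is what lets AI-trained surrogates stand in for full simulations inside design loops.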

    RIKEN, Fujitsu and NVIDIA will collaborate on software development, including tools for mixed-precision computing, continuous benchmarking and performance optimization.
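    Mixed-precision computing, one of the co-design targets above, trades storage precision for speed while keeping accumulations in higher precision. The numpy sketch below shows why the accumulation precision matters; it is a toy illustration of the general principle, not the FugakuNEXT toolchain.

```python
import numpy as np

# A stream of small values stored in half precision, modeling low-precision
# tensor data.
values = np.full(4096, 0.0001, dtype=np.float16)

# Naive: accumulate in float16 as well. Once the running sum reaches 0.25,
# adding 0.0001 falls below half a float16 ulp, so the total stops growing.
naive = np.float16(0.0)
for v in values:
    naive = np.float16(naive + v)

# Mixed precision: keep storage in float16 but accumulate in float32.
mixed = float(values.astype(np.float32).sum())

true_sum = 4096 * 0.0001
print(f"true sum ~ {true_sum:.4f}, float16 accumulation: {float(naive):.4f}, "
      f"float32 accumulation: {mixed:.4f}")
```

    The float16 accumulator stalls well below the true total, while the float32 accumulator lands on it; mixed-precision tooling automates exactly this kind of choice across a whole application.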
    FugakuNEXT isn’t just a technical upgrade — it’s a strategic investment in Japan’s future.
    Backed by Japan’s Ministry of Education, Culture, Sports, Science and Technology (MEXT), it will serve universities, government agencies and industry partners nationwide.
    It marks the start of a new era in Japanese supercomputing — one built on sovereign infrastructure, global collaboration and a commitment to scientific leadership.
    Image courtesy of RIKEN
    #riken #japans #leading #science #institute
    RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer
    Japan is once again building a landmark high-performance computing system — not simply by chasing speed, but by rethinking how technology can best serve the nation’s most urgent scientific needs. At the FugakuNEXT International Initiative Launch Ceremony held in Tokyo on Aug. 22, leaders from RIKEN, Japan’s top research institute, announced the start of an international collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the world-renowned supercomputer, Fugaku. Awarded early in the process, the contract enables the partners to work side by side in shaping the system’s architecture to address Japan’s most critical research priorities — from earth systems modeling and disaster resilience to drug discovery and advanced manufacturing. More than an upgrade, the effort will highlight Japan’s embrace of modern AI and showcase Japanese innovations that can be harnessed by researchers and enterprises across the globe. The ceremony featured remarks from the initiative’s leaders, RIKEN President Makoto Gonokami and Satoshi Matsuoka, director of the RIKEN Center for Computational Science and one of Japan’s most respected high-performance computing architects. Fujitsu Chief Technology Officer Vivek Mahajan attended, emphasizing the company’s role in advancing Japan’s computing capabilities. Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, attended in person as well to discuss the collaborative design approach and how the resulting platform will serve as a foundation for innovation well into the next decade. Momentum has been building. When NVIDIA founder and CEO Jensen Huang touched down in Tokyo last year, he called on Japan to seize the moment — to put NVIDIA’s latest technologies to work building its own AI, on its own soil, with its own infrastructure. 
FugakuNEXT answers that call, drawing on NVIDIA’s whole software stack —  from NVIDIA CUDA-X libraries such as NVIDIA cuQuantum for quantum simulation, RAPIDS for data science, NVIDIA TensorRT for high-performance inference and NVIDIA NeMo for large language model development, to other domain-specific software development kits tailored for science and industry. Innovations pioneered on FugakuNEXT could become blueprints for the world. What’s Inside FugakuNEXT will be a hybrid AI-HPC system, combining simulation and AI workloads. It will feature FUJITSU-MONAKA-X CPUs, which can be paired with NVIDIA technologies using NVLink Fusion, new silicon enabling high-bandwidth connections between Fujitsu’s CPUs and NVIDIA’s architecture. The system will be built for speed, scale and efficiency. What It Will Do FugakuNEXT will support a wide range of applications — such as automating hypothesis generation, code creation and experiment simulation. Scientific research: Accelerating simulations with surrogate models and physics-informed neural networks. Manufacturing: Using AI to learn from simulations to generate efficient and aesthetically pleasing designs faster than ever before. Earth systems modeling: aiding disaster preparedness and prediction for earthquakes and severe weather, and more. RIKEN, Fujitsu and NVIDIA will collaborate on software developments, including tools for mixed-precision computing, continuous benchmarking, and performance optimization. FugakuNEXT isn’t just a technical upgrade — it’s a strategic investment in Japan’s future. Backed by Japan’s MEXT, it will serve universities, government agencies, and industry partners nationwide. It marks the start of a new era in Japanese supercomputing — one built on sovereign infrastructure, global collaboration, and a commitment to scientific leadership. Image courtesy of RIKEN #riken #japans #leading #science #institute
    RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer
    blogs.nvidia.com
    Japan is once again building a landmark high-performance computing system — not simply by chasing speed, but by rethinking how technology can best serve the nation’s most urgent scientific needs. At the FugakuNEXT International Initiative Launch Ceremony held in Tokyo on Aug. 22, leaders from RIKEN, Japan’s top research institute, announced the start of an international collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the world-renowned supercomputer, Fugaku. Awarded early in the process, the contract enables the partners to work side by side in shaping the system’s architecture to address Japan’s most critical research priorities — from earth systems modeling and disaster resilience to drug discovery and advanced manufacturing. More than an upgrade, the effort will highlight Japan’s embrace of modern AI and showcase Japanese innovations that can be harnessed by researchers and enterprises across the globe. The ceremony featured remarks from the initiative’s leaders, RIKEN President Makoto Gonokami and Satoshi Matsuoka, director of the RIKEN Center for Computational Science and one of Japan’s most respected high-performance computing architects. Fujitsu Chief Technology Officer Vivek Mahajan attended, emphasizing the company’s role in advancing Japan’s computing capabilities. Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, attended in person as well to discuss the collaborative design approach and how the resulting platform will serve as a foundation for innovation well into the next decade. Momentum has been building. When NVIDIA founder and CEO Jensen Huang touched down in Tokyo last year, he called on Japan to seize the moment — to put NVIDIA’s latest technologies to work building its own AI, on its own soil, with its own infrastructure. 
FugakuNEXT answers that call, drawing on NVIDIA’s whole software stack — from NVIDIA CUDA-X libraries such as NVIDIA cuQuantum for quantum simulation, RAPIDS for data science, NVIDIA TensorRT for high-performance inference and NVIDIA NeMo for large language model development, to other domain-specific software development kits tailored for science and industry. Innovations pioneered on FugakuNEXT could become blueprints for the world.

What’s Inside
FugakuNEXT will be a hybrid AI-HPC system, combining simulation and AI workloads. It will feature FUJITSU-MONAKA-X CPUs, which can be paired with NVIDIA technologies using NVLink Fusion, new silicon enabling high-bandwidth connections between Fujitsu’s CPUs and NVIDIA’s architecture. The system will be built for speed, scale and efficiency.

What It Will Do
FugakuNEXT will support a wide range of applications, such as automating hypothesis generation, code creation and experiment simulation.
Scientific research: accelerating simulations with surrogate models and physics-informed neural networks.
Manufacturing: using AI to learn from simulations to generate efficient and aesthetically pleasing designs faster than ever before.
Earth systems modeling: aiding disaster preparedness and prediction for earthquakes and severe weather, and more.
RIKEN, Fujitsu and NVIDIA will collaborate on software development, including tools for mixed-precision computing, continuous benchmarking and performance optimization.

FugakuNEXT isn’t just a technical upgrade; it’s a strategic investment in Japan’s future. Backed by Japan’s MEXT (Ministry of Education, Culture, Sports, Science and Technology), it will serve universities, government agencies and industry partners nationwide. It marks the start of a new era in Japanese supercomputing, one built on sovereign infrastructure, global collaboration and a commitment to scientific leadership.

Image courtesy of RIKEN
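The mixed-precision tooling mentioned above addresses a real numerical trade-off. As a minimal illustration (plain NumPy, not FugakuuNEXT software — and the array sizes are made up for the demo), accumulating many small terms directly in float16 loses precision that a higher-precision accumulator preserves:

```python
import numpy as np

# 10,000 small values that should sum to ~1.0.
values = np.full(10_000, 0.0001, dtype=np.float16)

# Naive approach: accumulate in float16. Once the running total grows,
# each tiny increment falls below half a unit-in-the-last-place and is
# rounded away, so the sum stalls far short of 1.0.
naive = np.float16(0.0)
for v in values:
    naive = np.float16(naive + v)

# Mixed-precision approach: store in float16, accumulate in float64.
mixed = values.astype(np.float64).sum()

print(f"float16 accumulator: {float(naive):.4f}")
print(f"float64 accumulator: {float(mixed):.4f}")
```

The same effect, at vastly larger scale, is why hybrid AI-HPC systems pair low-precision arithmetic for throughput with careful high-precision accumulation for correctness.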
  • Hey everyone, have you seen the new cultural center in Yangjiang?

    The center is set to be a wonderful destination, with everything from archives to a Party history museum, and even a hall of fame and more. And it all sits on a beautiful site beside Moyang Lake, so the views will be stunning!

    Honestly, cultural projects like this show how cities evolve and take on a new identity. Whenever I wander through a cultural center, I feel a positive energy and see how people engage with culture.

    Don't miss exploring this new project; it will bring real beauty to the city and support the surrounding community.

    https://www.archdaily.com/1033260/yangjiang-cultural-center-architectural-design-and-research-institute-of-scut-plus-yifang-design-group
    #CulturalCenter #Architecture #Yangjiang #Culture #Inspiration
    www.archdaily.com
    Located on the north side of Moyang Lake Park in Yangjiang City, the comprehensive cultural center enjoys a beautiful environment and wide view. The project integrates multiple institutions and functions such as the Archives Center, the Part
  • Hey everyone, what do you think of the Ragon Institute?

    The institute brings together Mass General, MIT and Harvard, and as the saying goes, "knowledge is light"; they really are at the forefront of research on infectious diseases such as HIV-AIDS and COVID-19. The news is that they have set up their new home on a triangular site next to Kendall Square and MIT, with 323,000 GSF of space. This endeavor could change a great deal in the world of medicine and research!

    Personally, I love watching science advance and develop new solutions, especially in an era of pandemics. And when big institutions like these collaborate, we can build a better future.

    Mark my words: everything seems possible, especially if we work together!

    https://www.archdaily.com/1032664/the-ragon-institute-payette

    #RagonInstitute #ScientificResearch #Health #Innovation #Collaboration
    www.archdaily.com
    The Ragon Institute is a unique union of Mass General, MIT, and Harvard at the forefront of infectious disease research, such as HIV-AIDS and COVID-19. Its new 323,000 GSF home is located on a free-standing triangular site along Main Street
  • Greetings, my friends!

    Today I wanted to share an important article with you about the role of the Royal Architecture Institute of Canada. The title is "Royal Architecture Institute of Canada needs to better advocate for design, culture, and the future of our profession".

    The article discusses how architecture has a political dimension, and how the institute needs to be a louder voice in defense of our designs, our culture and the future of our profession. As architects, we know that our passion is what shapes cities and communities, but unfortunately our voice sometimes fails to reach the people in power.

    I have personally seen how a single design can change the life of an entire community, and we must stand united to get our message across.

    Think of the impact we could have if we worked together.

    https://www.archpaper.com/2025/08/royal-architecture-institute-of-canada-better-advocate/

    #Architecture #Design #Future #Culture
    www.archpaper.com
    Architecture is political. This has been said many times, and it remains true. Architects, of course, believe that what we do matters. But when talking to politicians, it is often The post Royal Architecture Institute of Canada needs to better advoca
  • NVIDIA, National Science Foundation Support Ai2 Development of Open AI Models to Drive U.S. Scientific Leadership

    NVIDIA is partnering with the U.S. National Science Foundation (NSF) to create an AI system that supports the development of multimodal language models for advancing scientific research in the United States.
    The partnership supports the NSF Mid-Scale Research Infrastructure project, called Open Multimodal AI Infrastructure to Accelerate Science (OMAI).
    “Bringing AI into scientific research has been a game changer,” said Brian Stone, performing the duties of the NSF director. “NSF is proud to partner with NVIDIA to equip America’s scientists with the tools to accelerate breakthroughs. These investments are not just about enabling innovation; they are about securing U.S. global leadership in science and technology and tackling challenges once thought impossible.”
    OMAI, part of the work of the Allen Institute for AI, or Ai2, aims to build a national, fully open AI ecosystem to drive scientific discovery through AI, while also advancing the science of AI itself.
    NVIDIA’s support of OMAI includes providing NVIDIA HGX B300 systems — state-of-the-art AI infrastructure built to accelerate model training and inference with exceptional efficiency — along with the NVIDIA AI Enterprise software platform, empowering OMAI to transform massive datasets into actionable intelligence and breakthrough innovations.
    NVIDIA HGX B300 systems are built with NVIDIA Blackwell Ultra GPUs and feature industry-leading high-bandwidth memory and interconnect technologies to deliver groundbreaking acceleration, scalability and efficiency to run the world’s largest models and most demanding workloads.
    “AI is the engine of modern science — and large, open models for America’s researchers will ignite the next industrial revolution,” said Jensen Huang, founder and CEO of NVIDIA. “In collaboration with NSF and Ai2, we’re accelerating innovation with state-of-the-art infrastructure that empowers U.S. scientists to generate limitless intelligence, making it America’s most powerful and renewable resource.”
    The contributions will support research teams from the University of Washington, the University of Hawaii at Hilo, the University of New Hampshire and the University of New Mexico. The public-private partnership investment in U.S. technology aligns with recent initiatives outlined by the White House AI Action Plan, which supports America’s global AI leadership.
    “The models are part of the national research infrastructure — but we can’t build the models without compute, and that’s why NVIDIA is so important to this project,” said Noah Smith, senior director of natural language processing research at Ai2.
    Opening Language Models to Advance American Researchers 
    Driving some of the fastest-growing applications in history, today’s large language models (LLMs) have many billions of parameters, or internal weights and biases learned in training. LLMs are trained on trillions of words, and multimodal LLMs can ingest images, graphs, tables and more.
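For a sense of where those billions of parameters come from, here is a rough back-of-the-envelope count for a decoder-only transformer. The layer count, width and vocabulary size below are illustrative, not any particular model's:

```python
def transformer_params(n_layers: int, d_model: int, vocab: int) -> int:
    """Approximate parameter count for a decoder-only transformer,
    counting only the dominant weight matrices (biases, norms omitted)."""
    attention = 4 * d_model * d_model   # Q, K, V and output projections
    mlp = 8 * d_model * d_model         # two matrices with a 4x-wide hidden layer
    embeddings = vocab * d_model        # token embedding table
    return n_layers * (attention + mlp) + embeddings

# Hypothetical configuration: 32 layers, width 4096, 50k-token vocabulary.
print(f"{transformer_params(32, 4096, 50_000) / 1e9:.1f}B parameters")
```

Even this modest hypothetical configuration lands in the multi-billion-parameter range, which is why open access to weights and training data is a meaningful infrastructure question rather than a packaging detail.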
    But the power of these so-called frontier models can sometimes be out of reach for scientific research when the parameters, training data, code and documentation are not openly available.
    “With the model training data in hand, you have the opportunity to trace back to particular training instances similar to a response, and also more systematically study how emerging behaviors relate to the training data,” said Smith.
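Smith's point about tracing a response back to similar training instances can be sketched with a toy example. The corpus, the response and the bag-of-words cosine similarity below are hypothetical stand-ins for the much richer tooling that open training data makes possible:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def trace(response: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k training documents most similar to a model response."""
    r = Counter(response.lower().split())
    return sorted(corpus,
                  key=lambda d: cosine(r, Counter(d.lower().split())),
                  reverse=True)[:k]

# Made-up miniature "training corpus" for illustration.
corpus = [
    "physics informed neural networks accelerate simulation",
    "transformers are trained on trillions of words",
    "open weights enable reproducible science",
]
print(trace("models trained on trillions of words", corpus, k=1))
# Most similar document: the one about training on trillions of words.
```

Real tracing tools work over embeddings and web-scale corpora, but the principle is the same: the analysis is only possible when the training data itself is open.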
    NVIDIA’s partnership with NSF to support Ai2’s OMAI initiative provides fully open model access to data, open-source data interrogation tools to help refine datasets, as well as documentation and training for early-career researchers — advancing U.S. global leadership in science and engineering.
    The Ai2 project — supported by NVIDIA technologies — pledges to make the software and models available at low or zero cost to researchers, similar to open-source code repositories and science-oriented digital libraries. It’s in line with Ai2’s previous work in creating fully open language models and multimodal models, maximizing access.
    Driving U.S. Global Leadership in Science and Engineering 
    “Winning the AI Race: America’s AI Action Plan” was announced in July by the White House, supported with executive orders to accelerate federal permitting of data center infrastructure and promote exportation of the American AI technology stack.
    The OMAI initiative aligns with White House AI Action Plan priorities, emphasizing the acceleration of AI-enabled science and supporting the creation of leading open models to enhance America’s global AI leadership in academic research and education.
    NVIDIA, National Science Foundation Support Ai2 Development of Open AI Models to Drive U.S. Scientific Leadership
    blogs.nvidia.com
  • Hey friend, have you heard about "It: Welcome To Derry"? When I think about this show, it's strange territory: they're telling the Pennywise story in reverse! I've heard about this boogeyman plenty of times, but this way maybe they'll show that world from a new angle.

    As usual, I'm on a tour through the movies, and honestly I just want to discover everything. I want to know who is writing this screenplay; there is so much coming out of the Stephen King universe right now, a lot! There's "The Institute" on MGM+, "Life of Chuck" in theaters, and the road ahead is full of upcoming projects, all of them pulling us in.

    So tell me, who's with me on this? Thinking about what's ahead, how excited are you? I'm waiting on "The Running Man" and "Long Walk", and even "Monkey" is coming to Hulu in a week! We've got plenty of time; we deserve it.
    It: Welcome To Derry Is Telling The Story Of Pennywise In Reverse
    kotaku.com
    As we step ever closer to the event horizon of Stephen King properties, soon all will be drawn in. Right now you could be watching MGM+’s The Institute, catching The Life of Chuck in theaters, watching trailers for The Running Man andThe Long Walk, s
ollo https://www.ollo.ws