• Just wanted to say it: programming isn't just writing code, it's a deep understanding of the concepts!

    A new video digs into "5 Concepts 90% of Programmers Get Wrong." It covers the differences between a library and a framework, what an API (Application Programming Interface) really means, and the meanings of authentication and authorization.
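
    Since the post contrasts authentication with authorization, here is a minimal Python sketch of the distinction. Everything in it (usernames, roles, the toy hash) is invented for illustration; a real system would use a proper password hash and identity store:

```python
# Toy illustration only: names, roles, and the hash() stand-in are invented;
# real systems use a dedicated password hash (e.g. bcrypt) and a user database.
USERS = {"sara": {"password_hash": hash("s3cret"), "role": "editor"}}
ROLE_PERMISSIONS = {"editor": {"read", "write"}, "viewer": {"read"}}

def authenticate(username: str, password: str) -> bool:
    """Authentication: is this user who they claim to be?"""
    user = USERS.get(username)
    return user is not None and user["password_hash"] == hash(password)

def authorize(username: str, action: str) -> bool:
    """Authorization: is this (already authenticated) user allowed to do that?"""
    role = USERS[username]["role"]
    return action in ROLE_PERMISSIONS.get(role, set())

if authenticate("sara", "s3cret"):      # step 1: verify identity
    print(authorize("sara", "write"))   # step 2: check permission -> True
    print(authorize("sara", "delete"))  # -> False: authenticated != authorized
```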

    From personal experience, I struggled to explain these concepts clearly even after learning them. We used to think the code was everything, but in the end, how well you explain the concepts is what opens doors in job interviews!

    Stay sharp: a deep understanding will make the difference for you. Watch the video and let's grow our skills together!

    https://www.youtube.com/watch?v=V1x2Pq6bV8s
    #Programming #Concepts #Coding #API #Tech
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
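    As a rough illustration of what "run inference locally" means in practice, here is a minimal sketch using the Hugging Face transformers pipeline API. The model ID is a placeholder (pick anything small enough for the device), and Jetson-specific setup such as JetPack and CUDA wheels is omitted:

```python
# Minimal local-inference sketch. Assumes `pip install transformers torch`;
# the model ID below is a placeholder, not an NVIDIA recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder: choose a device-friendly model
    device_map="auto",                   # place layers on the GPU if one is visible
)

out = generator("The robot should grasp the mug by", max_new_tokens=32)
print(out[0]["generated_text"])
```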
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team’s existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they’ll improve the performance of AI models including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc. (ADI), e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
  • Hey friends, I've got news that will make Nintendo Switch fans happy! The new HORI Piranha Plant camera is 33% off: down from $60 to just $40!

    This camera is designed specifically for the Switch 2, and honestly, one look at it and you'll feel like you've stepped into the Mushroom Kingdom! It looks more like an Amiibo than a camera, and it has some standout features, like detaching the plant from its pot to use it in portable mode. There's even a privacy cover: just close the plant's mouth and you're done!

    I tried it playing Mario Kart with friends, and I loved how everyone can film themselves while racing, especially during the funny moments. You feel like you're inside the game!

    Don't miss this one; deals like this don't come along every day!

    https://www.engadget.com/deals/horis-piranha-plant-camera-for-nintendo-switch-2-is-33-percent-off-right-now-145031014.html?src
  • When you're new to the working world, getting put under the microscope is normal! In this video, Mounir Harir shares 9 essential steps to land your first job after graduation. Who wants to learn how to reach that goal in practical, effective ways?

    The video offers fresh ideas on how to put your skills to work and improve your chances in the job market. Honestly, if I were you, I'd treat it as a golden opportunity! Thinking back to my own first experience, it was tough, but with the right guidance everything becomes easier.

    Don't miss this episode; it could be a fresh start for your career!

    https://www.youtube.com/watch?v=P_Dz_F6rgDs

    #Employment #Success #career #HorizonWork #Harith_podcast
  • How's everyone doing, my friends?

    The other day I was sitting with a friend, talking about the tech and investing world. He told me we need to be careful with the new ventures popping up, especially the ones built around SPVs (Special Purpose Vehicles). Honestly, I didn't know OpenAI had started flagging this issue so seriously. The new article says OpenAI isn't alone; other companies are warning against these unauthorized vehicles too!

    I always feel like the investing world is a chess game where you have to be smart and think before you make a move. Bottom line: we need to stay aware and understand the risks before we rush in.

    Let's stay ahead of the curve and think carefully before we take a gamble.

    https://techcrunch.com/2025/08/23/openai-warns-against-spvs-and-other-unauthorized-investments/
    #Investing #Technology #OpenAI #SPVs #Warnings
  • The Google Pixel 10 (Pro) loses battery capacity after just 200 charge cycles

    Over the past few months, Google has angered many Pixel owners with mandatory software updates that drastically cut the battery life of the Pixel 4a and Pixel 6a. The updates were meant to keep aging batteries from overheating, which in the worst case can lead to a fire, but they apparently have not prevented such incidents entirely.

    With the launch of the Pixel 9a, Google introduced a new feature called "Battery Health Assistance," which users cannot disable manually. Google has confirmed to Android Authority that the feature will also be part of the Pixel 10 series, and users will not be able to turn it off. Concretely, this means Google automatically lowers the battery voltage after 200 charge cycles, and keeps lowering it up to 1,000 cycles. The reduced voltage shortens battery runtime, since the usable capacity effectively shrinks, and charging the phone also takes longer. Google has not specified how much battery life is lost after 200 charge cycles.

    Officially, Pixel batteries are supposed to retain 80% of their original capacity after 1,000 charge cycles, but the question remains how much usable capacity Pixel 10 owners will actually get. Frustratingly, Google gives customers no way to opt out, while other manufacturers give their users more freedom. Apple's iPhones, for example, throttle processor performance rather than the battery itself to avoid voltage spikes, and users can disable those "optimizations" as the result of a class-action lawsuit.

    Source
  • Hey gamers! I've got a great deal for you on the HORI Piranha Plant camera for the Nintendo Switch 2!

    Good news: the camera is on sale for just $40! That's a chance you don't want to miss, since you save $20. It's designed to work with games like Mario Kart World and more, and it couldn't be simpler to use: just plug and play.

    I tried it myself, and it looks really cool, like an Amiibo! It also doubles as a stand, and you can take it wherever you like. And by the way, it even has a privacy feature!

    Honestly, don't pass up this deal; you never know what's coming next!

    https://www.engadget.com/deals/pick-up-the-hori-piranha-plant-camera-for-switch-2-while-its-on-sale-for-40-145031803.html?src=rss

    #Nintendo #GamingDeals #Switch2 #PiranhaPlant #TechSale
  • Hey everyone, I've got news that will make a lot of you happy!

    According to new leaks, Gorillaz are coming to Fortnite! Imagine what the game will feel like with the band's music and offbeat vibe. And that's not all: there's also word that the next Forza might be set in Japan, Deep Rock Galactic: Survivor is heading to Xbox, and the Analogue N64 has been delayed yet again.

    Personally, I love how Fortnite keeps refreshing its content and bringing in new characters. It's a unique experience every time, and I can't wait to see the special touch Gorillaz add to the game.

    Stay tuned, and let's enjoy this experience together!

    https://kotaku.com/gorillaz-fornite-festival-leaks-forza-horizon-japan-xbox-sony-evo-analogue-2000619149

    #Fortnite #Gorillaz #GamingNews #JeuxVidéo #CultureGaming
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
    The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and if there isn’t rethinking on how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

    Training the modern large language models (LLMs) behind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
    These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce” (which combines data from all nodes and redistributes the result) and “all-to-all” (where each node exchanges data with every other node).
    These processes are sensitive to the speed and responsiveness of the network — what engineers call latency (delay) and bandwidth (data capacity). When either falls short, training stalls.
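    To make "all-reduce" concrete, here is a pure-Python toy that shows the result the network must produce on every training step. There is no real networking here, and the node count and gradient sizes are arbitrary; real systems (e.g. NCCL over NVLink or InfiniBand) implement this with ring or tree algorithms:

```python
# Toy all-reduce: each "node" holds its local gradients; afterwards every
# node holds the elementwise sum across all nodes. This shows only the
# semantics, not the communication pattern.
def all_reduce(per_node_grads: list[list[float]]) -> list[list[float]]:
    summed = [sum(column) for column in zip(*per_node_grads)]
    return [list(summed) for _ in per_node_grads]  # everyone gets the result

grads = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # 3 nodes, 2 parameters each
print(all_reduce(grads))  # each node now holds [0.9, 1.2] (up to float rounding)
```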
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
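    A sketch of why retrieval-augmented generation is latency-sensitive: every query triggers a lookup before the model can even start answering. The corpus and the word-overlap scoring below are invented for illustration; production systems use embeddings and approximate-nearest-neighbor indexes:

```python
# Toy RAG retrieval step: rank documents against the query, then assemble
# the prompt the LLM would see. Naive whitespace tokenization; illustrative only.
CORPUS = [
    "Jetson Thor delivers 7.5x more AI compute than Jetson Orin.",
    "Spectrum-X is Ethernet purpose-built for distributed AI.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    words = set(query.lower().split())
    ranked = sorted(CORPUS, reverse=True,
                    key=lambda doc: len(words & set(doc.lower().split())))
    return ranked[:k]

query = "how much faster is jetson thor than orin"
prompt = "Context:\n" + "\n".join(retrieve(query)) + f"\n\nQuestion: {query}"
print(prompt)  # this lookup sits on the critical path of every response
```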
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable. Now, they’re a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years.
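    A simplified traffic model (my own accounting, not NVIDIA's) suggests where the claimed doubling comes from: with in-network reduction, each node sends its buffer once and the switch does the summing, versus roughly twice the buffer size sent per node in a classic ring all-reduce:

```python
# Bytes each node must SEND to all-reduce a buffer of size s across n nodes,
# under two simplified schemes (headers, pipelining, and topology ignored).
def ring_bytes_sent(n: int, s: float) -> float:
    return 2 * s * (n - 1) / n  # reduce-scatter phase + all-gather phase

def in_network_bytes_sent(n: int, s: float) -> float:
    return s                    # one send; the switch performs the sum

n, s = 64, 1.0
print(ring_bytes_sent(n, s) / in_network_bytes_sent(n, s))  # ~2.0x less traffic
```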
    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain with an aggregate bandwidth of 130 TB/s. NVLink Switch technology extends this fabric further, enabling a single GB300 NVL72 system to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU.
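    A quick back-of-envelope check on those figures (my arithmetic, not from the article):

```python
gpus_per_nvlink_domain = 72   # GB300 NVL72
gpus_per_classic_server = 8   # conventional 8-GPU server
aggregate_bw_tb_s = 130       # TB/s across the NVLink domain

print(gpus_per_nvlink_domain / gpus_per_classic_server)      # 9.0 -> the quoted 9x
print(round(aggregate_bw_tb_s / gpus_per_nvlink_domain, 1))  # ~1.8 TB/s per GPU, if split evenly
```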
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows intense customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
     
     

     
    #gearing #gigawatt #data #center #age
    Gearing Up for the Gigawatt Data Center Age
    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations. Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game. This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction. The complexity isn’t a bug; it’s the defining feature. AI infrastructure is diverging fast from everything that came before it, and if there isn’t rethinking on how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get it right, and gain extraordinary performance. With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out. The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed. This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack. The Data Center Is the Computer Training the modern large language modelsbehind AI isn’t about burning cycles on a single machine. It’s about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation. These systems rely on distributed computing, splitting massive calculations across nodes, where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as “all-reduce”and “all-to-all”. These processes are susceptible to the speed and responsiveness of the network — what engineers call latencyand bandwidth— causing stalls in training. For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users. 
Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Tolerating jitter and inconsistent delivery were once acceptable. Now, it’s a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations. Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories. With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It’s why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world’s most powerful supercomputers, demonstrating 35% growth in just two years. For clusters spanning dozens of racks, NVIDIA Quantum‑X800 Infiniband switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gbps connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute. But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI. Spectrum‑X Ethernet: Bringing AI to the Enterprise Spectrum‑X reimagines Ethernet for AI. Launched in 2023 Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA’s congestion control to maintain 95% data throughput at scale. Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management. Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions. A Portfolio for Scale‑Up and Scale‑Out No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon. 
NVLink: Scale Up Inside the Rack Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU. Photonics: The Next Leap To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories. Delivering on the Promise of Open Standards Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association’s InfiniBand and RDMA over Converged Ethernetspecifications. Key elements of NVIDIA’s software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems. Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows intense customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top. Toward Million‑GPU AI Factories AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure. The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.       #gearing #gigawatt #data #center #age
    Gearing Up for the Gigawatt Data Center Age
    blogs.nvidia.com
    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.

    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn't look anything like the old internet. These aren't typical hyperscale data centers. They're something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It's the whole game.

    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won't cut it. What's needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction. The complexity isn't a bug; it's the defining feature. AI infrastructure is diverging fast from everything that came before it, and without rethinking how the pipes connect, scale breaks down. Get the network layers wrong, and the whole machine grinds to a halt. Get them right, and you gain extraordinary performance.

    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi-hundred-pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.

    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet: 130 TB/s of GPU-to-GPU bandwidth, fully meshed. This isn't just fast. It's foundational. The AI super-highway now lives inside the rack.

    The Data Center Is the Computer

    Training the modern large language models (LLMs) behind AI isn't about burning cycles on a single machine. It's about orchestrating the work of tens or even hundreds of thousands of GPUs, the heavy lifters of AI computation. These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload.

    In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as "all-reduce" (which combines data from all nodes and redistributes the result) and "all-to-all" (where each node exchanges data with every other node). These operations are sensitive to the speed and responsiveness of the network — what engineers call latency (delay) and bandwidth (data capacity) — and a slow or congested fabric stalls training.
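    To make the collective operations concrete, here is a minimal sketch of an all-reduce, assuming a PyTorch stack with the NCCL backend launched via torchrun; the tensor shape and script name are illustrative, not details from the article. On NVIDIA Quantum InfiniBand fabrics, NCCL can push this same reduction into the switches themselves, which is the SHARP in-network computing described below.

        # Minimal all-reduce sketch (assumed stack: PyTorch + NCCL, launched with torchrun).
        # Each rank contributes one slice of the "gradients"; all-reduce sums the slices
        # and leaves every rank holding the combined result.
        import torch
        import torch.distributed as dist

        def main():
            dist.init_process_group(backend="nccl")  # NCCL runs over InfiniBand or RoCE
            rank = dist.get_rank()
            torch.cuda.set_device(rank % torch.cuda.device_count())

            # Dummy gradient slice: rank r contributes a tensor filled with the value r.
            grads = torch.full((1024,), float(rank), device="cuda")

            # Combine data from all ranks and redistribute the result to every rank.
            dist.all_reduce(grads, op=dist.ReduceOp.SUM)

            if rank == 0:
                expected = sum(range(dist.get_world_size()))
                print(f"after all-reduce: {grads[0].item():.0f} (expected {expected})")

            dist.destroy_process_group()

        if __name__ == "__main__":
            main()

    Launched with, say, torchrun --nproc_per_node=8 allreduce.py, the time every rank spends blocked in that one call is precisely the latency and bandwidth sensitivity described above.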
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.

    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable; now they are a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations. Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.

    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It's why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world's most powerful supercomputers, with 35% growth in just two years.

    For clusters spanning dozens of racks, NVIDIA Quantum-X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co-packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.

    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum-X: a new kind of Ethernet purpose-built for distributed AI.

    Spectrum-X Ethernet: Bringing AI to the Enterprise

    Spectrum-X reimagines Ethernet for AI. Launched in 2023, Spectrum-X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum-4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA's congestion control to maintain 95% data throughput at scale.

    Spectrum-X is fully standards-based Ethernet. In addition to supporting Cumulus Linux, it supports the open-source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.

    Spectrum-X brings InfiniBand's best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum-X, including the world's most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation; standard Ethernet fabrics would deliver only around 60% throughput because of flow collisions.

    A Portfolio for Scale-Up and Scale-Out

    No single network can serve every layer of an AI factory. NVIDIA's approach is to match the right fabric to the right tier, then tie everything together with software and silicon.

    NVLink: Scale Up Inside the Rack

    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain with an aggregate bandwidth of 130 TB/s, letting one rack-scale system offer 9x the GPU count of a single 8-GPU server. With NVLink, the entire rack becomes one large GPU.
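    A quick back-of-the-envelope check, assuming roughly 1.8 TB/s of NVLink bandwidth per GPU (an assumption; the article quotes only the aggregate), recovers both headline NVL72 figures:

        # Sanity check on the GB300 NVL72 numbers quoted above.
        gpus_per_domain = 72
        nvlink_bw_per_gpu_tbs = 1.8  # TB/s per GPU; assumed, not stated in the article

        aggregate_tbs = gpus_per_domain * nvlink_bw_per_gpu_tbs
        print(f"aggregate NVLink bandwidth: ~{aggregate_tbs:.0f} TB/s")    # ~130 TB/s
        print(f"GPU count vs. one 8-GPU server: {gpus_per_domain // 8}x")  # 9x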
    Photonics: The Next Leap

    To reach million-GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional pluggable optics, paving the way for gigawatt-scale AI factories.
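    Those port counts and totals are mutually consistent, as a two-line check shows (the only assumption is that every port runs at the full 800 Gb/s):

        # Port-count check for the co-packaged photonics switches.
        port_speed_tbs = 0.8  # 800 Gb/s per port, in Tb/s

        for ports in (128, 512):
            print(f"{ports} ports x 800 Gb/s = {ports * port_speed_tbs:.1f} Tb/s")
        # 128 -> 102.4 Tb/s (~100 Tb/s); 512 -> 409.6 Tb/s (~400 Tb/s)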
    Delivering on the Promise of Open Standards

    Spectrum-X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum-X is fully standards-based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association's InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA's software stack — including the NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end-to-end integration deliver better latency and throughput. SONiC, the open-source network operating system hardened in hyperscale data centers, eliminates licensing costs and vendor lock-in and allows deep customization, but operators still choose purpose-built hardware and software bundles to meet AI's performance needs. In practice, open standards alone don't deliver deterministic performance; they need innovation layered on top.

    Toward Million-GPU AI Factories

    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA-powered AI infrastructure. The next horizon is gigawatt-class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.

    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across racks. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
  • Hey everyone, I've got some really exciting news!

    DataParser has stepped into a new arena with support for RingCentral, and that means a major expansion in enterprise communication! It's a great step toward giving us stronger, more effective tools to handle data safely and elegantly in a fast-changing world.

    Personally, I've gotten real value out of DataParser's services, especially in streamlining workflows and saving time, and that matters in today's business world. Having managed a remote team myself, I can see this support helping us improve communication and stay compliant with regulations.

    Let's think about how these developments will shape the future of our work and our professional relationships.

    https://www.globenewswire.com/news-release/2025/08/21/3136863/0/en/DataParser-Announces-Support-for-RingCentral-Expanding-Horizons-in-Enterprise-Communication-Compliance.html

    #Technology #DataIntegration #RingCentral #BusinessManagement #Innovation
    www.globenewswire.com
    DataParser, a trusted name in data integration and compliance, is proud to announce the latest expansion of its platform with support for RingCentral.
  • Hey everyone, I have some good news to share with you!

    Saudi Arabia is opening a door of hope for Alzheimer's treatment: the Saudi Food and Drug Authority has approved the first therapy, called "Leqembi". This new product could help many people living with the condition, and as we all know, Alzheimer's is one of the diseases that weighs heaviest on families.

    Personally, I know a family with a member who suffers from this disease, and every day we pray that their burden is eased. God willing, this treatment will be the start of a new journey toward recovery and improvement.

    Let's stay optimistic and encourage every step of progress in healthcare. The more hope we have, the better our lives become.

    https://forbesmiddleeast.com/industry/healthcare/saudi-food-and-drug-authority-approves-first-alzheimers-treatment-called-leqembi-in-saudi-arabia
    #Alzheimers #Treatment
    forbesmiddleeast.com
    Food And Drug Authority Approves First Alzheimer’s Treatment Called Leqembi In Saudi Arabia
  • Do you love games and Nintendo? I've got news to brighten your day!

    Right now, the HORI Piranha Plant camera for the Switch 2 is deeply discounted at just $40! That's $20 off and the best price so far. The camera is designed specifically for the new console, works with it seamlessly, and looks like a little piece of art from the Mario universe.

    Personally, I've tried other cameras, but this one makes games like Mario Kart World so much more fun! Instead of being just another game, it takes on an artsy, playful character. And with the lens-cover feature, you can protect your privacy, as if you're hiding your defeat after falling off the track!

    Consider this addition for your console, and don't forget to keep an eye on new deals!

    https://www.engadget.com/deals/the-hori-piranha-plant-camera-for-switch-2-is-on-sale-for-40-145031408.html?src=r
    www.engadget.com
    Even though the Switch 2 basically just came out, we're already starting to see discounts on some of its accessories. One of the more charming peripherals, the HORI Piranha Plant camera, is on sale right now for only $40. That's $20 off and a record-