• Hey everyone, do you know what's coming up? TechCrunch Disrupt 2025 has set a date you can't miss!

    The Builders Stage is where founders, operators, and investors come together to talk about how to turn simple ideas into successful businesses. The idea is that everyone shares their experience and vision, which makes the discussions livelier and more productive.

    Personally, I feel a huge positive energy at events like this. When I see people bringing their projects to life, I worry about staying stuck inside my own ideas. Everyone should take part, and together we can build a supportive community.

    Don't forget: experience is what teaches and pays off.

    https://techcrunch.com/2025/09/03/techcrunch-disrupt-2025-adds-new-leading-voices-to-the-builders-stage-agenda/

    #Business #TechCrunch #Technology #Innovations #Entrepreneurs
    techcrunch.com
    The Builders Stage at TechCrunch Disrupt 2025 is where founders, operators, and investors get real about what it takes to turn an idea into a business that works.
  • Hi everyone!

    I wanted to share important news for investors in XPLR Infrastructure, formerly known as Nextera Energy Partners. If you bought units between September 2023 and January 2025, it's time to hurry: you have a deadline of September 8, 2025 to secure legal counsel before making any decision.

    I've seen how financial cases can affect investors, and I personally had a similar experience that made me more aware. Working with a capable lawyer can make the difference between success and failure, especially in the world of investing.

    The responsibility is yours; always stay prepared!

    https://www.globenewswire.com/news-release/2025/08/26/3139551/673/en/ROSEN-A-LEADING-LAW-FIRM-Encourages-XPLR-Infrastructure-LP-f-k-a-Nextera-Energy-Partners-LP-Investors-to-Secure-Counsel-Before-Important-Deadline-in-Securities-Class-
    www.globenewswire.com
    NEW YORK, Aug. 26, 2025 (GLOBE NEWSWIRE) -- WHY: New York, N.Y., August 26, 2025. Rosen Law Firm, a global investor rights law firm, reminds purchasers of common units of XPLR Infrastructure, LP f/k/a Nextera Energy Partners, LP (NYSE: XIFR, NEP)
  • NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI

    Robots around the world are about to get a lot smarter as physical AI developers plug in NVIDIA Jetson Thor modules — new robotics computers that can serve as the brains for robotic systems across research and industry.
    Robots demand rich sensor data and low-latency AI processing. Running real-time robotic applications requires significant AI compute and memory to handle concurrent data streams from multiple sensors. Jetson Thor, now in general availability, delivers 7.5x more AI compute, 3.1x more CPU performance and 2x more memory than its predecessor, the NVIDIA Jetson Orin, to make this possible on device.
    This performance leap will enable roboticists to process high-speed sensor data and perform visual reasoning at the edge — workflows that were previously too slow to run in dynamic real-world environments. This opens new possibilities for multimodal AI applications such as humanoid robotics.

    Agility Robotics, a leader in humanoid robotics, has integrated NVIDIA Jetson into the fifth generation of its robot, Digit — and plans to adopt Jetson Thor as the onboard compute platform for the sixth generation of Digit. This transition will enhance Digit’s real-time perception and decision-making capabilities, supporting increasingly complex AI skills and behaviors. Digit is commercially deployed and performs logistics tasks such as stacking, loading and palletizing in warehouse and manufacturing environments.
    “The powerful edge processing offered by Jetson Thor will take Digit to the next level — enhancing its real-time responsiveness and expanding its abilities to a broader, more complex set of skills,” said Peggy Johnson, CEO of Agility Robotics. “With Jetson Thor, we can deliver the latest physical AI advancements to optimize operations across our customers’ warehouses and factories.”
    Boston Dynamics — which has been building some of the industry’s most advanced robots for over 30 years — is integrating Jetson Thor into its humanoid robot Atlas, enabling Atlas to harness formerly server-level compute, AI workload acceleration, high-bandwidth data processing and significant memory on device.
    Beyond humanoids, Jetson Thor will accelerate various robotic applications — such as surgical assistants, smart tractors, delivery robots, industrial manipulators and visual AI agents — with real-time inference on device for larger, more complex AI models.
    A Giant Leap for Real-Time Robot Reasoning
    Jetson Thor is built for generative reasoning models. It enables the next generation of physical AI agents — powered by large transformer models, vision language models and vision language action models — to run in real time at the edge while minimizing cloud dependency.
    Optimized with the Jetson software stack to enable the low latency and high performance required in real-world applications, Jetson Thor supports all popular generative AI frameworks and AI reasoning models with unmatched real-time performance. These include Cosmos Reason, DeepSeek, Llama, Gemini and Qwen models, as well as domain-specific models for robotics like Isaac GR00T N1.5, enabling any developer to easily experiment and run inference locally.
    NVIDIA Jetson Thor opens new capabilities for real-time reasoning with multi-sensor input. Further performance improvement is expected with FP4 and speculative decoding optimization.
    With NVIDIA CUDA ecosystem support through its lifecycle, Jetson Thor is expected to deliver even better throughput and faster responses with future software releases.
    Jetson Thor modules also run the full NVIDIA AI software stack to accelerate virtually every physical AI workflow with platforms including NVIDIA Isaac for robotics, NVIDIA Metropolis for video analytics AI agents and NVIDIA Holoscan for sensor processing.
    With these software tools, developers can easily build and deploy applications, such as visual AI agents that can analyze live camera streams to monitor worker safety, humanoid robots capable of manipulation tasks in unstructured environments and smart operating rooms that guide surgeons based on data from multi-camera streams.
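To make the "visual AI agent that monitors worker safety" concrete, here is a minimal sketch of the shape such a pipeline takes: pull frames from a stream, run a detector, and raise alerts. The detector here is a hypothetical stub returning `(label, has_hardhat)` pairs; a real deployment would use an actual perception model and camera feed, not the stand-ins shown.

```python
# Sketch of a safety-monitoring loop. `detect_workers` is a stub standing
# in for a real perception model; frames here are plain dicts rather than
# camera images, purely for illustration.

def detect_workers(frame):
    """Stub perception model: each toy frame carries its own detections."""
    return frame["detections"]

def monitor(frames):
    """Flag every frame in which a detected worker lacks a hardhat."""
    alerts = []
    for i, frame in enumerate(frames):
        for label, has_hardhat in detect_workers(frame):
            if label == "worker" and not has_hardhat:
                alerts.append((i, "missing hardhat"))
    return alerts

stream = [
    {"detections": [("worker", True)]},
    {"detections": [("worker", False), ("forklift", True)]},
]
print(monitor(stream))  # → [(1, 'missing hardhat')]
```

The point of running this loop on-device, per the article, is that detection and alerting happen with low latency on the live stream rather than after a round trip to the cloud.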
    Jetson Thor Set to Advance Research Innovation 
    Research labs at Stanford University, Carnegie Mellon University and the University of Zurich are tapping Jetson Thor to push the boundaries of perception, planning and navigation models for a host of potential applications.
    At Carnegie Mellon’s Robotics Institute, a research team uses NVIDIA Jetson to power autonomous robots that can navigate complex, unstructured environments to conduct medical triage as well as search and rescue.
    “We can only do as much as the compute available allows,” said Sebastian Scherer, an associate research professor at the university and head of the AirLab. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”
    Scherer anticipates that by upgrading from his team's existing NVIDIA Jetson AGX Orin systems to the Jetson AGX Thor developer kit, they'll improve the performance of AI models including their award-winning MAC-VO model for robot perception at the edge, boost their sensor-fusion capabilities and be able to experiment with robot fleets.
    Wield the Strength of Jetson Thor
    The Jetson Thor family includes a developer kit and production modules. The developer kit includes a Jetson T5000 module, a reference carrier board with abundant connectivity, an active heatsink with a fan and a power supply.
    NVIDIA Jetson AGX Thor Developer Kit
    The Jetson ecosystem supports a variety of application requirements, high-speed industrial automation protocols and sensor interfaces, accelerating time to market for enterprise developers. Hardware partners including Advantech, Aetina, ConnectTech, MiiVii and TZTEK are building production-ready Jetson Thor systems with flexible I/O and custom configurations in various form factors.
    Sensor and actuator companies including Analog Devices, Inc., e-con Systems, Infineon, Leopard Imaging, RealSense and Sensing are using NVIDIA Holoscan Sensor Bridge — a platform that simplifies sensor fusion and data streaming — to connect sensor data from cameras, radar, lidar and more directly to GPU memory on Jetson Thor with ultralow latency.
    Thousands of software companies can now elevate their traditional vision AI and robotics applications with multi-AI agent workflows running on Jetson Thor. Leading adopters include Openzeka, Rebotnix, Solomon and Vaidio.
    More than 2 million developers use NVIDIA technologies to accelerate robotics workflows. Get started with Jetson Thor by reading the NVIDIA Technical Blog and watching the developer kit walkthrough.

    To get hands-on experience with Jetson Thor, sign up to participate in upcoming hackathons with Seeed Studio and LeRobot by Hugging Face.
    The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. NVIDIA Jetson T5000 modules are available starting at $2,999 for 1,000 units. Buy now from authorized NVIDIA partners.
    NVIDIA today also announced that the NVIDIA DRIVE AGX Thor developer kit, which provides a platform for developing autonomous vehicles and mobility solutions, is available for preorder. Deliveries are slated to start in September.
    #nvidia #jetson #thor #unlocks #realtime
    NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI
    blogs.nvidia.com
  • Have you seen this? Silicon Valley has started pouring millions into pro-AI PACs!

    The article covers a new network called Leading the Future, which plans to invest in campaign donations and digital ads to back AI-friendly regulation and oppose candidates it sees as likely to hold the industry back.

    Look, this matters to all of us, especially anyone with ambitions in tech and innovation. Can you imagine a future without AI? To me, AI is the future, and there need to be clear rules that protect it and give it room to grow.

    Let's stay aware and follow what's happening in politics, because today's decisions will affect us tomorrow.

    https://techcrunch.com/2025/08/25/silicon-valley-is-pouring-millions-into-pro-ai-pacs-to-sway-midterms/
    #ArtificialIntelligence #AI #SiliconValley #تق
    techcrunch.com
    The new pro-AI super-PAC network dubbed Leading the Future aims to use campaign donations and digital ads to advocate for favorable AI regulation and oppose candidates that the group thinks will stifle the industry.
  • Are you an investor in Fiserv, Inc.? I have important news for you!

    The Rosen law firm reminds you to act before the September 22, 2025 deadline! If you bought Fiserv stock between July 24, 2024 and July 22, 2025, it's important to secure legal counsel before time runs out.

    In cases like this, it's essential to know your rights and make the most of your legal position. I once faced a similar problem myself, and legal advice was the key to resolving it.

    Let's stay aware and protect our rights as investors; each of us deserves the best.

    https://www.globenewswire.com/news-release/2025/08/24/3138165/673/en/ROSEN-A-LEADING-LAW-FIRM-Encourages-Fiserv-Inc-Investors-to-Secure-Counsel-Before-Important-Deadline-in-Securities-Class-Action-FI.html

    #Investing #LegalThinking #Fis
    www.globenewswire.com
    NEW YORK, Aug. 24, 2025 (GLOBE NEWSWIRE) -- WHY: Rosen Law Firm, a global investor rights law firm, reminds purchasers of common stock of Fiserv, Inc. (NYSE: FI) between July 24, 2024 and July 22, 2025, both dates inclusive (the “Class Period”), o
  • We all know how important healthy habits are, but have you wondered how the "best digestion support supplement" took the wellness world by storm in 2025?

    The article discusses how DigestiStart responded to this growing consumer demand, with people focusing on ingredient transparency and clean labels. In other words, people want to know exactly what's in what they consume!

    Personally, I've tried plenty of supplements, and at times I worried about vague ingredients. But today, with the focus on transparency, everyone can choose what suits them without doubts.

    In the end, thinking about what we consume can change our health for the better, and thankfully today we have better, more transparent options.

    https://www.globenewswire.com/news-release/2025/08/23/3138137/0/en/DigestiStart-Responds-to-2025-Surge-in-Leading-Digestion-Support-Supplement-
    www.globenewswire.com
    Why ‘Best Digestion Support Supplement’ Has Become a Trending Search Term in 2025 Wellness Conversations
  • Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale — All Built on NVIDIA

    AI reasoning, inference and networking will be top of mind for attendees of next week’s Hot Chips conference.
    A key forum for processor and system architects from industry and academia, Hot Chips — running Aug. 24-26 at Stanford University — showcases the latest innovations poised to advance AI factories and drive revenue for the trillion-dollar data center computing market.
    At the conference, NVIDIA will join industry leaders including Google and Microsoft in a “tutorial” session — taking place on Sunday, Aug. 24 — that discusses designing rack-scale architecture for data centers.
    In addition, NVIDIA experts will present at four sessions and one tutorial detailing how:

    NVIDIA networking, including the NVIDIA ConnectX-8 SuperNIC, delivers AI reasoning at rack- and data-center scale. (Featuring Idan Burstein, principal architect of network adapters and systems-on-a-chip at NVIDIA)
    Neural rendering advancements and massive leaps in inference — powered by the NVIDIA Blackwell architecture, including the NVIDIA GeForce RTX 5090 GPU — provide next-level graphics and simulation capabilities. (Featuring Marc Blackstein, senior director of architecture at NVIDIA)
    Co-packaged optics (CPO) switches with integrated silicon photonics — built with light-speed fiber rather than copper wiring to send information quicker and using less power — enable efficient, high-performance, gigawatt-scale AI factories. The talk will also highlight NVIDIA Spectrum-XGS Ethernet, a new scale-across technology for unifying distributed data centers into AI super-factories. (Featuring Gilad Shainer, senior vice president of networking at NVIDIA)
    The NVIDIA GB10 Superchip serves as the engine within the NVIDIA DGX Spark desktop supercomputer. (Featuring Andi Skende, senior distinguished engineer at NVIDIA)
    It’s all part of how NVIDIA’s latest technologies are accelerating inference to drive AI innovation everywhere, at every scale.
    NVIDIA Networking Fosters AI Innovation at Scale
    AI reasoning — when artificial intelligence systems can analyze and solve complex problems through multiple AI inference passes — requires rack-scale performance to deliver optimal user experiences efficiently.
    In data centers powering today’s AI workloads, networking acts as the central nervous system, connecting all the components — servers, storage devices and other hardware — into a single, cohesive, powerful computing unit.
    NVIDIA ConnectX-8 SuperNIC
    Burstein’s Hot Chips session will dive into how NVIDIA networking technologies — particularly NVIDIA ConnectX-8 SuperNICs — enable high-speed, low-latency, multi-GPU communication to deliver market-leading AI reasoning performance at scale.
    As part of the NVIDIA networking platform, NVIDIA NVLink, NVLink Switch and NVLink Fusion deliver scale-up connectivity — linking GPUs and compute elements within and across servers for ultra low-latency, high-bandwidth data exchange.
    NVIDIA Spectrum-X Ethernet provides the scale-out fabric to connect entire clusters, rapidly streaming massive datasets into AI models and orchestrating GPU-to-GPU communication across the data center. Spectrum-XGS Ethernet scale-across technology extends the extreme performance and scale of Spectrum-X Ethernet to interconnect multiple, distributed data centers to form AI super-factories capable of giga-scale intelligence.
    Connecting distributed AI data centers with NVIDIA Spectrum-XGS Ethernet.
    At the heart of Spectrum-X Ethernet, CPO switches push the limits of performance and efficiency for AI infrastructure at scale, and will be covered in detail by Shainer in his talk.
    NVIDIA GB200 NVL72 — an exascale computer in a single rack — features 36 NVIDIA GB200 Superchips, each containing two NVIDIA B200 GPUs and an NVIDIA Grace CPU, interconnected by the largest NVLink domain ever offered, with NVLink Switch providing 130 terabytes per second of low-latency GPU communications for AI and high-performance computing workloads.
    An NVIDIA rack-scale system.
    Built with the NVIDIA Blackwell architecture, GB200 NVL72 systems deliver massive leaps in reasoning inference performance.
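    As a quick sanity check, the per-GPU share of that NVLink domain's bandwidth can be worked out directly from the figures quoted above (pure arithmetic, nothing beyond the article's numbers):

```python
# GB200 NVL72: 36 superchips x 2 Blackwell GPUs sharing one NVLink domain,
# with 130 TB/s of aggregate low-latency GPU-to-GPU bandwidth (per the article).
superchips = 36
gpus_per_superchip = 2
total_gpus = superchips * gpus_per_superchip

aggregate_bw_tb_s = 130.0
per_gpu_bw_tb_s = aggregate_bw_tb_s / total_gpus
print(f"{total_gpus} GPUs, ~{per_gpu_bw_tb_s:.2f} TB/s each")  # 72 GPUs, ~1.81 TB/s each
```

    That works out to roughly 1.8 TB/s per GPU, consistent with the published per-GPU bandwidth of fifth-generation NVLink.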
    NVIDIA Blackwell and CUDA Bring AI to Millions of Developers
    The NVIDIA GeForce RTX 5090 GPU — also powered by Blackwell and to be covered in Blackstein’s talk — doubles performance in today’s games with NVIDIA DLSS 4 technology.
    NVIDIA GeForce RTX 5090 GPU
    It can also add neural rendering features for games to deliver up to 10x performance, 10x footprint amplification and a 10x reduction in design cycles,  helping enhance realism in computer graphics and simulation. This offers smooth, responsive visual experiences at low energy consumption and improves the lifelike simulation of characters and effects.
    NVIDIA CUDA, the world’s most widely available computing infrastructure, lets users deploy and run AI models using NVIDIA Blackwell anywhere.
    Hundreds of millions of GPUs run CUDA across the globe, from NVIDIA GB200 NVL72 rack-scale systems to GeForce RTX– and NVIDIA RTX PRO-powered PCs and workstations, with NVIDIA DGX Spark powered by NVIDIA GB10 — discussed in Skende’s session — coming soon.
    From Algorithms to AI Supercomputers — Optimized for LLMs
    NVIDIA DGX Spark
    Delivering powerful performance and capabilities in a compact package, DGX Spark lets developers, researchers, data scientists and students push the boundaries of generative AI right at their desktops, and accelerate workloads across industries.
    As part of the NVIDIA Blackwell platform, DGX Spark brings support for NVFP4, a low-precision numerical format to enable efficient agentic AI inference, particularly of large language models. Learn more about NVFP4 in this NVIDIA Technical Blog.
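    The post doesn't spell out NVFP4's encoding, but the general idea behind block-scaled 4-bit floating point can be sketched in a few lines. This is a didactic illustration only, assuming a generic E2M1-style 4-bit value grid with one scale per block; it is not NVIDIA's actual NVFP4 specification:

```python
# Quantize a block of values to a 4-bit E2M1-style grid with a shared scale,
# then dequantize. Illustrates block-scaled low precision, NOT the NVFP4 spec.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # representable magnitudes

def quantize_block(xs):
    # One scale per block maps the largest magnitude onto the grid maximum (6.0).
    scale = max(abs(x) for x in xs) / 6.0 or 1.0
    q = [min(FP4_GRID, key=lambda g: abs(abs(x) / scale - g))
         * (1 if x >= 0 else -1)
         for x in xs]
    return scale, q

def dequantize(scale, q):
    return [scale * v for v in q]

scale, q = quantize_block([0.1, -0.7, 2.4, -3.0])
approx = dequantize(scale, q)  # small values lose precision, extremes survive
```

    The key design point this sketch shows: storing one higher-precision scale per small block lets 4-bit codes cover a wide dynamic range, which is what makes such formats viable for LLM inference weights and activations.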
    Open-Source Collaborations Propel Inference Innovation
    NVIDIA contributes to several open-source libraries and frameworks that accelerate and optimize AI workloads for LLMs and distributed inference. These include NVIDIA TensorRT-LLM, NVIDIA Dynamo, TileIR, Cutlass, the NVIDIA Collective Communication Library and NIX — which are integrated into millions of workflows.
    Allowing developers to build with their framework of choice, NVIDIA has collaborated with top open framework providers to offer model optimizations for FlashInfer, PyTorch, SGLang, vLLM and others.
    Plus, NVIDIA NIM microservices are available for popular open models like OpenAI’s gpt-oss and Llama 4,  making it easy for developers to operate managed application programming interfaces with the flexibility and security of self-hosting models on their preferred infrastructure.
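    NIM microservices expose OpenAI-compatible HTTP APIs, so a self-hosted model is queried with a standard chat-completions request. A minimal sketch of such a request body follows; the endpoint URL and model identifier are illustrative placeholders, not values from the article:

```python
import json

# Build the JSON body for an OpenAI-compatible chat completion request —
# the interface NIM-style services expose. No network call is made here;
# NIM_URL and the model id are hypothetical placeholders.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "gpt-oss",  # placeholder model id
    "messages": [{"role": "user", "content": "Explain NVLink in one sentence."}],
    "max_tokens": 64,
}
body = json.dumps(payload)
```

    In practice this body would be POSTed to the microservice with any HTTP client, which is what makes self-hosting interchangeable with managed APIs from the application's point of view.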
    Learn more about the latest advancements in inference and accelerated computing by joining NVIDIA at Hot Chips.
     
    Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale — All Built on NVIDIA
    blogs.nvidia.com
  • How is everyone doing? Today I want to talk to you about a topic concerning fast fashion and green claims. Italy has decided to fine Shein $1.1 million over its misleading environmental claims, a move that collides head-on with the world of fast fashion. It's an important step, but what does it actually mean for us as consumers?

    Personally, I try to be mindful of what I buy, and I feel it's essential to choose brands that respect the environment, but plenty of them exploit that idea. How can we tell the genuine from the fake?

    Let's think together about the choices we make and about our role as individuals in supporting the environment.

    https://forbesmiddleeast.com/consumer/retail/italy-fines-sheins-european-web-operator-$11m-over-misleading-green-claims-in-fast-fashion-push
    #Fashion #Environment #SustainableFashion #Shein
    forbesmiddleeast.com
    Italy Fines Shein's European Web Operator $1.1M Over Misleading Green Claims In Fast Fashion Push
  • RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer

    Japan is once again building a landmark high-performance computing system — not simply by chasing speed, but by rethinking how technology can best serve the nation’s most urgent scientific needs.
    At the FugakuNEXT International Initiative Launch Ceremony held in Tokyo on Aug. 22, leaders from RIKEN, Japan’s top research institute, announced the start of an international collaboration with Fujitsu and NVIDIA to co-design FugakuNEXT, the successor to the world-renowned supercomputer, Fugaku.
    Awarded early in the process, the contract enables the partners to work side by side in shaping the system’s architecture to address Japan’s most critical research priorities — from earth systems modeling and disaster resilience to drug discovery and advanced manufacturing.
    More than an upgrade, the effort will highlight Japan’s embrace of modern AI and showcase Japanese innovations that can be harnessed by researchers and enterprises across the globe.
    The ceremony featured remarks from the initiative’s leaders, RIKEN President Makoto Gonokami and Satoshi Matsuoka, director of the RIKEN Center for Computational Science and one of Japan’s most respected high-performance computing architects.
    Fujitsu Chief Technology Officer Vivek Mahajan attended, emphasizing the company’s role in advancing Japan’s computing capabilities.
    Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, attended in person as well to discuss the collaborative design approach and how the resulting platform will serve as a foundation for innovation well into the next decade.
    Momentum has been building. When NVIDIA founder and CEO Jensen Huang touched down in Tokyo last year, he called on Japan to seize the moment — to put NVIDIA’s latest technologies to work building its own AI, on its own soil, with its own infrastructure.
    FugakuNEXT answers that call, drawing on NVIDIA’s whole software stack —  from NVIDIA CUDA-X libraries such as NVIDIA cuQuantum for quantum simulation, RAPIDS for data science, NVIDIA TensorRT for high-performance inference and NVIDIA NeMo for large language model development, to other domain-specific software development kits tailored for science and industry.
    Innovations pioneered on FugakuNEXT could become blueprints for the world.
    What’s Inside
    FugakuNEXT will be a hybrid AI-HPC system, combining simulation and AI workloads.
    It will feature FUJITSU-MONAKA-X CPUs, which can be paired with NVIDIA technologies using NVLink Fusion, new silicon enabling high-bandwidth connections between Fujitsu’s CPUs and NVIDIA’s architecture.
    The system will be built for speed, scale and efficiency.
    What It Will Do
    FugakuNEXT will support a wide range of applications — such as automating hypothesis generation, code creation and experiment simulation.

    Scientific research: Accelerating simulations with surrogate models and physics-informed neural networks.
    Manufacturing: Using AI to learn from simulations to generate efficient and aesthetically pleasing designs faster than ever before.
    Earth systems modeling: Aiding disaster preparedness and prediction for earthquakes, severe weather and more.
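    As a generic illustration of the surrogate-model idea in the first bullet (purely didactic, assuming nothing about RIKEN's or NVIDIA's actual codes): cache a handful of costly simulator runs, then answer later queries with cheap interpolation instead of re-running the simulator.

```python
import math
from bisect import bisect_left

def expensive_simulator(x):
    # Stand-in for a costly physics run (here just a cheap analytic function).
    return math.sin(2 * x) + 0.1 * x * x

# Precompute a coarse table of "expensive" evaluations on [0, 2].
xs = [i * 0.1 for i in range(21)]
ys = [expensive_simulator(x) for x in xs]

def surrogate(x):
    # Piecewise-linear interpolation over the cached samples.
    i = min(max(bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

err = abs(surrogate(1.33) - expensive_simulator(1.33))
```

    Real surrogate models replace the lookup table with a trained neural network (and physics-informed variants add the governing equations to the loss), but the trade is the same: a few expensive simulations up front buy fast approximate answers afterwards.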

    RIKEN, Fujitsu and NVIDIA will collaborate on software development, including tools for mixed-precision computing, continuous benchmarking, and performance optimization.
    FugakuNEXT isn’t just a technical upgrade — it’s a strategic investment in Japan’s future.
    Backed by Japan’s MEXT (Ministry of Education, Culture, Sports, Science and Technology), it will serve universities, government agencies, and industry partners nationwide.
    It marks the start of a new era in Japanese supercomputing — one built on sovereign infrastructure, global collaboration, and a commitment to scientific leadership.
    Image courtesy of RIKEN
    RIKEN, Japan’s Leading Science Institute, Taps Fujitsu and NVIDIA for Next Flagship Supercomputer
    blogs.nvidia.com
  • Sometimes life throws big challenges at us — but how do we face them? Today I want to talk about an issue that matters to some mutual fund investors.

    Rosen Law Firm, one of the largest law firms worldwide, reminds everyone who invested in the "Western Asset" funds and suffered losses of more than $100,000 that they should seek legal counsel before the September 5, 2025 deadline. The case concerns the period between January 1, 2021 and October 31, 2023.

    For me, this story underscores the importance of being careful with investments and of seeking advice before making any decision. We all go through hard times, but what matters is being ready to face the challenges.

    We need to think through the options available and stay aware of our rights.

    https://www.globenewswire.com/news-release/2025/08/22/3137626/673/en/WAMCO-DEADLINE-NOTICE-ROSEN-A-GLOBAL-AND-LEADING-LAW
    www.globenewswire.com
    NEW YORK, Aug. 21, 2025 (GLOBE NEWSWIRE) -- WHY: Rosen Law Firm, a global investor rights law firm, reminds purchasers of the “Western Asset US Core Bond Fund” mutual fund classes – Class I (ticker: “WATFX”), Class A (ticker: “WABAX”), Class C (ti
  • Hey everyone, important news for anyone with investments in Petco!

    Rosen Law Firm, known for defending investors' rights, is reminding you of a deadline that isn't far off: August 29, 2025. If you suffered losses of more than $100K, they encourage you to secure counsel before that date. This concerns investments made between January 14, 2021 and June 5, 2025.

    Personally, I've always preferred to stay aware of my rights as an investor, and this kind of news reminds me how important it is to choose the right lawyer.

    Keep it in mind: the opportunity won't come again, and your time is limited!

    https://www.globenewswire.com/news-release/2025/08/22/3137629/673/en/PETCO-IMPORTANT-DEADLINE-ROSEN-LEADING-INVESTOR-COUNSEL-Encourages-Petco-Health-and-Wellness-Company-Inc-Investors-with-Losses-in-Excess-of-
    www.globenewswire.com
    NEW YORK, Aug. 21, 2025 (GLOBE NEWSWIRE) -- WHY: Rosen Law Firm, a global investor rights law firm, reminds purchasers of securities of Petco Health and Wellness Company, Inc. (NASDAQ: WOOF) between January 14, 2021 and June 5, 2025, both dates in
  • Imagine if I told you that Steve Jobs, the tech legend, didn't become a billionaire thanks to Apple, but thanks to Pixar?

    The article says Steve Jobs made a $10 million investment in Pixar, and that deal is what made him a billionaire before he ever launched the iPhone. In other words, success doesn't come only from computers — it also comes from creativity and the art of filmmaking.

    Personally, it makes me think back to the classic films we watched in the nineties and how they reflected the evolution of animation. It shows that creativity can open doors to things we never expected.

    Sometimes we have to think outside the box and try something new, even in fields far from our own specialty.

    https://fortune.com/2025/08/21/apple-cofounder-ceo-steve-jobs-became-a-billionaire-not-selling-computers-but-leading-pixar-ipo-toy
    fortune.com
    Steve Jobs’ $10 million bet on Pixar turned him into a billionaire, long before the introduction of the iPhone.