• How's it going, everyone? Today I've brought you a special video from the AL24news channel! Nigeria will be hosting the fifth edition in 2027, and that's something that opens new horizons for the African continent.

    The video lays out all the details and the challenges Nigeria will face in organizing this major event. Drawing on past experience, every one of us can learn how countries can grow their tourism and economy by hosting global events.

    This topic gets me thinking about how Algeria could learn from these experiences and invest in events that put the country in the spotlight.

    Don't forget to subscribe to the channel and turn on notifications so you always get the latest.

    https://www.youtube.com/watch?v=cgo3iDZ9jDU
    #نيجيريا #افريقيا #EventManagement #SustainableDevelopment #AL24news
  • Hey everyone, today I wanted to share a remarkable project by AMASA Estudio with you: the "UH Infonavit Santa Fe Community Park".

    The project sits in a very difficult area, built on rugged terrain, so the buildings are laid out in different configurations and the paths between them are not easy. Despite that, the team worked to tackle the problems and create better spaces for residents.

    Personally, seeing how creativity can overcome challenges left me hugely inspired! It makes you think about how we can develop our cities and make them more connected with one another.

    Take some time to reflect on this, because the environment around us plays a big role in our daily lives.

    https://www.archdaily.com/1033446/uh-infonavit-santa-fe-community-park-amasa-estudio

    #عمارة #UrbanDesign #Inclusion #مستقبل_المدن #SustainableArchitecture
  • How about experiencing something unique with nature?

    Today we're talking about the bamboo products from "Bamboo Hat" and how they can transform our décor in a natural, gorgeous way! Bamboo isn't just a material; it's the spirit of nature, bringing warmth and beauty into our homes. Through the article, we'll discover how these materials convey elegance and comfort, and what benefits they give us.

    My recent experiences with these products were honestly extraordinary! I felt the difference in the whole atmosphere of the house, as if we had returned to nature.

    Think with us about how we can bring nature into our daily lives; there are plenty of ideas and plenty of inspiration for creating a comfortable, beautiful environment!

    https://www.djelfa.info/vb/showthread.php?t=2339333&goto=newpost
    #خيزران #بامبو #طبيعة #ديكور #SustainableLiving
  • Hey everyone, have you heard the latest news about Hyundai?

    The company has joined hands with "Uncaged Innovations", and they're working on a plant-based leather that looks and smells just like the real thing! The idea is simple but revolutionary: an alternative to traditional leather with a far smaller environmental footprint. As the saying goes, we love our luxuries, but not at the environment's expense, right?

    Honestly, I'm really excited about this kind of innovation. How can we live a modern life and stay environmentally conscious at the same time? That's the kind of thing that gladdens the heart!

    Let's think about a future where each of us can help protect our planet while still enjoying the things we love.

    https://techcrunch.com/2025/08/27/hyundai-is-working-with-a-startup-on-plant-based-leather-that-smells-like-the-real-thing/

    #بيئة #Innovation #Hyundai #جلد_نباتي #SustainableLiving
  • Peace be upon you, everyone!

    I was thinking of you today, especially about the world of offices and work. URBANICA Furniture has brought us new office furnishing solutions that are eco-friendly and comfortable at the same time. There's a big move away from "fast furniture" and things that break after a short while. In other words, instead of buying chairs and tables that barely last, why not choose quality and sustainability?

    Honestly, I've tried comfortable chairs and found they had a genuinely positive effect on my focus and my comfort at work. As the saying goes, "comfort is what switches on creativity."

    Let's think a little about the options out there and support sustainability; each of us can be part of the change.

    https://www.globenewswire.com/news-release/2025/08/27/3139803/0/en/URBANICA-Highlights-Sustainable-Office-Furniture-as-Consumers-Turn-Away-From-Fast-Furniture-Trends.html

    #أثاث_مكتبي #SustainableFurniture
  • Developer Rec Room lays off 'about half' its staff

    Diego Argüello, Contributing Editor, News, GameDeveloper.com · August 26, 2025 · 3 Min Read

    Developer Rec Room, the team behind the namesake user-generated content (UGC) driven social game, has laid off "about half" its staff.

    Announced yesterday via the official site, CEO and co-founder Nick Fajt wrote that both he and CCO and co-founder Cameron Brown made the decision, which they called a "business necessity based on the financial trajectory of the company" that doesn't reflect on the individuals affected.

    "This is not a reflection on the talent or dedication of those departing—we wish we could keep every one of them," reads the announcement. "I'm gonna say that again, to make it clear this isn't just 'one of those things you say in a layoff message'. We TRULY wish we could keep every one of these people on the team. But we can't. This is a reflection of the tough reality we face as a business and the change needed to give Rec Room a chance to thrive in the years ahead."

    According to the post, the laid-off workers will continue to be paid for the next three months, receive health benefits for the next six months, and have the option to keep their laptop or desktop computer. Rec Room didn't specify how many people were affected.

    'The writing on the wall became very clear'

    Back in December 2021, Rec Room raised $145 million for its social platform, bringing the company's lifetime raised funds to around $294 million. According to Brown, the team invested "heavily in creation tools across PC, VR, consoles, and mobile," but the reality "has been harsh." The CEO claims the mobile and console versions never got to the point where "those devices were good for building stuff." Some of the efforts to bridge the gap, including the Maker AI tool, frustrated the studio's "more impactful creators."

    At the same time, the lower-powered devices still fostered "millions of pieces of content," which reportedly put a strain on the team that had to come up with procedures to review it all. "Making all this run across every device was a massive technical challenge and burden. While our most skilled creators optimized their content cleverly, most creators didn't—couldn't, really, because we didn't provide them with the necessary tooling."

    Last month, Fajt announced that Rec Room hit a "record-breaking month" for UGC sales thanks to the creations from players, with creator token earnings from room and Watch store sales increasing 47 percent year-over-year.

    "We deliberately started with a small group of creators as the Avatar Studio tool is still in the early stages," Fajt wrote at the time. "All of the early joiners helped us iron out the workflow and onboarding, providing feedback on how to improve our systems and processes. With creators already finding success, we're ready to expand."

    Today's announcement continues by saying that supporting the aforementioned scope stretched the team thin, and began to "dig a financial hole that was getting larger every day." The CEO says the studio has been stuck in an "uncomfortable middle ground" during the past few years, wondering whether to keep pushing the internal UGC vision while potentially increasing the frustration of players and the team, or scale back the vision by cutting the team in half.

    "Both paths were painful," Brown wrote. "But ultimately we got to a point where it was clear that staying the course meant low growth, a high burn rate, and no clear path forward. In a word: Unsustainable. The writing on the wall became very clear."

    Looking forward, Brown says the team will focus on "empowering our very best creators" and "ensuring Rec Room is a great experience for our players."

    "For those leaving—you will always be part of the Rec Room story," Brown wrote as a closing note about the layoffs. "We thank you for everything, and wish you the best for your next chapter. For those staying—we know this sucks. We know this hurts. Thank you for pushing forward with us—we have hard work ahead, but with a new focus we believe strongly in the future we can build together."

    Game Developer has reached out to Rec Room for clarification on the number of workers affected.
  • Once, while walking in the mountains of Blida, I saw small farms that were stunningly green. The farmers were working hard, caring for the land and making it flourish. Those scenes reminded me of a new video about mountain farming as a tool for local development and environmental protection.

    The video highlights how these farms help improve livelihoods and the balance of the environment, and how farmers in Blida combine tradition with modern technology to build a better future.

    Personally, I have memories with my grandfather, who had a small farm and taught me how to care for the land. This video motivates me to think about sustainability and how we can be part of this positive change.

    Watch the video and keep an open mind for new ideas!
    https://www.youtube.com/watch?v=lnxahdnPgMY
    #البليدة #فلاحة #SustainableAgriculture #LocalDevelopment #EcoFriendly
  • Have you seen the prickly pear, folks? It has quite a story in Boumerdès! The article talks about how this fruit, which grows high in the mountains, has become an important source of income for farmers, though it comes with big challenges.

    As farmers work this crop, they try to balance profit against the challenges they face, especially from nature. I've seen with my own eyes how one of my neighbors grows prickly pear, and the joy it brings his family when the harvest grows.

    The article offers a glimpse into this world and invites us to think about how we can use our natural resources wisely.

    https://www.elbilad.net/videos/بومرداس-التين-الشوكي-مورد-رزق-موسمي-وتحد-للفلاحين-في-أعالي-الجبال

    #بومرداس #التين_الشوكي #Agriculture #SustainableFarming #Nature
  • How's it going, everyone! Want to know how cities are coming back to life along the water? A new article, "Urban Waterfronts Are Making a Sustainable Rebound", looks at how neglected waterfront spaces are being redeveloped into beautiful, eco-friendly places!

    These days, every one of us knows what a big role nature plays in our lives, like when we take a stroll along the riverbank or share a get-together with friends by the sea. That's how these spaces are being put to use to create new environments for living and leisure, turning into vibrant hubs that bring people together and raise environmental awareness.

    As for me, I see it as a great opportunity to wake up and make use of the spaces we have, like what was done with old buildings that became shops and entertainment venues. If each of us contributes a little, we can change our surroundings.

    Read the article and think about how we can…
  • How's it going, everyone? Today I wanted to talk to you about a topic concerning fast fashion and green claims. Italy has decided to fine "Shein" $1.1 million over its misleading environmental claims, all amid the clash with the world of fast fashion. It's an important step, but what does it actually mean for us as consumers?

    Personally, I like to be mindful about what I buy, and I feel it's essential to choose brands that respect the environment, but there are plenty who exploit that idea. How can we tell the genuine from the fake?

    Let's think together about the choices we make and our role as individuals in supporting the environment.

    https://forbesmiddleeast.com/consumer/retail/italy-fines-sheins-european-web-operator-$11m-over-misleading-green-claims-in-fast-fashion-push
    #موضة #بيئة #Fashion #SustainableFashion #شي_إن
  • Hey everyone, have you seen how far we've come? A 100% electric stadium in Oxford!

    Today's news says Oxford United has received approval to build a new stadium that runs entirely on electricity. The project comes with a first-rate team of engineers and designers, and the stadium will hold 16,000 fans! On top of that, there will be a 1,000-person event space, a 180-room hotel, a restaurant, a health center, and gardens. Amazing!

    Personally, I love the idea of sustainable facilities, especially when it comes to football, where all the excitement and energy are. Imagine the atmosphere in that stadium; it's like something out of a movie!

    Let's think about a future where each of us can contribute to the transition to clean energy.

    https://www.archdaily.com/1033361/afl-architects-all-electric-stadium-in-oxford-receives-planning-approval

    #ملعب_كهربائي #SustainableDesign #
  • Gearing Up for the Gigawatt Data Center Age

    Across the globe, AI factories are rising — massive new data centers built not to serve up web pages or email, but to train and deploy intelligence itself. Internet giants have invested billions in cloud-scale AI infrastructure for their customers. Companies are racing to build AI foundries that will spawn the next generation of products and services. Governments are investing too, eager to harness AI for personalized medicine and language services tailored to national populations.
    Welcome to the age of AI factories — where the rules are being rewritten and the wiring doesn’t look anything like the old internet. These aren’t typical hyperscale data centers. They’re something else entirely. Think of them as high-performance engines stitched together from tens to hundreds of thousands of GPUs — not just built, but orchestrated, operated and activated as a single unit. And that orchestration? It’s the whole game.
    This giant data center has become the new unit of computing, and the way these GPUs are connected defines what this unit of computing can do. One network architecture won’t cut it. What’s needed is a layered design with bleeding-edge technologies — like co-packaged optics that once seemed like science fiction.
    The complexity isn't a bug; it's the defining feature. AI infrastructure is diverging fast from everything that came before it, and without a rethink of how the pipes connect, scale breaks down. Get the network layers wrong and the whole machine grinds to a halt; get it right and you gain extraordinary performance.
    With that shift comes weight — literally. A decade ago, chips were built to be sleek and lightweight. Now, the cutting edge looks like the multi‑hundred‑pound copper spine of a server rack. Liquid-cooled manifolds. Custom busbars. Copper spines. AI now demands massive, industrial-scale hardware. And the deeper the models go, the more these machines scale up, and out.
    The NVIDIA NVLink spine, for example, is built from over 5,000 coaxial cables — tightly wound and precisely routed. It moves more data per second than the entire internet. That’s 130 TB/s of GPU-to-GPU bandwidth, fully meshed.
    This isn’t just fast. It’s foundational. The AI super-highway now lives inside the rack.
    The Data Center Is the Computer

    Training the modern large language models (LLMs) behind AI isn't about burning cycles on a single machine. It's about orchestrating the work of tens or even hundreds of thousands of GPUs that are the heavy lifters of AI computation.
    These systems rely on distributed computing, splitting massive calculations across nodes (individual servers), where each node handles a slice of the workload. In training, those slices — typically massive matrices of numbers — need to be regularly merged and updated. That merging occurs through collective operations, such as "all-reduce" (which combines data from all nodes and redistributes the result) and "all-to-all" (where each node exchanges data with every other node).
    These processes are sensitive to the speed and responsiveness of the network — what engineers call latency (delay) and bandwidth (data capacity) — and any shortfall causes stalls in training.
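    To make that merge step concrete, here is a minimal sketch of an all-reduce using PyTorch's torch.distributed package (an assumption for illustration: the article names the collectives but prescribes no framework; NCCL, which torch.distributed can use as a backend, implements them on NVIDIA GPUs).

    ```python
    # Launch with: torchrun --nproc_per_node=4 allreduce_sketch.py
    import torch
    import torch.distributed as dist

    def main():
        # Each process plays the role of one "node" holding a slice of the work;
        # rank and world size are supplied by the torchrun launcher.
        dist.init_process_group(backend="nccl")
        rank = dist.get_rank()
        torch.cuda.set_device(rank % torch.cuda.device_count())

        # Stand-in for a local gradient shard: a tensor filled with the rank id.
        grads = torch.full((4,), float(rank), device="cuda")

        # all-reduce: sum the tensors from every rank and leave the identical
        # result on all of them. Training repeats this merge constantly, which
        # is why network latency and bandwidth dominate at scale.
        dist.all_reduce(grads, op=dist.ReduceOp.SUM)
        print(f"rank {rank}: {grads.tolist()}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()
    ```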
    For inference — the process of running trained models to generate answers or predictions — the challenges flip. Retrieval-augmented generation systems, which combine LLMs with search, demand real-time lookups and responses. And in cloud environments, multi-tenant inference means keeping workloads from different customers running smoothly, without interference. That requires lightning-fast, high-throughput networking that can handle massive demand with strict isolation between users.
    Traditional Ethernet was designed for single-server workloads — not for the demands of distributed AI. Jitter and inconsistent delivery were once tolerable; now they're a bottleneck. Traditional Ethernet switch architectures were never designed for consistent, predictable performance — and that legacy still shapes their latest generations.
    Distributed computing requires a scale-out infrastructure built for zero-jitter operation — one that can handle bursts of extreme throughput, deliver low latency, maintain predictable and consistent RDMA performance, and isolate network noise. This is why InfiniBand networking is the gold standard for high-performance computing supercomputers and AI factories.
    With NVIDIA Quantum InfiniBand, collective operations run inside the network itself using Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology, doubling data bandwidth for reductions. It uses adaptive routing and telemetry-based congestion control to spread flows across paths, guarantee deterministic bandwidth and isolate noise. These optimizations let InfiniBand scale AI communication with precision. It's why NVIDIA Quantum infrastructure connects the majority of the systems on the TOP500 list of the world's most powerful supercomputers, demonstrating 35% growth in just two years.
    For clusters spanning dozens of racks, NVIDIA Quantum‑X800 InfiniBand switches push InfiniBand to new heights. Each switch provides 144 ports of 800 Gb/s connectivity, featuring hardware-based SHARPv4, adaptive routing and telemetry-based congestion control. The platform integrates co‑packaged silicon photonics to minimize the distance between electronics and optics, reducing power consumption and latency. Paired with NVIDIA ConnectX-8 SuperNICs delivering 800 Gb/s per GPU, this fabric links trillion-parameter models and drives in-network compute.
    But hyperscalers and enterprises have invested billions in their Ethernet software infrastructure. They need a quick path forward that uses the existing ecosystem for AI workloads. Enter NVIDIA Spectrum‑X: a new kind of Ethernet purpose-built for distributed AI.
    Spectrum‑X Ethernet: Bringing AI to the Enterprise

    Spectrum‑X reimagines Ethernet for AI. Launched in 2023, Spectrum‑X delivers lossless networking, adaptive routing and performance isolation. The SN5610 switch, based on the Spectrum‑4 ASIC, supports port speeds up to 800 Gb/s and uses NVIDIA's congestion control to maintain 95% data throughput at scale.
    Spectrum‑X is fully standards‑based Ethernet. In addition to supporting Cumulus Linux, it supports the open‑source SONiC network operating system — giving customers flexibility. A key ingredient is NVIDIA SuperNICs — based on NVIDIA BlueField-3 or ConnectX-8 — which provide up to 800 Gb/s RoCE connectivity and offload packet reordering and congestion management.
    Spectrum-X brings InfiniBand’s best innovations — like telemetry-driven congestion control, adaptive load balancing and direct data placement — to Ethernet, enabling enterprises to scale to hundreds of thousands of GPUs. Large-scale systems with Spectrum‑X, including the world’s most colossal AI supercomputer, have achieved 95% data throughput with zero application latency degradation. Standard Ethernet fabrics would deliver only ~60% throughput due to flow collisions.
    A Portfolio for Scale‑Up and Scale‑Out
    No single network can serve every layer of an AI factory. NVIDIA’s approach is to match the right fabric to the right tier, then tie everything together with software and silicon.
    NVLink: Scale Up Inside the Rack
    Inside a server rack, GPUs need to talk to each other as if they were different cores on the same chip. NVIDIA NVLink and NVLink Switch extend GPU memory and bandwidth across nodes. In an NVIDIA GB300 NVL72 system, 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs are connected in a single NVLink domain, with an aggregate bandwidth of 130 TB/s. NVLink Switch technology further extends this fabric: a single GB300 NVL72 system can offer 130 TB/s of GPU bandwidth, enabling clusters to support 9x the GPU count of a single 8‑GPU server. With NVLink, the entire rack becomes one large GPU.
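    Those rack-level figures are easy to sanity-check. A tiny sketch (the totals are quoted in the article; the per-GPU split is derived here, not quoted):

    ```python
    # GB300 NVL72 numbers from the article; the division is back-of-the-envelope.
    aggregate_tb_s = 130   # NVLink-domain GPU-to-GPU bandwidth, TB/s
    gpus = 72              # Blackwell Ultra GPUs in one GB300 NVL72 rack

    print(f"per-GPU NVLink bandwidth: ~{aggregate_tb_s / gpus:.1f} TB/s")  # ~1.8 TB/s
    print(f"scale-up vs. an 8-GPU server: {gpus // 8}x")                   # 9x
    ```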
    Photonics: The Next Leap

    To reach million‑GPU AI factories, the network must break the power and density limits of pluggable optics. NVIDIA Quantum-X and Spectrum-X Photonics switches integrate silicon photonics directly into the switch package, delivering 128 to 512 ports of 800 Gb/s with total bandwidths ranging from 100 Tb/s to 400 Tb/s. These switches offer 3.5x more power efficiency and 10x better resiliency compared with traditional optics, paving the way for gigawatt‑scale AI factories.
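    The quoted totals follow directly from port count times per-port rate; a quick check (figures from the article, rounding is mine):

    ```python
    # Co-packaged photonics switch bandwidth: ports x 800 Gb/s per port.
    port_rate_gb_s = 800
    for ports in (128, 512):
        total_tb_s = ports * port_rate_gb_s / 1000   # Gb/s -> Tb/s
        print(f"{ports} ports -> {total_tb_s:.1f} Tb/s")
    # 128 -> 102.4 Tb/s (the ~100 Tb/s low end)
    # 512 -> 409.6 Tb/s (the ~400 Tb/s high end)
    ```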

    Delivering on the Promise of Open Standards

    Spectrum‑X and NVIDIA Quantum InfiniBand are built on open standards. Spectrum‑X is fully standards‑based Ethernet with support for open Ethernet stacks like SONiC, while NVIDIA Quantum InfiniBand and Spectrum-X conform to the InfiniBand Trade Association's InfiniBand and RDMA over Converged Ethernet (RoCE) specifications. Key elements of NVIDIA's software stack — including NCCL and DOCA libraries — run on a variety of hardware, and partners such as Cisco, Dell Technologies, HPE and Supermicro integrate Spectrum-X into their systems.

    Open standards create the foundation for interoperability, but real-world AI clusters require tight optimization across the entire stack — GPUs, NICs, switches, cables and software. Vendors that invest in end‑to‑end integration deliver better latency and throughput. SONiC, the open‑source network operating system hardened in hyperscale data centers, eliminates licensing and vendor lock‑in and allows intense customization, but operators still choose purpose‑built hardware and software bundles to meet AI’s performance needs. In practice, open standards alone don’t deliver deterministic performance; they need innovation layered on top.

    Toward Million‑GPU AI Factories
    AI factories are scaling fast. Governments in Europe are building seven national AI factories, while cloud providers and enterprises across Japan, India and Norway are rolling out NVIDIA‑powered AI infrastructure. The next horizon is gigawatt‑class facilities with a million GPUs. To get there, the network must evolve from an afterthought to a pillar of AI infrastructure.
    The lesson from the gigawatt data center age is simple: the data center is now the computer. NVLink stitches together GPUs inside the rack. NVIDIA Quantum InfiniBand scales them across it. Spectrum-X brings that performance to broader markets. Silicon photonics makes it sustainable. Everything is open where it matters, optimized where it counts.
     
     

     