• Hey everyone, there's something important we need to understand about the tech world!

    The new article, titled "The microservices fallacy - Part 3", tackles the widespread claim that solutions become simpler with microservices. In the previous part we discussed how microservices are not always necessary for scalability. Today we focus on the second premise: do microservices really make things easier?

    In reality, this idea doesn't necessarily hold. According to James Lewis, a microservice should be small enough for one person to understand it completely. But does that actually mean the resulting solutions are simple?

    I personally went through the move from traditional systems to microservices, and believe me, things weren't as easy as I had imagined. Sometimes the added complexity outweighs the benefits.

    Think it over, and let's discuss how to handle these questions in a fast-moving tech world.

    https://ufried.com
  • Hello friends! Today we're talking about a topic we've already dipped into a bit: "The microservices fallacy - Part 5". The article examines how people come to believe that microservices are the best way to design systems.

    In the previous post we covered two common fallacies: the first about reusability and the second about team autonomy. Today we get into the fifth fallacy, the belief that microservices always improve solution design. Honestly, in my experience it was sometimes the opposite!

    Let's always think carefully before making any decision. The problem isn't the technology itself, but the way we use it.

    Trust me, this article is worth reading and will give you a fresh perspective!

    https://ufried.com/blog/microservices_fallacy_5_design/
    #Microservices #SolutionDesign #Technology #Innovation #DigitalTransformation
  • Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale — All Built on NVIDIA

    AI reasoning, inference and networking will be top of mind for attendees of next week’s Hot Chips conference.
    A key forum for processor and system architects from industry and academia, Hot Chips — running Aug. 24-26 at Stanford University — showcases the latest innovations poised to advance AI factories and drive revenue for the trillion-dollar data center computing market.
    At the conference, NVIDIA will join industry leaders including Google and Microsoft in a “tutorial” session — taking place on Sunday, Aug. 24 — that discusses designing rack-scale architecture for data centers.
    In addition, NVIDIA experts will present at four sessions and one tutorial detailing how:

    NVIDIA networking, including the NVIDIA ConnectX-8 SuperNIC, delivers AI reasoning at rack- and data-center scale. (Featuring Idan Burstein, principal architect of network adapters and systems-on-a-chip at NVIDIA)
    Neural rendering advancements and massive leaps in inference — powered by the NVIDIA Blackwell architecture, including the NVIDIA GeForce RTX 5090 GPU — provide next-level graphics and simulation capabilities. (Featuring Marc Blackstein, senior director of architecture at NVIDIA)
    Co-packaged optics (CPO) switches with integrated silicon photonics — built with light-speed fiber rather than copper wiring to send information quicker and using less power — enable efficient, high-performance, gigawatt-scale AI factories. The talk will also highlight NVIDIA Spectrum-XGS Ethernet, a new scale-across technology for unifying distributed data centers into AI super-factories. (Featuring Gilad Shainer, senior vice president of networking at NVIDIA)
    The NVIDIA GB10 Superchip serves as the engine within the NVIDIA DGX Spark desktop supercomputer. (Featuring Andi Skende, senior distinguished engineer at NVIDIA)

    It's all part of how NVIDIA's latest technologies are accelerating inference to drive AI innovation everywhere, at every scale.
    NVIDIA Networking Fosters AI Innovation at Scale
    AI reasoning — when artificial intelligence systems can analyze and solve complex problems through multiple AI inference passes — requires rack-scale performance to deliver optimal user experiences efficiently.
    In data centers powering today’s AI workloads, networking acts as the central nervous system, connecting all the components — servers, storage devices and other hardware — into a single, cohesive, powerful computing unit.
    NVIDIA ConnectX-8 SuperNIC
    Burstein’s Hot Chips session will dive into how NVIDIA networking technologies — particularly NVIDIA ConnectX-8 SuperNICs — enable high-speed, low-latency, multi-GPU communication to deliver market-leading AI reasoning performance at scale.
    As part of the NVIDIA networking platform, NVIDIA NVLink, NVLink Switch and NVLink Fusion deliver scale-up connectivity — linking GPUs and compute elements within and across servers for ultra low-latency, high-bandwidth data exchange.
    NVIDIA Spectrum-X Ethernet provides the scale-out fabric to connect entire clusters, rapidly streaming massive datasets into AI models and orchestrating GPU-to-GPU communication across the data center. Spectrum-XGS Ethernet scale-across technology extends the extreme performance and scale of Spectrum-X Ethernet to interconnect multiple, distributed data centers to form AI super-factories capable of giga-scale intelligence.
    Connecting distributed AI data centers with NVIDIA Spectrum-XGS Ethernet.
    At the heart of Spectrum-X Ethernet, CPO switches push the limits of performance and efficiency for AI infrastructure at scale, and will be covered in detail by Shainer in his talk.
    NVIDIA GB200 NVL72 — an exascale computer in a single rack — features 36 NVIDIA GB200 Superchips, each containing two NVIDIA B200 GPUs and an NVIDIA Grace CPU, interconnected by the largest NVLink domain ever offered, with NVLink Switch providing 130 terabytes per second of low-latency GPU communications for AI and high-performance computing workloads.
    An NVIDIA rack-scale system.
    Built with the NVIDIA Blackwell architecture, GB200 NVL72 systems deliver massive leaps in reasoning inference performance.
    NVIDIA Blackwell and CUDA Bring AI to Millions of Developers
    The NVIDIA GeForce RTX 5090 GPU — also powered by Blackwell and to be covered in Blackstein’s talk — doubles performance in today’s games with NVIDIA DLSS 4 technology.
    NVIDIA GeForce RTX 5090 GPU
    It can also add neural rendering features for games to deliver up to 10x performance, 10x footprint amplification and a 10x reduction in design cycles, helping enhance realism in computer graphics and simulation. This offers smooth, responsive visual experiences at low energy consumption and improves the lifelike simulation of characters and effects.
    NVIDIA CUDA, the world’s most widely available computing infrastructure, lets users deploy and run AI models using NVIDIA Blackwell anywhere.
    Hundreds of millions of GPUs run CUDA across the globe, from NVIDIA GB200 NVL72 rack-scale systems to GeForce RTX– and NVIDIA RTX PRO-powered PCs and workstations, with NVIDIA DGX Spark powered by NVIDIA GB10 — discussed in Skende’s session — coming soon.
    From Algorithms to AI Supercomputers — Optimized for LLMs
    NVIDIA DGX Spark
    Delivering powerful performance and capabilities in a compact package, DGX Spark lets developers, researchers, data scientists and students push the boundaries of generative AI right at their desktops, and accelerate workloads across industries.
    As part of the NVIDIA Blackwell platform, DGX Spark brings support for NVFP4, a low-precision numerical format to enable efficient agentic AI inference, particularly of large language models. Learn more about NVFP4 in this NVIDIA Technical Blog.
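    For readers curious what a block-scaled 4-bit format means in practice, here is a minimal, illustrative NumPy sketch of the general idea: values are grouped into small blocks, each block gets a scale factor, and the scaled values are rounded to the nearest entry of the 4-bit floating-point (E2M1) grid. The block size of 16 and the value grid follow public descriptions of NVFP4, but this is not NVIDIA's implementation; for simplicity the scale here stays in full precision, whereas NVFP4 stores FP8 block scales.

    ```python
    import numpy as np

    # Representable magnitudes of a 4-bit E2M1 float (1 sign, 2 exponent, 1 mantissa bit).
    E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

    def quantize_nvfp4_like(x: np.ndarray, block_size: int = 16) -> np.ndarray:
        """Fake-quantize a 1-D tensor with per-block scaling to an E2M1-style grid.

        Illustrative only: real NVFP4 stores 4-bit codes plus FP8 block scales;
        here the scale stays in float32 and we return the dequantized values.
        """
        x = x.astype(np.float32)
        pad = (-len(x)) % block_size
        blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

        # One scale per block, mapping the block's max magnitude onto the grid's max (6.0).
        scales = np.abs(blocks).max(axis=1, keepdims=True) / E2M1_GRID[-1]
        scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks

        scaled = blocks / scales
        # Round each scaled value to the nearest representable E2M1 magnitude, keeping sign.
        idx = np.abs(np.abs(scaled)[..., None] - E2M1_GRID).argmin(axis=-1)
        quantized = np.sign(scaled) * E2M1_GRID[idx]

        return (quantized * scales).reshape(-1)[: len(x)]

    weights = np.random.randn(64).astype(np.float32)
    approx = quantize_nvfp4_like(weights)
    print("max abs error:", np.abs(weights - approx).max())
    ```

    The appeal of the format is visible even in this toy version: each value costs 4 bits plus a shared per-block scale, and the error stays bounded because every block is rescaled to use the full grid.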
    Open-Source Collaborations Propel Inference Innovation
    NVIDIA contributes to several open-source libraries and frameworks that accelerate and optimize AI workloads for LLMs and distributed inference. These include NVIDIA TensorRT-LLM, NVIDIA Dynamo, TileIR, Cutlass, the NVIDIA Collective Communication Library and NIX — which are integrated into millions of workflows.
    Allowing developers to build with their framework of choice, NVIDIA has collaborated with top open framework providers to offer model optimizations for FlashInfer, PyTorch, SGLang, vLLM and others.
    Plus, NVIDIA NIM microservices are available for popular open models like OpenAI's gpt-oss and Llama 4, making it easy for developers to operate managed application programming interfaces with the flexibility and security of self-hosting models on their preferred infrastructure.
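    As a concrete illustration of what "a managed-style API with self-hosting" looks like, here is a minimal sketch that assumes a NIM container for an open model is already running locally and exposing its OpenAI-compatible endpoint; the port and model name below are assumptions for the example, not fixed values.

    ```python
    # Minimal sketch: querying a locally hosted NVIDIA NIM microservice through its
    # OpenAI-compatible API. Assumes a NIM container is already running on
    # localhost:8000 and serving the model named below; both are illustrative.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # NIM exposes an OpenAI-compatible endpoint
        api_key="not-needed-for-local",       # local deployments typically ignore the key
    )

    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",   # hypothetical example model name
        messages=[{"role": "user", "content": "Summarize what a SuperNIC does."}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)
    ```

    Because the endpoint follows the OpenAI API shape, existing client code can usually be pointed at a self-hosted NIM by changing only the base URL and model name.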
    Learn more about the latest advancements in inference and accelerated computing by joining NVIDIA at Hot Chips.
     
    #hot #topics #chips #inference #networking
  • Hello, my friends!

    Today I'd like to tell you about an interesting topic: "The microservices fallacy - Part 1". The article covers the common fallacies in the microservices world, and as you know, everyone has an opinion on this subject. It starts with some motivation for revisiting microservices critically, then looks at their origins, debunks a few misconceptions, and walks through the common fallacies we should avoid.

    My programming career has taught me that microservices are not a magic fix for every problem. In some cases the alternatives are far better, which is exactly where this article comes in. So let's stay aware and weigh the available options before diving into the microservices world.

    Read the article and share your thoughts with me.

    https://ufried.com/blog/microservices_fallacy_1/

    #ما
  • Folks, have you ever thought about monoliths and how some companies try to break them apart to enter the microservices world? The new article "Let's (not) break up the monolith - Part 2" discusses this trend. Companies are chasing better time to market and an escape from the mess, but the truth is that the problem usually isn't the technology; it's the organization and the people.

    From my own experience, when we started thinking about changing our system, we discovered the problems ran deeper than technology. We had to rethink how we worked and collaborated.

    Think this topic over, and stay open to new suggestions that could change the way you work.

    https://ufried.com/blog/break_up_the_monolith_2/

    #Technology #Microservices #Innovation #Collaboration #DigitalTransformation
  • As the saying goes, "knock on the door and it will open for you". Plenty of clients come to us wanting to break up their monolith and move to microservices. But is that really the solution to the problems they have?

    In the article titled "Let's (not) break up the monolith - Part 1", we look at how clients become convinced that breaking up the monolith will solve all their problems. Most of the time, though, they don't consider how this step will affect their system as a whole. "Technology without strategy is a vehicle without a driver": we need to think through our choices before we act.

    The key point: any discussion of technical solutions must be backed by a deep understanding of the real problems.

    You can read more at this link:
    https://ufried.com/blog/break_up_the_monolith_1/

    #Microservices #Monolith #DigitalTransformation #TechStrategy #Innovation
  • In the tech world there's a lot of talk about microservices, especially now that we've seen new projects launch at local companies. But do we really need them to solve scaling problems?

    The new article "The microservices fallacy - Part 2" takes on the mistaken idea that microservices are the magic solution for scaling applications. This part digs deeper into the notion of "scaling" and looks at why so many people believe microservices are the only key.

    Personally, in my experience on several projects, I've found that choosing a suitable architecture matters far more than switching to microservices. Sometimes the simple solutions are the closest to the answer.

    Keep in mind that scaling doesn't always require new complexity; we should think carefully before making decisions.

    https://ufried.com/blog/microservices_fallacy_2_scalability/
    #Technology #Microservices #Scaling #Scalability #Programming
  • Hello, my friend! Today we're discussing an important topic in the tech world.

    The new article, titled "The microservices fallacy - Part 6", addresses the mistaken idea that microservices make it easier to change technologies and explore new ones. Honestly, this is a sensitive point, especially these days when everything changes so fast.

    From my own experience, I used to think microservices would make things easier, but I found myself facing even more challenges. Technology isn't just about swapping tools; it requires deep thought and planning.

    Let me put it this way: as we all know, strategic thinking and adapting to change are the foundation.

    Don't forget to check out the article and share your opinion with us!
    https://ufried.com/blog/microservices_fallacy_6_technology/
    #Microservices #Technology #Change #Innovation
  • New Lightweight AI Model for Project G-Assist Brings Support for 6GB NVIDIA GeForce RTX and RTX PRO GPUs

    At Gamescom, NVIDIA is releasing its first major update to Project G‑Assist — an experimental on-device AI assistant that allows users to tune their NVIDIA RTX systems with voice and text commands.
    The update brings a new AI model that uses 40% less VRAM, improves tool-calling intelligence and extends G-Assist support to all RTX GPUs with 6GB or more VRAM, including laptops. Plus, a new G-Assist Plug-In Hub enables users to easily discover and download plug-ins to enable more G-Assist features.
    NVIDIA also announced a new path-traced particle system, coming in September to the NVIDIA RTX Remix modding platform, that brings fully simulated physics, dynamic shadows and realistic reflections to visual effects.
    In addition, NVIDIA named the winners of the NVIDIA and ModDB RTX Remix Mod Contest. Check out the winners and finalist RTX mods in the RTX Remix GeForce article.
    G-Assist Gets Smarter, Expands to More RTX PCs
    The modern PC is a powerhouse, but unlocking its full potential means navigating a complex maze of settings across system software, GPU and peripheral utilities, control panels and more.
    Project G-Assist is a free, on-device AI assistant built to cut through that complexity. It acts as a central command center, providing easy access to functions previously buried in menus through voice or text commands. Users can ask the assistant to:

    Run diagnostics to optimize game performance
    Display or chart frame rates, latency and GPU temperatures
    Adjust GPU or even peripheral settings, such as keyboard lighting

    The G-Assist update also introduces a new, significantly more efficient AI model that’s faster and uses 40% less memory while maintaining response accuracy. The more efficient model means that G-Assist can now run on all RTX GPUs with 6GB or more VRAM, including laptops.
    Getting started is simple: install the NVIDIA app and the latest Game Ready Driver on Aug. 19, download the G-Assist update from the app’s home screen and press Alt+G to activate.
    Another G-Assist update coming in September will introduce support for laptop-specific commands for features like NVIDIA BatteryBoost and Battery OPS.
    Introducing the G-Assist Plug-In Hub With Mod.io
    NVIDIA is collaborating with mod.io to launch the G-Assist Plug-In Hub, which allows users to easily access G-Assist plug-ins, as well as discover and download community-created ones.
    With the mod.io plug-in, users can ask G-Assist to discover and install new plug-ins.
    With the latest update, users can also directly ask G-Assist what new plug-ins are available in the hub and install them using natural language, thanks to a mod.io plug-in.
    The recent G-Assist Plug-In Hackathon showcased the incredible creativity of the G-Assist community. Here’s a sneak peek of what they came up with:

    Some finalists include:

    Omniplay — allows gamers to use G-Assist to research lore from online wikis or take notes in real time while gaming
    Launchpad — lets gamers set, launch and toggle custom app groups on the fly to boost productivity
    Flux NIM Microservice for G-Assist — allows gamers to easily generate AI images from within G-Assist, using on-device NVIDIA NIM microservices

    The winners of the hackathon will be announced on Wednesday, Aug. 20.
    Building custom plug-ins is simple. They’re based on a foundation of JSON and Python scripts — and the Project G-Assist Plug-In Builder helps further simplify development by enabling users to code plug-ins with natural language.
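    The post doesn't spell out the plug-in interface, but to make the "JSON plus Python" idea concrete, here is a hypothetical sketch of the shape such a plug-in could take: a manifest declaring a function the assistant may call, and a Python handler that executes it. Every name below (manifest fields, function names, message format) is invented for illustration and is not the actual G-Assist API.

    ```python
    # Hypothetical G-Assist-style plug-in sketch. The real plug-in API is not
    # documented in this post; all names and fields below are invented.
    import json

    # A manifest like this would tell the assistant which commands the plug-in
    # offers, so its tool-calling model knows when to invoke them.
    MANIFEST = json.loads("""
    {
      "name": "gpu_temps",
      "description": "Report the current GPU temperature",
      "functions": [
        {"name": "get_gpu_temp", "description": "Return GPU temperature in Celsius"}
      ]
    }
    """)

    def get_gpu_temp() -> dict:
        """Handler the assistant would call; a real plug-in might query nvidia-smi."""
        temperature_c = 62  # placeholder value instead of a real sensor read
        return {"message": f"GPU temperature is {temperature_c} C"}

    # Dispatch sketch: map a tool call by name to its Python handler.
    HANDLERS = {"get_gpu_temp": get_gpu_temp}

    def handle_tool_call(name: str) -> dict:
        if name not in HANDLERS:
            return {"message": f"Unknown function: {name}"}
        return HANDLERS[name]()

    print(handle_tool_call("get_gpu_temp"))
    ```

    The split mirrors what the post describes: the JSON declares what the plug-in can do in a form a language model can reason about, while the Python does the actual work.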
    Mod It Like It’s Hot With RTX Remix 
    Classic PC games remain beloved for their unforgettable stories, characters and gameplay — but their dated graphics can be a barrier for new and longtime players.
    NVIDIA RTX Remix enables modders to revitalize these timeless titles with the latest NVIDIA gaming technologies — bridging nostalgic gameplay with modern visuals.
    Since the platform’s release, the RTX Remix modding community has grown with over 350 active projects and over 100 mods released. The mods span a catalog of beloved games like Half-Life 2, Need for Speed: Underground, Portal 2 and Deus Ex — and have amassed over 2 million downloads.

    In May, NVIDIA invited modders to participate in the NVIDIA and ModDB RTX Remix Mod Contest for a chance to win $50,000 in cash prizes. At Gamescom, NVIDIA announced the winners:

    Best Overall RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams
    Best Use of RTX in a Mod Winner: Painkiller RTX Remix, by Binq_Adams

    Runner-Up: Vampire: The Masquerade – Bloodlines – RTX Remaster, by Safemilk

    Most Complete RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams

    Runner-Up: I-Ninja Remixed, by g.i.george333

    Community Choice RTX Mod Winner: Call of Duty 2 RTX Remix of Carentan, by tadpole3159

    These modders tapped RTX Remix and generative AI to bring their creations to life — from enhancing textures to quickly creating images and 3D assets.
    For example, the Merry Pencil Studios modder team used a workflow that seamlessly connected RTX Remix and ComfyUI, allowing them to simply select textures in the RTX Remix viewport and, with a single click in ComfyUI, restore them.
    The results are stunning, with each texture meticulously recreated with physically based materials layered with grime and rust. With a fully path-traced lighting system, the game’s gothic horror atmosphere has never felt more immersive to play through.
    All mods submitted to the RTX Remix Modding Contest, as well as 100 more Remix mods, are available to download from ModDB. For a sneak peek at RTX Remix projects under active development, check out the RTX Remix Showcase Discord server.
    Another RTX Remix update coming in September will allow modders to create new particles that match the look of those found in modern titles. This opens the door for over 165 RTX Remix-compatible games to have particles for the first time.
    To get started creating RTX mods, download NVIDIA RTX Remix from the home screen of the NVIDIA app. Read the RTX Remix article to learn more about the contest and winners.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Join NVIDIA’s Discord server to connect with community developers and AI enthusiasts for discussions on what’s possible with RTX AI.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    #new #lightweight #model #project #gassist
  • Hey everyone, as the saying goes: "all that glitters is not gold". That's the truth about microservices!

    The new article "The microservices fallacy - Part 4" discusses well-known myths in the tech world, especially the myth that microservices give you reusability and team autonomy. Everyone thinks they can save time and effort, but the reality isn't always like that.

    From my experience, I struggled with this notion: I thought everything would get easier, but reality turned out differently!

    The article raises ideas worth pondering on this topic, and as the saying goes: "wisdom is the finest ornament".

    https://ufried.com/blog/microservices_fallacy_4_reusability_autonomy/
    #microservices #technology #reusability #autonomy #fallacies