Engineering Blog


Scaling AI/ML Infrastructure at Uber
AI

Scaling AI/ML Infrastructure at Uber: Optimizing for Efficiency and Growth. This blog post explores Uber’s journey in scaling its AI/ML infrastructure to support a rapidly evolving landscape of applications. As Uber’s models have grown in complexity, from XGBoost to deep learning and generative AI, the need for efficient, adaptable infrastructure has become crucial. Optimizing…

Powering Up Generative AI with Real-Time Streaming
AI

This blog post dives into the world of generative AI and how real-time streaming data enhances its capabilities. Generative AI models, like large language models (LLMs), excel at tasks such as text generation, chatbots, and summarization. However, they traditionally rely on static datasets, limiting their adaptability to ever-changing information. Why Streaming Data Matters: Streaming data…

Embrace Agility: Building Flexible Data Workflows with Portable ETL

Imagine a world where ETL pipelines run seamlessly across any environment, from robust servers to resource-constrained edge devices. This is the future promised by portable ETL, an approach that prioritizes flexibility and adaptability. This blog post explores the limitations of traditional ETL frameworks and how portable ETL empowers data teams to: Beyond Traditional…

Effortless Kubernetes Management: Akamai Now Supports Cluster API

Exciting news for developers and IT professionals who leverage Kubernetes! Akamai recently announced support for the Kubernetes Cluster API through its new provider, CAPL (Cluster API Provider Linode), streamlining cluster creation, configuration, and management. What is Cluster API (CAPI)? CAPI, pronounced “kappy,” is an open-source project that introduces a declarative approach to managing Kubernetes clusters. Similar to Infrastructure as Code (IaC)…
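Under CAPI’s declarative model, a cluster itself becomes a Kubernetes resource. As a hedged sketch (the name and CIDR are illustrative, and a real manifest would also reference provider-specific `infrastructureRef` and `controlPlaneRef` objects, omitted here), a minimal Cluster definition looks roughly like:

```yaml
# Illustrative CAPI Cluster resource (abbreviated).
# Provider-specific infrastructure and control-plane
# references are omitted for brevity.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.244.0.0/16"]
```

Applying a manifest like this lets the Cluster API controllers reconcile the desired cluster state, much as a Deployment controller reconciles Pods.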

LLM Development Unplugged: A Practical Guide from Code to Deployment

Date: June 5 | Time: 1:00 PM ET / 17:00 UTC | Presenters: Sebastian Raschka, Marlene Mhangami. Introduction: Join us for an insightful talk that will walk you through the crucial stages involved in developing large language models (LLMs). Whether you’re a seasoned developer, a data scientist, or simply curious about how these powerful AI models are…

Stay Ahead with Kubernetes: Master Cost and Compliance with Kyverno and Kubecost

Date: May 30 | Time: 9:00 – 10:00 AM PDT / 12:00 noon – 1:00 PM EDT / 4:00 – 5:00 PM GMT. In the fast-evolving world of cloud-native applications, managing Kubernetes environments efficiently and cost-effectively is crucial. Our upcoming webinar is designed to guide you through leveraging two powerful tools, Kyverno and Kubecost, running…

Containerized Offline Workflows: Leveraging Kubernetes for Distributed Argo Workflows

This blog post explores how Kubernetes clusters equipped with Argo Workflows can orchestrate offline tasks and batch computing jobs in a cloud-native way. Traditional Batch Computing vs. Kubernetes Clusters: While mainstream batch computing systems offer solutions for managing and executing batch jobs, they often require users to: Kubernetes Clusters for Distributed Argo…
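To make the idea concrete, here is a hedged sketch of a minimal Argo Workflow manifest (the image and command are placeholders, not from the post) that runs a single containerized batch step:

```yaml
# Minimal Argo Workflow: one containerized batch step.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: batch-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.19          # placeholder batch image
        command: [sh, -c]
        args: ["echo 'processing batch partition'"]
```

Submitted with `argo submit`, the workflow controller runs the step as a Pod, so batch jobs inherit Kubernetes scheduling, retries, and scaling for free.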

AI Infrastructure Mastery: Collaborative Approaches for Success
AI

Navigating the Challenges of Rapid AI/ML Advancements: Designing a single AI/ML system amid rapid advances in applications and models, such as XGBoost, deep learning recommendation models, and large language models (LLMs), presents significant challenges. Each type of model has unique demands: LLMs require high compute throughput (TFLOPS), while deep learning models often encounter memory…