Engineering Blog



Optimize Your DevOps: Explore Codefresh’s Powerful New Tools

Date: June 11 | Time: 9-11 am PDT

Are you exhausted from endlessly writing scripts, configuring CI/CD pipelines, and copying and pasting to promote changes between environments? Codefresh has a solution that will revolutionize your workflow. We are thrilled to introduce Environments and Products, designed to simplify and streamline the process of promoting changes across…

DORA Metrics: Your Path to Agile Excellence

Date: May 29 | Time: 2-3 pm PST | Presenters: Paul O’Reilly, Nathen Harvey

In the world of software development, DORA Metrics have become essential for measuring and improving performance. These metrics, developed by the DevOps Research and Assessment (DORA) team, consist of four key indicators: Business Benefits of DORA Metrics Practical Applications Why DORA…
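The four DORA indicators are deployment frequency, lead time for changes, change failure rate, and time to restore service. As a rough illustration of how they are computed, here is a minimal Python sketch over hand-made deployment records (the data and field layout are invented for the example):

```python
from datetime import datetime

# Invented deployment records for illustration:
# (deployed_at, commit_time, caused_failure, hours_to_restore)
deployments = [
    (datetime(2024, 5, 1), datetime(2024, 4, 30), False, 0.0),
    (datetime(2024, 5, 3), datetime(2024, 5, 2), True, 2.5),
    (datetime(2024, 5, 6), datetime(2024, 5, 5), False, 0.0),
    (datetime(2024, 5, 8), datetime(2024, 5, 7), False, 0.0),
]
days_observed = 28  # length of the measurement window

# 1. Deployment frequency: deployments per day over the window
deploy_frequency = len(deployments) / days_observed

# 2. Lead time for changes: mean commit-to-deploy delay in hours
lead_times = [(dep - commit).total_seconds() / 3600
              for dep, commit, _, _ in deployments]
mean_lead_time_hours = sum(lead_times) / len(lead_times)

# 3. Change failure rate: share of deployments that caused a failure
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)

# 4. Time to restore service: mean restore time over failed deployments
mttr_hours = sum(d[3] for d in failures) / len(failures)

print(deploy_frequency, mean_lead_time_hours, change_failure_rate, mttr_hours)
```

Real pipelines would pull these records from your CI/CD and incident tooling, but the arithmetic stays this simple.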

GreptimeDB v0.8: Unleash the Power of Continuous Aggregation with the Flow Engine

GreptimeDB takes a significant leap forward with the release of v0.8, introducing the Flow Engine! This feature empowers you to perform real-time, stream-based aggregation computations, unlocking valuable insights from your time-series data.

Continuous Aggregation Made Simple
The Flow Engine streamlines how you analyze your data. It allows you to continuously calculate and materialize…
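Under the hood, continuous aggregation means updating per-window state as each event arrives rather than re-scanning history on every query. The toy Python class below sketches that idea for a windowed average; it is a conceptual illustration only, not GreptimeDB's actual interface, which is driven by SQL:

```python
from collections import defaultdict

class ContinuousAvg:
    """Toy continuous aggregation: update per-window state on every event
    instead of re-scanning history on every query.
    (Conceptual sketch only; not the GreptimeDB Flow Engine API.)"""

    def __init__(self, window_secs: int):
        self.window = window_secs
        self.state = defaultdict(lambda: [0.0, 0])  # window_start -> [sum, count]

    def ingest(self, ts: int, value: float) -> None:
        start = ts - ts % self.window  # align the timestamp to its window
        acc = self.state[start]
        acc[0] += value
        acc[1] += 1

    def materialize(self) -> dict:
        # The "materialized view": per-window averages, always current
        return {w: s / c for w, (s, c) in sorted(self.state.items())}

agg = ContinuousAvg(window_secs=60)
for ts, value in [(0, 10.0), (30, 20.0), (65, 30.0)]:
    agg.ingest(ts, value)
print(agg.materialize())  # {0: 15.0, 60: 30.0}
```

Each event touches only its own window's running sum and count, which is why the materialized result stays cheap to keep current as data streams in.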

Fast and Secure: Building Strong AppSec in Your Existing Development Workflow

Is your rush to market leaving your software vulnerable? In today’s fast-paced world, the pressure to get software out the door quickly can sometimes overshadow critical security considerations. This can lead to a software supply chain riddled with vulnerabilities, exposing your applications to internal and external threats. Join this insightful webinar on May 30, 3:15-5:15,…

Scaling AI/ML Infrastructure at Uber
AI

Scaling AI/ML Infrastructure at Uber: Optimizing for Efficiency and Growth

This blog post explores Uber’s journey in scaling its AI/ML infrastructure to support a rapidly evolving landscape of applications. As Uber’s models have grown in complexity, from XGBoost to deep learning and generative AI, the need for efficient and adaptable infrastructure has become crucial. Optimizing…

Powering Up Generative AI with Real-Time Streaming
AI

This blog post dives into the world of generative AI and how real-time streaming data enhances its capabilities. Generative AI models, like large language models (LLMs), excel at tasks such as text generation, chatbots, and summarization. However, they traditionally rely on static datasets, limiting their adaptability to ever-changing information.

Why Streaming Data Matters
Streaming data…
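One common way to bridge a static model and a live stream is to keep only the freshest events in a bounded buffer and fold them into the prompt at query time. The sketch below is an invented illustration of that pattern; the event format and class name are made up, and the actual model call is omitted:

```python
from collections import deque

class StreamingContext:
    """Keep only the freshest N events from a stream and fold them into
    the prompt, so a static LLM answers with up-to-date context.
    (Illustrative sketch; the model call itself is omitted.)"""

    def __init__(self, max_events: int = 3):
        self.events = deque(maxlen=max_events)  # old events fall off automatically

    def ingest(self, event: str) -> None:
        self.events.append(event)

    def build_prompt(self, question: str) -> str:
        context = "\n".join(self.events)
        return f"Recent events:\n{context}\n\nQuestion: {question}"

ctx = StreamingContext(max_events=2)
for event in ["price=100", "price=105", "price=99"]:
    ctx.ingest(event)
prompt = ctx.build_prompt("What is the latest price?")
print(prompt)  # only the two newest events survive in the context
```

Production systems replace the deque with a streaming platform and retrieval layer, but the principle is the same: the model stays fixed while its context tracks the stream.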

Embrace Agility: Building Flexible Data Workflows with Portable ETL

Imagine a world where ETL pipelines run seamlessly across any environment – from robust servers to resource-constrained edge devices. This is the future promised by portable ETL, a revolutionary approach that prioritizes flexibility and adaptability. This blog post explores the limitations of traditional ETL frameworks and how portable ETL empowers data teams to: Beyond Traditional…
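In code, "portable" mostly means the pipeline logic is pure and environment-agnostic, with pluggable source and sink adapters around it. A minimal Python sketch of that separation, with all names invented for illustration:

```python
from typing import Iterable, Iterator

Record = dict

def extract_from_memory(rows: Iterable[Record]) -> Iterator[Record]:
    # Stand-in source adapter; a server deployment might read a database,
    # an edge deployment a local file. The pipeline below doesn't care.
    yield from rows

def transform(rows: Iterable[Record]) -> Iterator[Record]:
    # Pure, environment-agnostic business logic: drop nulls, double values.
    for r in rows:
        if r["value"] is not None:
            yield {**r, "value": r["value"] * 2}

def load_to_list(rows: Iterable[Record], sink: list) -> None:
    # Stand-in sink adapter; swap for a warehouse or message queue writer.
    sink.extend(rows)

def run_pipeline(source_rows: Iterable[Record], sink: list) -> None:
    # Lazy generator chaining keeps memory use flat, which matters on
    # resource-constrained edge devices.
    load_to_list(transform(extract_from_memory(source_rows)), sink)

out: list = []
run_pipeline([{"id": 1, "value": 3}, {"id": 2, "value": None}], out)
print(out)  # [{'id': 1, 'value': 6}]
```

Because the transform never imports anything environment-specific, the same pipeline definition can be shipped to a beefy server or a tiny edge box with only the adapters swapped.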

Effortless Kubernetes Management: Akamai Now Supports Cluster API

Exciting news for developers and IT professionals who leverage Kubernetes! Akamai recently announced support for the Kubernetes Cluster API via CAPL (Cluster API Provider Linode), streamlining cluster creation, configuration, and management.

What is Cluster API (CAPI)?
CAPI, pronounced “kappy,” is an open-source project that introduces a declarative approach to managing Kubernetes clusters. Similar to Infrastructure as Code (IaC)…
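To make the declarative approach concrete: a CAPI `Cluster` object describes the desired cluster and delegates provisioning to a provider-specific infrastructure resource. The manifest below is a shape sketch, not a copy-paste example; the names are illustrative and the provider resource's exact `apiVersion` varies by CAPL release:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster            # illustrative name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.192.0.0/12"]
  # infrastructureRef delegates machine and network provisioning to a
  # provider-specific resource; with CAPL this is a LinodeCluster.
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1   # varies by CAPL release
    kind: LinodeCluster
    name: demo-cluster
```

You declare the desired state, apply it, and the Cluster API controllers reconcile real infrastructure to match, exactly as IaC tools reconcile cloud resources.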

LLM Development Unplugged: A Practical Guide from Code to Deployment

Date: June 5 | Time: 1:00 PM ET / 17:00 UTC | Presenters: Sebastian Raschka, Marlene Mhangami

Introduction: Join us for an insightful talk that will walk you through the crucial stages involved in developing large language models (LLMs). Whether you’re a seasoned developer, a data scientist, or simply curious about how these powerful AI models are…

Stay Ahead with Kubernetes: Master Cost and Compliance with Kyverno and Kubecost

Date: May 30 | Time: 9:00-10:00 AM PDT / 12:00 noon-1:00 PM EDT / 4:00-5:00 PM GMT

In the fast-evolving world of cloud-native applications, managing Kubernetes environments efficiently and cost-effectively is crucial. Our upcoming webinar is designed to guide you through leveraging two powerful tools, Kyverno and Kubecost, running…
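As a taste of how the two tools can meet: Kubecost attributes spend using resource labels, and a Kyverno policy can require those labels at admission time. The policy below is an illustrative sketch (the policy and label names are invented) using Kyverno's validate-pattern style:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label      # illustrative policy name
spec:
  validationFailureAction: Enforce   # reject non-compliant resources
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Pods must carry a 'team' label so cost can be attributed."
        pattern:
          metadata:
            labels:
              team: "?*"        # any non-empty value
```

With labels enforced cluster-wide, cost tooling can break spend down by team instead of reporting one opaque cluster bill.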

Containerized Offline Workflows: Leveraging Kubernetes for Distributed Argo Workflows

This blog post explores how Kubernetes clusters equipped with Argo Workflows can be leveraged for orchestrating offline tasks and batch computing jobs in a cloud-native way.

Traditional Batch Computing vs. Kubernetes Clusters
While mainstream batch computing systems offer solutions for managing and executing batch jobs, they often require users to: Kubernetes Clusters for Distributed Argo…
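For a concrete picture, an Argo `Workflow` resource declares a batch job as steps of containerized tasks, and Kubernetes schedules the resulting pods. This minimal fan-out sketch (the image and shard count are illustrative) runs four shards with capped parallelism:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: batch-job-      # Argo appends a random suffix
spec:
  entrypoint: main
  parallelism: 4                # cap on concurrently running pods
  templates:
    - name: main
      steps:
        - - name: process-shard
            template: process
            arguments:
              parameters:
                - name: shard
                  value: "{{item}}"
            withItems: [0, 1, 2, 3]   # fan out one pod per shard
    - name: process
      inputs:
        parameters:
          - name: shard
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo processing shard {{inputs.parameters.shard}}"]
```

Each shard becomes its own pod, so the cluster's scheduler handles placement, retries, and resource limits without a separate batch system.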

AI Infrastructure Mastery: Collaborative Approaches for Success
AI

Navigating the Challenges of Rapid AI/ML Advancements

Designing a unified AI/ML system amidst the rapid advancements in applications and models, such as XGBoost, deep learning recommendation models, and large language models (LLMs), presents significant challenges. Each type of model has unique demands: LLMs require high compute throughput (TFLOPS), while deep learning models often encounter memory…