Engineering Blog

Migrating from Kubernetes Ingress to Gateway API: A Zero-Downtime Production Success Story

Kubernetes networking has reached a critical inflection point. For years, the Ingress API together with controllers like NGINX Ingress powered most production workloads. That chapter is now closing.

As of March 2026, the widely used open-source NGINX Ingress Controller (ingress-nginx) is retired, with no further releases or security updates. Organizations still running it face growing security and compliance risks.

One team recently completed a full production migration to the Kubernetes Gateway API and shared the complete journey — including architectural decisions, implementation details, and a flawless zero-downtime cutover — in a new YouTube video.

Here’s what they accomplished and how you can follow their lead.

Why Migrate Now?

The decision was driven by hard realities rather than hype:

  • Security: NGINX Ingress no longer receives patches, making it unacceptable for production use.
  • Separation of Concerns: Gateway API provides a clear boundary between infrastructure operators (who manage Gateway and GatewayClass) and application teams (who manage HTTPRoute, TLSRoute, etc.).
  • Modern Protocol Support: Native, first-class support for WebSockets, gRPC, TCP, and UDP — eliminating “annotation hell.”
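That operator/developer split is visible directly in the resources. A minimal sketch (names and namespaces are illustrative): the platform team owns the Gateway and decides which namespaces may attach routes, while an application team binds an HTTPRoute to it from its own namespace.

```yaml
# Owned by the infrastructure/platform team
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: envoy-gateway    # provided by the chosen implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                    # let application namespaces attach routes
---
# Owned by an application team, in its own namespace
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-app
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  rules:
  - backendRefs:
    - name: web-app
      port: 8080
```

RBAC follows the same boundary: app teams get write access to HTTPRoute in their namespace and nothing else.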

Choosing the Right Implementation

After evaluating Traefik, Istio, and Envoy Gateway, the team selected Envoy Gateway for several compelling reasons:

  • Strongest Gateway API conformance scores
  • Excellent native WebSocket support
  • Clean, declarative TLS termination
  • Seamless integration with their existing cloud load balancer
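Wiring Envoy Gateway in is mostly declarative. A sketch of the GatewayClass plus an HTTPS listener terminating TLS; the controller name is Envoy Gateway's default, and the certificate Secret name is an assumption for illustration.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge
spec:
  gatewayClassName: envoy-gateway
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: edge-cert   # kubernetes.io/tls Secret, assumed to exist
```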

Side-by-Side Comparison

| Feature | NGINX Ingress (Legacy) | Envoy Gateway |
| --- | --- | --- |
| Configuration Style | Heavy annotations | Native Gateway API objects |
| Team Responsibilities | Monolithic | Clear split (Infrastructure vs App) |
| Protocol Support | Annotation/plugin-based | Native (HTTP, gRPC, WebSocket, TCP…) |
| Security & RBAC | Broad permissions | Granular, secure-by-default |
| Conformance | N/A | Excellent core conformance |
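The "annotation hell" row is concrete: under ingress-nginx, switching a backend to gRPC meant a controller-specific annotation, while in Gateway API gRPC is a first-class route type. A sketch of the contrast (resource names are illustrative):

```yaml
# Legacy: behavior hidden in a controller-specific annotation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-app
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
---
# Gateway API: a dedicated, portable route type, no annotations
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: grpc-app
spec:
  parentRefs:
  - name: edge
  rules:
  - backendRefs:
    - name: grpc-app
      port: 9000
```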

How They Executed the Migration

The team followed a low-risk, high-confidence approach:

  1. Automated Conversion: used the official ingress2gateway tool to translate existing Ingress resources into Gateway API manifests.
  2. Side-by-Side Deployment: deployed Envoy Gateway alongside the old NGINX controller, with both pointing at the same load balancer IP.
  3. Progressive Validation: gradually shifted dev and then production traffic while monitoring performance and error rates.
  4. Zero-Downtime Cutover: the final switch was a single DNS record update. Because both systems shared the same IP address, traffic transitioned seamlessly with zero downtime and no risk of blackholing.
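The conversion and side-by-side steps can be sketched as follows. ingress2gateway is the kubernetes-sigs tool; the flags shown follow its documented usage, and the file name is illustrative.

```shell
# Translate existing Ingress resources in the cluster into Gateway API manifests
ingress2gateway print --providers ingress-nginx > gateway-manifests.yaml

# Review the output, then apply it alongside the existing controller
kubectl apply -f gateway-manifests.yaml

# Watch both stacks while traffic shifts
kubectl get gateways,httproutes -A
```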

Key Lessons Learned

  • Start with the ingress2gateway conversion tool; it handles 80-90% of the work.
  • Leverage your existing load balancer for a safe side-by-side period.
  • Test WebSocket and gRPC early — these are common gotchas.
  • The shared-IP strategy makes DNS cutover remarkably safe.
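Before flipping DNS, it is worth verifying the shared-IP assumption and the tricky protocols directly. A quick sketch (hostnames and the `/ws` path are placeholders):

```shell
# Both the legacy and new entry points should resolve to the same address
dig +short legacy.example.com
dig +short gateway.example.com

# Spot-check a WebSocket upgrade through the new path before cutover
curl -i --http1.1 \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Sec-WebSocket-Version: 13" \
  https://gateway.example.com/ws
```

A `101 Switching Protocols` response from the new stack is the signal that the WebSocket path is safe to cut over.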

The Clock Is Ticking

If you’re still running legacy Ingress, the comfortable window for a calm transition is closing fast. The tools are mature, the patterns are battle-tested, and the security risks of staying on an unsupported controller are growing daily.

Gateway API is no longer experimental — it is the new standard for Kubernetes traffic management.

Watch the full technical deep-dive here: Migration from Ingress to Gateway API

The video includes exact commands, real YAML examples, monitoring insights, and the step-by-step zero-downtime DNS cutover. Highly recommended for any team planning this migration.

Have you started your Gateway API migration yet? What challenges are you facing? Share your thoughts in the comments below.
