Program an OpenAPI (Swagger) Spec on the Gateway in 30 Seconds
30 seconds to OpenAPI automation on the Enroute Standalone Gateway
enroutectl openapi --openapi-spec petstore.json --to-standalone-url http://localhost:1323/
Flatten the learning curve to run Envoy in your enterprise.
Up and Running in less than a minute
curl -X POST "http://localhost:1323/service" \
  -d 'Service_Name=openapi-enroute' \
  -d 'Fqdn=saaras.io'

curl -X POST "http://localhost:1323/service/openapi-enroute/route" \
  -d 'Route_Name=root-slash' \
  -d 'Route_prefix=/'

curl -X POST "http://localhost:1323/upstream" \
  -d 'Upstream_name=openapi-upstream' \
  -d 'Upstream_ip=openapi.example.com' \
  -d 'Upstream_port=9001' \
  -d 'Upstream_hc_path=/' \
  -d 'Upstream_weight=100'
Stateful or Stateless
Run with a state store or completely stateless. Use state from Git or CRDs to drive Enroute.
Extend using Global HTTP Filters and Route Filters
Like Envoy, Enroute has a flexible architecture that allows adding functionality through global HTTP filters and per-route filters.
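As a rough sketch of what attaching a per-route Lua filter might look like when Enroute runs inside Kubernetes: the `apiVersion`, `kind`, and field names below are illustrative assumptions, not the verified Enroute CRD schema (the Lua callbacks themselves are standard Envoy Lua filter API).

```yaml
# Illustrative sketch only: CRD kind, apiVersion, and spec fields are
# assumptions; consult the Enroute documentation for the actual schema.
apiVersion: enroute.saaras.io/v1
kind: RouteFilter
metadata:
  name: add-header-lua
  namespace: default
spec:
  name: add-header-lua
  type: route_filter_lua
  routeFilterConfig:
    config: |
      -- Standard Envoy Lua filter callback: add a response header
      function envoy_on_response(response_handle)
        response_handle:headers():add("x-served-by", "enroute")
      end
```

A filter of this shape would then be referenced from a route so it applies only to that route's traffic, while a global HTTP filter would apply to all traffic through the listener.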
Are you paying too much for your API Gateway? Enroute is open source, with a Community Edition that includes premium features.
Enroute is open source, and the Community Edition ships with premium features such as OpenAPI spec support, advanced rate limiting (per-user, for both authenticated and unauthenticated users), canary deployments, and many more.
Enroute is built on Envoy Proxy and provides the raw performance needed by APIs that demand low latency and high throughput.
Digital transformation is a key initiative in organizations to meet business requirements. This need is driving cloud adoption with a more self-serve, DevOps-driven approach. Applications and microservices run in Kubernetes and in public/private clouds with an automated continuous delivery pipeline.
As applications undergo this change, traditional API gateways are retrofitted to meet the changing requirements. This has resulted in multiple API gateways and different solutions, each of which works only for a subset of use cases. An API gateway that works for traditional as well as new cloud-native use cases is critical as an application undergoes this transition.
Enroute is built from the ground up to support both traditional and cloud-native use cases. Enroute can be deployed in multiple topologies to meet the demands of today's applications, regardless of where an application is in its cloud journey. The same gateway can be deployed outside Kubernetes in standalone mode for traditional use cases, as a Kubernetes ingress controller, or inside Kubernetes alongside services.
Enroute's powerful API brings automation and enables developer operations to treat infrastructure as code. Enroute natively supports advanced rate limiting and Lua scripting as extensible filters, or plugins, that can be attached either to per-route traffic or to all traffic. The Enroute API Gateway control plane drives one or many stateless data planes built using Envoy Proxy.
In today's API economy, it is vital that all APIs are protected from abuse. Rate limiting is a fundamental requirement for today's APIs.
Enroute Universal Gateway includes advanced rate limiting for any service, on any platform, in any cloud.
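To make "per-user" rate limiting concrete, here is a hedged sketch of what a rate-limit filter configuration might look like; the CRD kind, field names, and descriptor format below are assumptions for illustration (the descriptor idea itself follows Envoy's rate-limit service model, where request attributes such as a user header are matched against configured limits).

```yaml
# Illustrative sketch only: kind, apiVersion, and config format are
# assumptions; see the Enroute rate-limiting documentation for specifics.
apiVersion: enroute.saaras.io/v1
kind: HttpFilter
metadata:
  name: per-user-ratelimit
spec:
  name: per-user-ratelimit
  type: http_filter_ratelimit
  httpFilterConfig:
    config: |
      {
        "descriptors": [
          {
            "request_headers": { "header_name": "x-user-id", "descriptor_key": "user" },
            "rate_limit": { "unit": "minute", "requests_per_unit": 60 }
          }
        ]
      }
```

The intent of such a configuration is that requests carrying distinct `x-user-id` values are counted and limited independently, which is what distinguishes per-user limits from a single global cap.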
The Enroute gateway can run as a Kubernetes ingress gateway or as a standalone gateway for services outside Kubernetes.
Low latency and High throughput
Configure Enroute using CRDs in Kubernetes, or use the simple REST API on the control plane to configure multiple stateless data planes.
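A CRD-driven configuration mirroring the REST calls in the standalone example above (service `openapi-enroute`, route `/`, upstream on port 9001) might look roughly like this; the `apiVersion`, `kind`, and field layout are assumptions sketched for illustration, not the verified schema.

```yaml
# Illustrative sketch only: verify kind and fields against the Enroute CRDs.
apiVersion: enroute.saaras.io/v1
kind: GatewayHost
metadata:
  name: openapi-enroute
  namespace: default
spec:
  virtualhost:
    fqdn: saaras.io
  routes:
    - conditions:
        - prefix: /
      services:
        - name: openapi-upstream
          port: 9001
```

Either path, REST API or CRD, feeds the same control plane, so teams can keep configuration in Git and apply it declaratively, or script it imperatively against the API.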
Enroute's extensible filter architecture provides fine-grained control to add functionality at the global level or on a per-route basis.
Bridge to cloud-native
Enroute is the only gateway that can run either as a Kubernetes ingress gateway or in standalone mode. This provides architectural flexibility while transitioning to microservices or during cloud adoption.
Advanced rate limiting, canary deployments, and programmability
Enroute supports advanced per-route rate limiting, canary deployments, and Lua scripting.