gRPC load balancing using Enroute API Gateway

gRPC

gRPC is a remote procedure call framework open-sourced by Google. It is an extremely popular alternative to REST because of its performance benefits and flexible design.

Envoy support for gRPC

Envoy has first-class support for gRPC. It can proxy gRPC traffic and load-balance it across multiple upstreams.

The Enroute gateway is built on the Envoy proxy and simplifies configuring Envoy as an API gateway. In the steps below, we demonstrate how easy it is to configure gRPC load balancing. Once set up, canary deployments can also be achieved across services. See the cookbook article on Canary Deployments for details.

For more details on the architecture of Enroute, check out the FAQ section.

Enroute Universal Gateway

The Enroute Universal Gateway is a flexible API gateway built to support both traditional and cloud-native use cases. It is designed to run as a Kubernetes Ingress Gateway, a Standalone Gateway, a horizontally scaling L7 API gateway, or a Mesh of Gateways, so it can support a wide range of topologies. Depending on the needs of the user, the environment, and the application, one or many of these solutions can be deployed.

A consistent policy framework across all these network components makes the Enroute Universal Gateway a versatile and powerful solution.

This article covers how to get started with the Enroute Standalone Gateway.

To get a more detailed understanding of Enroute Universal Gateway and its architecture, refer to the article here

What this article covers

This article covers how Enroute can be run standalone, without a Kubernetes cluster. It is an API gateway built on top of the cloud-native Envoy proxy, and its configuration model is consistent with the Enroute Kubernetes Ingress Gateway. The Enroute Standalone Gateway is packaged as a Docker image.

Envoy has the concepts of listeners, routes, and clusters. Listeners have one or more routes that direct traffic to clusters. Clusters have upstream servers as members, across which traffic is load-balanced.

Here we demonstrate how simple REST API calls made to the Enroute gateway configure gRPC load balancing. We create a service, routes for that service, and upstreams for those routes. These abstractions configure the listener, routes, clusters, and endpoints on the Envoy proxy. Any traffic sent to the created Envoy listener is then forwarded to one of the clusters, depending on the matching route and the weight configured for each endpoint (or upstream).
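The listener-route-cluster flow described above can be modeled in a few lines. The sketch below is illustrative only (it is not Envoy's implementation): a request path picks the route with the longest matching prefix, and an endpoint is then chosen from the cluster with probability proportional to its weight.

```python
import random

# Toy model of the Envoy abstractions configured in this article.
listener = {
    "port": 8080,
    "routes": [
        {"prefix": "/", "cluster": "demo"},
    ],
}

clusters = {
    # Each endpoint carries a load-balancing weight.
    "demo": [
        {"address": "127.0.0.1:50051", "weight": 100},
    ],
}

def match_route(path):
    """Pick the route with the longest matching prefix."""
    candidates = [r for r in listener["routes"] if path.startswith(r["prefix"])]
    return max(candidates, key=lambda r: len(r["prefix"]), default=None)

def pick_endpoint(cluster_name, rng=random):
    """Choose an endpoint with probability proportional to its weight."""
    endpoints = clusters[cluster_name]
    weights = [e["weight"] for e in endpoints]
    return rng.choices(endpoints, weights=weights, k=1)[0]

route = match_route("/echo/HelloWorld")
endpoint = pick_endpoint(route["cluster"])
```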

The subsequent steps demonstrate the Enroute gateway abstractions and the API calls used to achieve this.

Getting started with the Enroute Standalone API Gateway in less than a minute

This section demonstrates how to quickly set up and run a simple example with the Standalone Gateway. The quickstart example is also covered in more detail below.

Download the quickstart bash script

curl -O https://raw.githubusercontent.com/saarasio/gettingstarted/master/standalone/gs
Topology for this example

Simple topology standalone

Start Enroute Standalone API Gateway
docker run --net=host saarasio/enroute-gw:v0.4.1
Create config on the standalone gateway: set up service, route, upstream, filters and globalconfig
./gs create-grpc
Start server
./gs start-server-grpc
Send some traffic
./gs send-grpc-traffic
View config
./gs show
Delete config
./gs delete

Setting up gRPC load balancing

In the next few steps, we show how easy it is to configure gRPC load balancing using the Enroute Gateway. Note that the same result can be achieved by running the Enroute data plane and control plane separately.

At a high level, we’ll perform the following steps -

  • Run the Enroute gateway (enroute-gw)
  • Create service demo
  • Create route with prefix /
  • Create upstream attached to this route
  • The above steps configure the Envoy proxy with the configuration provided to the Enroute control plane
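The steps above map one-to-one onto REST calls against the controller's API port (1323). As a minimal sketch, the hypothetical helper below builds, in order, the (method, path, form-data) tuples that the curl commands in the following sections issue; nothing is sent on the wire here.

```python
# Hypothetical planner: lists the REST calls made in the sections below.
# The endpoint paths and form fields mirror the curl commands in this article.
def plan_calls(service="demo", route="gs_route", upstream="server1"):
    return [
        ("POST", "/proxy", {"Name": "gw"}),
        ("POST", "/service", {"Service_Name": service, "fqdn": "enroute.local"}),
        ("POST", f"/proxy/gw/service/{service}", {}),
        ("POST", f"/service/{service}/route",
         {"Route_Name": route, "Route_prefix": "/"}),
        ("POST", "/upstream",
         {"Upstream_name": upstream, "Upstream_ip": "127.0.0.1",
          "Upstream_port": "50051", "Upstream_hc_path": "/",
          "Upstream_protocol": "grpc", "Upstream_weight": "100"}),
        ("POST", f"/service/{service}/route/{route}/upstream/{upstream}", {}),
    ]
```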

Start Enroute GW enroute-gw

The gateway is packaged as a docker image that can be run using the following command -

sudo docker run --net=host saarasio/enroute-gw:v0.4.1

This starts the gateway in host network mode. The following ports are of interest -

  • 1323 - REST API port. Used to create state on the controller
  • 8080 - Listener port for http setup
  • 8443 - Listener port for https setup

Create Proxy, Service, Route, Upstream, Filter using APIs

We use the Enroute API to perform the following tasks -

The Enroute standalone gateway expects a one-time creation of a proxy object with the name gw:

Create Proxy
$ curl -s -X POST "http://localhost:1323/proxy" -d 'Name=gw' | jq
{
    "name": "gw"
}
Create Service and attach it to the proxy
$ curl -s -X POST "http://localhost:1323/service" -d 'Service_Name=demo' -d 'fqdn=enroute.local' | jq
{
    "data": {
        "insert_saaras_db_service": {
            "affected_rows": 1
        }
    }
}

$ curl -s -X POST "http://localhost:1323/proxy/gw/service/demo" | jq
{
    "data": {
        "insert_saaras_db_proxy_service": {
            "affected_rows": 3
        }
    }
}
Create Route
$ curl -s -X POST "http://localhost:1323/service/demo/route"	\
    -d 'Route_Name=gs_route'                                    \
    -d 'Route_prefix=/' | jq
{
    "data": {
        "insert_saaras_db_route": {
            "affected_rows": 2
        }
    }
}
Create Upstream and associate it with the route
$ curl -s -X POST "http://localhost:1323/upstream" \
    -d 'Upstream_name=server1'                     \
    -d 'Upstream_ip=127.0.0.1'                     \
    -d 'Upstream_port=50051'                       \
    -d 'Upstream_hc_path=/'                        \
    -d 'Upstream_protocol=grpc'                    \
    -d 'Upstream_weight=100' | jq
{
    "data": {
        "insert_saaras_db_upstream": {
            "affected_rows": 1
        }
    }
}

$ curl -s -X POST "http://localhost:1323/service/demo/route/gs_route/upstream/server1" | jq
{
    "data": {
        "insert_saaras_db_route_upstream": {
            "affected_rows": 4
        }
    }
}

Note that the upstream protocol is set to grpc above.
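Because each upstream carries an Upstream_weight, attaching a second upstream to the same route splits traffic proportionally between them, which is the basis for the canary deployments mentioned earlier. A minimal sketch of that proportional split (server2 and its weight of 25 are hypothetical, giving roughly an 80/20 split):

```python
from collections import Counter
import random

# Hypothetical weights: server1 keeps weight 100, a new server2 gets 25,
# so about 100/125 = 80% of requests should land on server1.
upstreams = [("server1", 100), ("server2", 25)]

def split_traffic(n, rng):
    """Simulate n weighted picks and count how many each upstream receives."""
    names = [u[0] for u in upstreams]
    weights = [u[1] for u in upstreams]
    return Counter(rng.choices(names, weights=weights, k=n))

counts = split_traffic(10000, random.Random(42))
share = counts["server1"] / 10000  # close to 0.8
```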

Create Global Lua Filter

$ curl -s -X POST localhost:1323/filter -d 'Filter_name=lua_filter_1' -d 'Filter_type=http_filter_lua' | jq
{
  "data": {
    "insert_saaras_db_filter": {
      "affected_rows": 1
    }
  }
}
Read the Lua script for the filter from a file
$ curl -X POST -F 'Config=@script.lua' http://localhost:1323/filter/lua_filter_1/config | jq
{
  "data": {
    "update_saaras_db_filter": {
      "affected_rows": 1
    }
  }
}
Show contents of the Lua script
$ cat script.lua
function envoy_on_request(request_handle)
   request_handle:logInfo("Hello World request");
end

function envoy_on_response(response_handle)
   response_handle:logInfo("Hello World response");
end
Show contents of filter
$ curl -s localhost:1323/filter/lua_filter_1 | jq
{
  "data": {
    "saaras_db_filter": [
      {
        "filter_id": 137,
        "filter_name": "lua_filter_1",
        "filter_type": "http_filter_lua",
        "filter_config": {
          "config": "function envoy_on_request(request_handle)\n   request_handle:logInfo(\"Hello World request\");\nend\n\nfunction envoy_on_response(response_handle)\n   response_handle:logInfo(\"Hello World response\");\nend\n"
        }
      }
    ]
  }
}
Attach the Lua filter to the service
$ curl -s -X POST localhost:1323/service/demo/filter/lua_filter_1 | jq
{
  "data": {
    "insert_saaras_db_service_filter": {
      "affected_rows": 3
    }
  }
}

Create Per-route rate-limit Filter

$ curl -X POST localhost:1323/filter -d Filter_name='route_rl_1' -d Filter_type='route_filter_ratelimit' | jq
{
  "data": {
    "insert_saaras_db_filter": {
      "affected_rows": 1
    }
  }
}
Read the rate-limit filter config from a file
$ curl -s -X POST localhost:1323/filter/route_rl_1/config -F 'Config=@route_rl_1.json' | jq
{
  "data": {
    "update_saaras_db_filter": {
      "affected_rows": 1
    }
  }
}
Show contents of rate limit config
$ cat route_rl_1.json
{
  "descriptors" :
  [
    {
      "generic_key":
      {
        "descriptor_value":"default"
      }
    }
  ]
}
Show contents of filter
$ curl -s localhost:1323/filter/route_rl_1 | jq
{
  "data": {
    "saaras_db_filter": [
      {
        "filter_id": 139,
        "filter_name": "route_rl_1",
        "filter_type": "route_filter_ratelimit",
        "filter_config": {
          "descriptors": [
            {
              "generic_key": {
                "descriptor_value": "default"
              }
            }
          ]
        }
      }
    ]
  }
}
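The descriptor in route_rl_1.json is what the route reports to the rate-limit machinery; the globalconfig created next must carry an entry with a matching key and value for a limit to apply. A minimal illustrative sketch of that matching, mirroring the JSON shapes used in this article:

```python
# Route-side descriptor, as in route_rl_1.json.
route_descriptor = {"generic_key": {"descriptor_value": "default"}}

# Global rate-limit entries, as in the globalconfig shown below.
global_entries = [
    {"key": "generic_key", "value": "default",
     "rate_limit": {"unit": "second", "requests_per_unit": 10}},
]

def find_limit(descriptor, entries):
    """Return the rate_limit whose key/value matches the route descriptor."""
    for key, body in descriptor.items():
        for entry in entries:
            if entry["key"] == key and entry["value"] == body["descriptor_value"]:
                return entry["rate_limit"]
    return None  # no matching entry: no limit is enforced

limit = find_limit(route_descriptor, global_entries)
```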

Attach the rate-limit filter to the route

$ curl -s -X POST localhost:1323/service/demo/route/gs_route/filter/route_rl_1 | jq
{
  "data": {
    "insert_saaras_db_route_filter": {
      "affected_rows": 4
    }
  }
}

Create GlobalConfig for rate-limit Filter

Create the globalconfig object

$ curl -s -X POST localhost:1323/globalconfig -d 'globalconfig_name=gc1' -d 'globalconfig_type=globalconfig_ratelimit' | jq
{
  "data": {
    "insert_saaras_db_globalconfig": {
      "affected_rows": 1
    }
  }
}

Read globalconfig from a json file

$ curl -s -X POST localhost:1323/globalconfig/gc1/config -F 'Config=@gc1.json' | jq
{
  "data": {
    "update_saaras_db_globalconfig": {
      "affected_rows": 1
    }
  }
}

Show globalconfig

$ curl -s localhost:1323/globalconfig/gc1 | jq
{
  "data": {
    "saaras_db_globalconfig": [
      {
        "globalconfig_id": 237,
        "globalconfig_name": "gc1",
        "globalconfig_type": "globalconfig_ratelimit",
        "config_json": {
          "domain": "enroute",
          "descriptors": [
            {
              "key": "generic_key",
              "value": "default",
              "rate_limit": {
                "unit": "second",
                "requests_per_unit": 10
              }
            }
          ]
        }
      }
    ]
  }
}

Show contents of globalconfig file

$ cat gc1.json
{
  "domain": "enroute",
  "descriptors" :
  [
    {
      "key" : "generic_key",
      "value" : "default",
      "rate_limit" :
      {
        "unit" : "second",
        "requests_per_unit" : 10
      }
    }
  ]
}
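Putting the two pieces together: the per-route filter tags matching requests with the generic_key: default descriptor, and the globalconfig above caps that descriptor at 10 requests per second. A minimal fixed-window sketch of that behavior (illustrative only, not Envoy's actual rate-limit service):

```python
import time

class FixedWindowLimiter:
    """Allow `requests_per_unit` requests per one-second window."""
    def __init__(self, requests_per_unit=10):
        self.limit = requests_per_unit
        self.window = None
        self.count = 0

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        window = int(now)  # one-second fixed windows
        if window != self.window:
            self.window, self.count = window, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # over limit: the proxy would reject the request

limiter = FixedWindowLimiter(requests_per_unit=10)
results = [limiter.allow(now=100.0) for _ in range(12)]
# first 10 requests in the window are allowed, the next 2 are rejected
```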

Associate GlobalConfig with the Proxy

$ curl -s -X POST localhost:1323/proxy/gw/globalconfig/gc1 | jq
{
  "data": {
    "insert_saaras_db_proxy_globalconfig": {
      "affected_rows": 3
    }
  }
}

Dump service
$ curl -s localhost:1323/service/dump/demo | jq
{
  "data": {
    "saaras_db_service": [
      {
        "service_id": 1,
        "service_name": "demo",
        "fqdn": "127.0.0.1",
        "create_ts": "2020-04-21T01:42:33.849806+00:00",
        "routes": [
          {
            "route_id": 1,
            "route_name": "gs_route",
            "route_upstreams": [
              {
                "upstream": {
                  "upstream_id": 1,
                  "upstream_name": "grpc-server",
                  "upstream_ip": "127.0.0.1",
                  "upstream_port": 50053
                }
              }
            ],
            "route_filters": [
              {
                "filter": {
                  "filter_name": "route_rl_filter",
                  "filter_type": "route_filter_ratelimit"
                }
              }
            ],
            "route_prefix": "/echo/HelloWorld"
          }
        ],
        "service_secrets": [],
        "service_filters": [
          {
            "filter": {
              "filter_name": "global_lua_filter",
              "filter_type": "http_filter_lua"
            }
          }
        ]
      }
    ]
  }
}
Dump proxy ‘gw’ config
$ curl -s http://localhost:1323/proxy/dump/gw | jq
{
  "data": {
    "saaras_db_proxy": [
      {
        "proxy_name": "gw",
        "proxy_globalconfigs": [
          {
            "globalconfig": {
              "globalconfig_name": "gc",
              "globalconfig_type": "globalconfig_ratelimit"
            }
          }
        ],
        "proxy_services": [
          {
            "service": {
              "service_name": "demo",
              "fqdn": "127.0.0.1",
              "service_secrets": [],
              "routes": [
                {
                  "route_name": "gs_route",
                  "route_prefix": "/echo/HelloWorld",
                  "route_filters": [
                    {
                      "filter": {
                        "filter_name": "route_rl_filter",
                        "filter_type": "route_filter_ratelimit"
                      }
                    }
                  ],
                  "route_upstreams": [
                    {
                      "upstream": {
                        "upstream_name": "grpc-server",
                        "upstream_ip": "127.0.0.1",
                        "upstream_port": 50053
                      }
                    }
                  ]
                }
              ]
            }
          }
        ]
      }
    ]
  }
}

Test gRPC load balancing

Build the gRPC client/server by cloning the repository here:

git clone https://github.com/saarasio/gettingstarted.git
cd gettingstarted/standalone && mkdir -p build && cd build && cmake .. && make && cd ..

Run the gRPC server on port 50053

./bin/grpc_client_server -role server -host 127.0.0.1 -port 50053

Run the gRPC client and point it at the new listener created on port 8080

./bin/grpc_client_server -role client -host 127.0.0.1 -port 8080 -id 3