
Ingress configuration - APISIX and Nginx #66

@blaziq

Description


The new ingress controller configuration described in the deployment guide:
https://eoepca.readthedocs.io/projects/deploy/en/2.0-beta/prerequisites/ingress-controller/
is not entirely clear.

We have the following issues with that:

  1. From what we understand, APISIX is intended to replace the previously used Nginx as the default ingress controller for the EOEPCA cluster, and a method to deploy APISIX is provided. There is also the possibility to use Nginx as a proxy service in front of APISIX, and a deployment method is provided for that as well. The question is: what if we want, or have to, use Nginx because it is already preconfigured in the cluster we use? For instance, our test cluster is built upon Rancher Kubernetes Engine and comes with Nginx preinstalled in the kube-system namespace. In our case we have full access to the cluster and could modify it to enable APISIX instead, but there may be clusters provided in PaaS mode where the user cannot change the default ingress service that comes with them.
  • Is the use of APISIX mandatory?
  • How does one configure an existing Nginx service to be used alongside APISIX?
  • The ingress service exposes certain ports (for HTTP and HTTPS traffic) which are then mapped in the load balancer to ports 80 and 443 respectively (in our case the LB is HAProxy running on a dedicated VM). How do we avoid port conflicts, or having to set up two separate load balancers, if APISIX and Nginx are both running?
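For reference, the "Nginx in front of APISIX" arrangement described above can be sketched as a single plain Ingress resource that forwards all traffic from the preinstalled Nginx controller to the APISIX gateway service, so one load-balancer frontend (ports 80/443) serves both. The service name `apisix-gateway` and namespace `ingress-apisix` are assumptions here and would need to match the actual APISIX installation:

```yaml
# Sketch only: catch-all Ingress handled by the preinstalled Nginx
# controller, forwarding everything to the APISIX gateway Service.
# Service/namespace names are assumed, not taken from the guide.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apisix-passthrough
  namespace: ingress-apisix
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apisix-gateway   # assumed APISIX data-plane Service
                port:
                  number: 80
```

With a catch-all rule like this, HAProxy keeps pointing only at the Nginx NodePorts and no extra ports need to be exposed for APISIX.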
  2. I can see that new sections have appeared in the deployment guide for individual EOEPCA components, e.g.

Note the ingress for the OAPIP Engine is established using APISIX resources (ApisixRoute, ApisixTls). Ensure that you have the APISIX Ingress Controller installed and configured in your cluster, as described in the APISIX Ingress Controller section.

It is not clear, however, whether all components have been adapted to the APISIX ingress controller by adding appropriate APISIX routes in their deployment configuration.
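For context, the APISIX resources the guide refers to look roughly like the following sketch. The host, namespace, and backend service names here are hypothetical placeholders, not taken from any component's actual chart:

```yaml
# Sketch of an ApisixRoute of the kind the deployment guide mentions.
# All names (namespace, host, serviceName) are illustrative assumptions.
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: oapip-engine
  namespace: processing
spec:
  http:
    - name: oapip
      match:
        hosts:
          - oapip.example.com     # placeholder host
        paths:
          - /*
      backends:
        - serviceName: oapip-engine-service   # placeholder backend
          servicePort: 80
```

A component is only reachable through APISIX if its deployment configuration ships resources like this (or they are added manually), which is exactly why it matters whether all components have been adapted.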

  3. The APISIX ingress controller deployed with the installation method described in the deployment guide runs as a single pod on the control plane. Our Nginx configuration deploys an instance of Nginx on each worker node, and the HAProxy load balancer is configured to route traffic to each of them in round-robin mode. Is deploying the ingress controller on the control plane the right way to do it? In that case, what is the purpose of the external load balancer (provided by the platform or set up externally)?
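The per-node pattern described above can in principle also be reproduced with APISIX: the Apache APISIX Helm chart allows the data plane to run as a DaemonSet rather than a single Deployment. A minimal values sketch, assuming the `apache/apisix` chart and key names that should be verified against the chart version actually in use:

```yaml
# Sketch of Helm values for the apache/apisix chart: run the APISIX
# data plane on every node, mirroring the per-node Nginx setup, so an
# external HAProxy can round-robin across the nodes as before.
# Verify these keys against the chart version in use.
apisix:
  kind: DaemonSet
```

Whether deployed as a single pod or per node, the external load balancer still serves as the stable entry point (single IP, health-checked failover across nodes), which may answer part of the question about its purpose.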
