Flows do not persist pod restart #201
Type of question
Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop?
General help with Nifikop.
Question
What did you do?
I deployed NiFi with 2 pods via NifiKop. After creating a flow in the UI, I also exported the process groups to a nifi-registry. The cluster ran for days. I then deleted the cluster pods to test resilience. This is the CR I used:
apiVersion: nifi.orange.com/v1alpha1
kind: NifiCluster
metadata:
  name: simplenifi
  namespace: dataops
spec:
  service:
    headlessEnabled: true
  zkAddress: "zookeeper.dataops.svc.cluster.local.:2181"
  zkPath: "/simplenifi"
  clusterImage: "apache/nifi:1.12.1"
  oneNifiNodePerNode: false
  nodeConfigGroups:
    default_group:
      isNode: true
      imagePullPolicy: IfNotPresent
      storageConfigs:
        - mountPath: "/opt/nifi/nifi-current/logs"
          name: logs
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "gp2"
            resources:
              requests:
                storage: 10Gi
      serviceAccountName: "default"
      resourcesRequirements:
        limits:
          cpu: "0.5"
          memory: 2Gi
        requests:
          cpu: "0.5"
          memory: 2Gi
  clientType: "basic"
  nodes:
    - id: 1
      nodeConfigGroup: "default_group"
    - id: 2
      nodeConfigGroup: "default_group"
  propagateLabels: true
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  listenersConfig:
    internalListeners:
      - type: "http"
        name: "http"
        containerPort: 8080
      - type: "cluster"
        name: "cluster"
        containerPort: 6007
      - type: "s2s"
        name: "s2s"
        containerPort: 10000
What did you expect to see?
I expected the cluster to run properly and survive restarts, since PVs are created, and the pipelines to continue running after the pods came back up.
What did you see instead? Under which circumstances?
When the pods came back up and were healthy, the UI had no flows or process groups. The registry client configuration had also disappeared. I had to manually re-register the nifi-registry, re-import the process groups, add the secrets, and restart the pipelines.
- Why would this happen when NiFi has persistent volumes?
- How can this behaviour be stopped?
- How can I persist the flows, or at least automate re-importing and restarting the pipelines from nifi-registry? (See the sketch below.)
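For reference, the only path the CR above persists is /opt/nifi/nifi-current/logs, so the flow definition (flow.xml.gz under conf) and the FlowFile, content and provenance repositories still live on the ephemeral container filesystem and are lost when a pod is deleted. Below is a minimal sketch of extra storageConfigs that might keep that state, assuming a default NiFi 1.x layout under /opt/nifi/nifi-current; the mount paths, volume names and sizes here are my assumptions and would need to match the nifi.properties the operator actually generates.

storageConfigs:
  # existing volume from the CR above
  - mountPath: "/opt/nifi/nifi-current/logs"
    name: logs
    pvcSpec:
      accessModes:
        - ReadWriteOnce
      storageClassName: "gp2"
      resources:
        requests:
          storage: 10Gi
  # assumption: flow.xml.gz (the canvas definition) and the registry client
  # configuration are read from conf in a default NiFi 1.x layout
  - mountPath: "/opt/nifi/nifi-current/conf"
    name: conf
    pvcSpec:
      accessModes:
        - ReadWriteOnce
      storageClassName: "gp2"
      resources:
        requests:
          storage: 1Gi
  # queued FlowFiles, their content and their provenance live in these repositories
  - mountPath: "/opt/nifi/nifi-current/flowfile_repository"
    name: flowfile-repository
    pvcSpec:
      accessModes:
        - ReadWriteOnce
      storageClassName: "gp2"
      resources:
        requests:
          storage: 10Gi
  - mountPath: "/opt/nifi/nifi-current/content_repository"
    name: content-repository
    pvcSpec:
      accessModes:
        - ReadWriteOnce
      storageClassName: "gp2"
      resources:
        requests:
          storage: 10Gi
  - mountPath: "/opt/nifi/nifi-current/provenance_repository"
    name: provenance-repository
    pvcSpec:
      accessModes:
        - ReadWriteOnce
      storageClassName: "gp2"
      resources:
        requests:
          storage: 10Gi
  # database_repository (flow configuration history) and state could be
  # persisted the same way if needed

Is extending storageConfigs along these lines the intended way to keep the flow across pod restarts?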
Environment
- nifikop version: v0.7.5-release
- Kubernetes version information:
  Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
- NiFi version: apache/nifi:1.12.1