The relevance of entry points within Gatekeeper and their value to a team
Following on from my last post about Gatekeeper and the Kubernetes API server pipeline, another point worth discussing is entry points: with a little additional effort, they can give your developers far more informative policy violations.
What is an entry point?
An entry point is the path within the object under review at which a policy references the fields it needs to evaluate. The example below compares the entry point for a Pod with the entry point for a Deployment.
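As a sketch of the difference (the privileged-container check itself is illustrative, not a rule from the original post), both Rego rules below do the same thing; only the path to the containers array changes:

```rego
# Pod entry point: containers sit directly under spec
violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  container.securityContext.privileged
  msg := sprintf("privileged container not allowed: %v", [container.name])
}

# Deployment entry point: containers are nested inside the pod template
violation[{"msg": msg}] {
  container := input.review.object.spec.template.spec.containers[_]
  container.securityContext.privileged
  msg := sprintf("privileged container not allowed: %v", [container.name])
}
```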
As you can see, referencing the equivalent object in a Deployment requires a more deeply nested entry point. The two kinds belong to different API groups and have different schemas: a Deployment defines its containers inside a pod template, at spec.template.spec.containers, rather than directly at spec.containers as a Pod does.
However, upon closer inspection, we see that a ConstraintTemplate whose Constraint applies exclusively to Pods is still sufficient to block Deployments in Kubernetes. Why is this? Breaking it down, a Deployment is essentially a versioned ReplicaSet that manages Pods. The ReplicaSet is responsible for creating the pods, and it is those pods that Gatekeeper evaluates and denies.
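A minimal Pod-only template and Constraint of this shape might look as follows (the names and the privileged-container check are illustrative; the key detail is that the match clause targets Pods in the core API group, so the denial fires when the ReplicaSet tries to create them):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPrivileged
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyprivileged

        # Pod entry point only
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged
          msg := sprintf("privileged container not allowed: %v", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPrivileged
metadata:
  name: deny-privileged-pods
spec:
  match:
    kinds:
      - apiGroups: [""]   # core API group: matches Pods only
        kinds: ["Pod"]
```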
We can indeed see that the Pod violation fires when the ReplicaSet attempts to create the pods; however, the reason for the failure isn't apparent at the Deployment level.
The issue is that while this architecture may be evident to a platform engineer, an application developer may not know the inner workings of Kubernetes or Gatekeeper well enough to follow it. These intricacies can leave them confused about why their pods fail to start, and unaware that they need to inspect the pods specifically to find the denial message.
Deployment Entry Point
When we additionally define a Deployment violation rule, with a Constraint matching the Deployment's API group, the request is rejected at admission time when we try to apply the Deployment, and the violation message is returned directly to whoever applied it.
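A sketch of what that Deployment-level rule and its Constraint might look like (names are illustrative; the Rego mirrors the Pod rule, only the entry point and the match clause change):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyprivilegedworkloads
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPrivilegedWorkloads
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyprivilegedworkloads

        # Deployment entry point: containers live under the pod template
        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          container.securityContext.privileged
          msg := sprintf("privileged container not allowed: %v", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPrivilegedWorkloads
metadata:
  name: deny-privileged-workloads
spec:
  match:
    kinds:
      - apiGroups: ["apps"]   # same entry point works for these kinds
        kinds: ["Deployment", "DaemonSet", "StatefulSet"]
```

Because DaemonSets and StatefulSets share the same spec.template.spec nesting, one rule in the apps API group can cover all three kinds.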
The same error occurs with similar resources in the matching API group (DaemonSets, StatefulSets). The reason your Deployment is failing is now immediately apparent, and no time is wasted digging around for potential causes by developers who are new to working in the Kubernetes world.
Takeaways
Ultimately, there is a little more work involved in creating the additional violations: test suites need to expand to accommodate them, and there's no guarantee that implementing the Rego logic will be as straightforward as in the scenario above.
However, the increased verbosity can be well worth it at scale, where many global policies are enforced across many consumers.