1. Introduction
In this tutorial, we’ll present the concept of Kubernetes operators and how we can implement them using the Java Operator SDK. To illustrate this, we’ll implement an operator that simplifies the task of deploying an instance of OWASP’s Dependency-Track application to a cluster.
2. What Is a Kubernetes Operator?
In Kubernetes parlance, an Operator is a software component, usually deployed in a cluster, that manages the lifecycle of a set of resources. It extends the native set of controllers, such as the replicaset and job controllers, to manage complex or interrelated components as a single managed unit.
Let’s look at a few common use cases for operators:
- Enforce best practices when deploying applications to a cluster
- Keep track of and recover from the accidental removal or modification of resources used by an application
- Automate housekeeping tasks associated with an application, such as regular backups and cleanups
- Automate off-cluster resource provisioning — for example, storage buckets and certificates
- Improve application developers’ experience when interacting with Kubernetes in general
- Improve overall security by allowing users to manage only application-level resources instead of low-level ones such as pods and deployments
- Expose application-specific resources (defined through Custom Resource Definitions) as regular Kubernetes resources
This last use case is quite interesting. It allows a solution provider to leverage the existing practices around regular Kubernetes resources to manage application-specific resources. The main benefit is that anyone adopting this application can use existing infrastructure-as-code tools.
To give us an idea of the different kinds of available operators, we can check the OperatorHub.io site. There, we’ll find operators for popular databases, API managers, development tools, and others.
3. Operators and CRDs
Custom Resource Definitions, or CRDs for short, are a Kubernetes extension mechanism that allows us to store structured data in a cluster. As with almost everything on this platform, the CRD definition itself is also a resource.
This meta-definition describes the scope of a given CRD instance (namespace-based or global) and the schema used to validate CRD instances. Once registered, users can create CRD instances as if they were native ones. Cluster administrators can also include CRDs as part of role definitions, thus granting access only to authorized users and applications.
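For instance, once a CRD is registered, plain kubectl commands work with its instances just as they do with native resources. Using the deptrackresources CRD we’ll generate later in this tutorial, we could run:
$ kubectl get crds
$ kubectl get deptrackresources --all-namespaces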
Now, registering a CRD by itself does not imply that Kubernetes will use it in any way. As far as Kubernetes is concerned, a CRD instance is just an entry in its internal database. Since none of the standard Kubernetes native controllers know what to do with it, nothing will happen.
This is where the controller part of an operator comes into play. Once deployed, it will watch for events related to the corresponding custom resources and act in response to them.
Here, the act part is the important one. The terminology is inspired by Control Theory, which can be summarized as a feedback loop: the controller continuously compares the desired state declared by the user with the observed state of the cluster, and takes whatever actions are needed to reconcile the two.
4. Implementing an Operator
Let’s review the main tasks we have to complete to create an Operator:
- Define a model of the target resources we’ll manage through the operator
- Create a CRD that captures the parameters needed to deploy those resources
- Create a controller that watches a cluster for events related to the registered CRD
For this tutorial, we’ll implement an operator for the OWASP flagship project, Dependency-Track. This application allows users to track vulnerabilities in libraries used across an organization, thus allowing software security professionals to evaluate and address any issues found.
Dependency-Track’s Docker distribution consists of two components, the API server and the frontend, each with its own image. When deploying them to a Kubernetes cluster, the common practice is to wrap each one in a Deployment to manage the Pods that run these images.
That’s not all, however. We also need a few extra resources for a complete solution:
- Services to act as load balancers in front of each Deployment
- An Ingress to expose the application to the external world
- A PersistentVolumeClaim to store vulnerability definitions downloaded from public sources
- ConfigMap and Secret resources to store generic and sensitive parameters, respectively
Moreover, we also need to properly set liveness/readiness probes, resource limits, and other minutiae that a regular user shouldn’t have to worry about.
Let’s see how we can simplify this task with an Operator.
5. Defining the Model
Our operator will focus on the minimal set of resources needed to run a Dependency-Track system. Fortunately, the provided images have sensible default values, so we only need one piece of information: the external URL used to access the application.
This leaves database and storage settings out for now, but once we get the basics right, adding those features is straightforward.
We will, however, leave some leeway for customization. In particular, it’s convenient to allow users to override the image and version used for the deployments, as they’re constantly evolving.
A complete Dependency-Track installation, then, brings together all the components described above: the two Deployments, their Services, the Ingress, and the supporting configuration and storage resources.
The required model parameters are:
- Kubernetes namespace where the resources will be created
- A name used for the installation and to derive each component name
- The hostname to use with the Ingress resource
- Optional extra annotations to add to the Ingress. We need those because some cloud providers (AWS, for example) require them for the Ingress to work properly, as illustrated below.
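For instance, with the AWS Load Balancer Controller, an Ingress typically needs annotations along these lines (the values here are illustrative):
ingressAnnotations:
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip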
6. Controller Project Setup
The next step would be to define the CRD schema by hand, but since we’re using the Java Operator SDK, an annotation processor will take care of this for us. Instead, let’s move on to the controller project itself.
We’ll start with a standard Spring Boot 3 WebFlux application and add the required dependencies:
<dependency>
    <groupId>io.javaoperatorsdk</groupId>
    <artifactId>operator-framework-spring-boot-starter</artifactId>
    <version>5.4.0</version>
</dependency>
<dependency>
    <groupId>io.javaoperatorsdk</groupId>
    <artifactId>operator-framework-spring-boot-starter-test</artifactId>
    <version>5.4.0</version>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j2-impl</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>crd-generator-apt</artifactId>
    <version>6.9.2</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk18on</artifactId>
    <version>1.77</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk18on</artifactId>
    <version>1.77</version>
</dependency>
The latest version of these dependencies is available on Maven Central:
- operator-framework-spring-boot-starter
- operator-framework-spring-boot-starter-test
- crd-generator-apt
- bcprov-jdk18on
- bcpkix-jdk18on
The first two are required to implement and test the operator, respectively. crd-generator-apt is the annotation processor that generates the CRD definition from annotated classes. Finally, the Bouncy Castle libraries are required to support modern encryption standards.
Notice the exclusion added to the test starter. We’ve removed the log4j dependency because it conflicts with logback.
7. Implementing the Primary Resource
A Primary Resource class represents a CRD that users will deploy into a cluster. It is identified using the @Group and @Version annotations so the CRD annotation processor can generate the appropriate CRD definition at compile time:
@Group("com.baeldung")
@Version("v1")
public class DeptrackResource extends CustomResource<DeptrackSpec, DeptrackStatus> implements Namespaced {

    @JsonIgnore
    public String getFrontendServiceName() {
        return this.getMetadata().getName() + "-" + DeptrackFrontendServiceResource.COMPONENT;
    }

    @JsonIgnore
    public String getApiServerServiceName() {
        return this.getMetadata().getName() + "-" + DeptrackApiServerServiceResource.COMPONENT;
    }
}
Here, we leverage the SDK’s class CustomResource to implement our DeptrackResource. Besides the base class, we’re also using Namespaced, a marker interface that informs the annotation processor that our CRD instances will be deployed to a Kubernetes namespace.
We’ve added just two helper methods to the class, which we’ll later use to derive names for the frontend and API services. We need the @JsonIgnore annotation, in this case, to avoid issues when serializing/deserializing CRD instances in API calls to Kubernetes.
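For reference, here’s an abridged sketch of the CRD that the annotation processor emits under META-INF/fabric8/deptrackresources.com.baeldung-v1.yml (the full file also contains the OpenAPI schema generated from our model classes):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: deptrackresources.com.baeldung
spec:
  group: com.baeldung
  names:
    kind: DeptrackResource
    plural: deptrackresources
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true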
8. Specification and Status Classes
The CustomResource class requires two type parameters:
- A specification class with the parameters supported by our model
- A status class with information about the dynamic state of our system
In our case, we have just a few parameters, so this specification is quite simple:
public class DeptrackSpec {
    private String apiServerImage = "dependencytrack/apiserver";
    private String apiServerVersion = "";
    private String frontendImage = "dependencytrack/frontend";
    private String frontendVersion = "";
    private String ingressHostname;
    private Map<String, String> ingressAnnotations;
    // ... getters/setters omitted
}
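Thanks to these defaults, a user only needs to provide what differs from the stock setup. For example, pinning the API server to a specific version takes just one extra field in the manifest (the version number below is illustrative):
spec:
  apiServerVersion: "4.11.0"
  ingressHostname: deptrack.example.com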
As for the status class, we’ll just extend ObservedGenerationAwareStatus:
public class DeptrackStatus extends ObservedGenerationAwareStatus {
}
Using this approach, the SDK will automatically increment the observedGeneration status field on each update. This is a common practice used by controllers to track changes in a resource.
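As a result, once the operator processes a change, the instance’s status subresource carries this counter (the value shown is illustrative):
status:
  observedGeneration: 2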
9. Reconciler
Next, we need to create a Reconciler class that is responsible for managing the overall state of the Dependency-Track system. Our class must implement the SDK’s Reconciler interface, which takes the resource class as its type parameter:
@ControllerConfiguration(dependents = {
    @Dependent(name = DeptrackApiServerDeploymentResource.COMPONENT, type = DeptrackApiServerDeploymentResource.class),
    @Dependent(name = DeptrackFrontendDeploymentResource.COMPONENT, type = DeptrackFrontendDeploymentResource.class),
    @Dependent(name = DeptrackApiServerServiceResource.COMPONENT, type = DeptrackApiServerServiceResource.class),
    @Dependent(name = DeptrackFrontendServiceResource.COMPONENT, type = DeptrackFrontendServiceResource.class),
    @Dependent(type = DeptrackIngressResource.class)
})
@Component
public class DeptrackOperatorReconciler implements Reconciler<DeptrackResource> {
    @Override
    public UpdateControl<DeptrackResource> reconcile(DeptrackResource resource, Context<DeptrackResource> context) throws Exception {
        // The dependent resources do all the actual work, so there's nothing to update on the primary resource
        return UpdateControl.noUpdate();
    }
}
The key point here is the @ControllerConfiguration annotation. Its dependents property lists individual resources whose lifecycle will be linked to the primary resource.
For deployments and services, we need to specify a name property in addition to the resource’s type to distinguish them. As for the Ingress, there’s no need for a name since there’s just one for each deployed Dependency-Track resource.
Notice that we’ve also added a @Component annotation. We need this so the operator’s autoconfiguration logic detects the reconciler and adds it to its internal registry.
10. Dependent Resource Classes
For each resource that we want to create in the cluster as a result of a CRD deployment, we need to implement a KubernetesDependentResource class. These classes must be annotated with @KubernetesDependent and are responsible for managing the lifecycle of those resources in response to changes in the primary resource.
The SDK provides the CRUDKubernetesDependentResource utility class that vastly simplifies this task. We just need to override the desired() method, which returns a description of the desired state for the dependent resource:
@KubernetesDependent(resourceDiscriminator = DeptrackApiServerDeploymentResource.Discriminator.class)
public class DeptrackApiServerDeploymentResource extends CRUDKubernetesDependentResource<Deployment, DeptrackResource> {
    public static final String COMPONENT = "api-server";
    private Deployment template;

    public DeptrackApiServerDeploymentResource() {
        super(Deployment.class);
        // Load a base manifest to use as the starting point for the desired state
        this.template = BuilderHelper.loadTemplate(Deployment.class, "templates/api-server-deployment.yaml");
    }

    @Override
    protected Deployment desired(DeptrackResource primary, Context<DeptrackResource> context) {
        // Derive this component's metadata (name, labels) from the primary resource
        ObjectMeta meta = fromPrimary(primary, COMPONENT)
          .build();
        return new DeploymentBuilder(template)
          .withMetadata(meta)
          .withSpec(buildSpec(primary, meta))
          .build();
    }

    private DeploymentSpec buildSpec(DeptrackResource primary, ObjectMeta primaryMeta) {
        return new DeploymentSpecBuilder()
          .withSelector(buildSelector(primaryMeta.getLabels()))
          .withReplicas(1)
          .withTemplate(buildPodTemplate(primary, primaryMeta))
          .build();
    }

    private LabelSelector buildSelector(Map<String, String> labels) {
        return new LabelSelectorBuilder()
          .addToMatchLabels(labels)
          .build();
    }

    private PodTemplateSpec buildPodTemplate(DeptrackResource primary, ObjectMeta primaryMeta) {
        return new PodTemplateSpecBuilder()
          .withMetadata(primaryMeta)
          .withSpec(buildPodSpec(primary))
          .build();
    }

    private PodSpec buildPodSpec(DeptrackResource primary) {
        // Use the image and version from the spec, falling back to sensible defaults
        String imageVersion = StringUtils.hasText(primary.getSpec().getApiServerVersion()) ?
          ":" + primary.getSpec().getApiServerVersion().trim() : "";
        String imageName = StringUtils.hasText(primary.getSpec().getApiServerImage()) ?
          primary.getSpec().getApiServerImage().trim() : Constants.DEFAULT_API_SERVER_IMAGE;
        return new PodSpecBuilder(template.getSpec().getTemplate().getSpec())
          .editContainer(0)
          .withImage(imageName + imageVersion)
          .and()
          .build();
    }
}
In this case, we create the Deployment using the available builder classes. The data comes partly from metadata extracted from the primary resource passed to the method and partly from a template read at initialization time. This approach allows us to use existing, battle-proven deployments as a template and modify only what’s really needed.
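For reference, the template might look something like this hypothetical sketch (the actual file ships with the project’s source code; only the fields the builders don’t override really matter here):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: api-server
          image: dependencytrack/apiserver
          ports:
            - containerPort: 8080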
Finally, we need to specify a Discriminator class, which the operator engine uses to target the right resource class when processing events from multiple sources of the same kind. Here, we’ll use an implementation based on the ResourceIDMatcherDiscriminator utility class available in the framework:
class Discriminator extends ResourceIDMatcherDiscriminator<Deployment, DeptrackResource> {
    public Discriminator() {
        super(COMPONENT, (p) -> new ResourceID(
          p.getMetadata().getName() + "-" + COMPONENT,
          p.getMetadata().getNamespace()));
    }
}
The utility class requires an event source name and a mapping function. The latter takes a primary resource instance and returns the resource identifier (namespace + name) for the associated component.
Since all resource classes share the same basic structure, we won’t reproduce them here. Instead, we recommend checking the source code to see how each resource is built.
11. Local Testing
Since the controller is just a regular Spring application, we can use regular test frameworks to create unit and integration tests for our application.
The Java Operator SDK also offers a convenient mock Kubernetes implementation that helps with simple test cases. To use this mock implementation in test classes, we use the @EnableMockOperator annotation together with the standard @SpringBootTest:
@SpringBootTest
@EnableMockOperator(crdPaths = "classpath:META-INF/fabric8/deptrackresources.com.baeldung-v1.yml")
class ApplicationUnitTest {

    @Autowired
    KubernetesClient client;

    @Test
    void whenContextLoaded_thenCrdRegistered() {
        assertThat(client
          .apiextensions()
          .v1()
          .customResourceDefinitions()
          .withName("deptrackresources.com.baeldung")
          .get())
          .isNotNull();
    }
}
The crdPaths property contains the location where the annotation processor creates the CRD definition YAML file. During test initialization, the mock Kubernetes service will automatically register it so we can create a CRD instance and check whether the expected resources are correctly created.
The SDK’s test infrastructure also configures a Kubernetes client that we can use to simulate deployments and check whether the expected resources are correctly created. Notice that there’s no need for a working Kubernetes cluster!
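For instance, a test along these lines (a sketch, assuming Awaitility is on the test classpath; the resource names follow the conventions from our dependent resource classes) could verify that applying a custom resource yields the expected Deployment:
@Test
void whenResourceCreated_thenApiServerDeploymentCreated() {
    // Build a minimal custom resource, mirroring the YAML manifest we'll deploy later
    DeptrackResource resource = new DeptrackResource();
    resource.setMetadata(new ObjectMetaBuilder()
      .withNamespace("test")
      .withName("deptrack1")
      .build());
    resource.setSpec(new DeptrackSpec());
    client.resource(resource).create();

    // The reconciler should eventually create the dependent api-server Deployment
    await().atMost(Duration.ofSeconds(10)).untilAsserted(() -> assertThat(
      client.apps().deployments()
        .inNamespace("test")
        .withName("deptrack1-" + DeptrackApiServerDeploymentResource.COMPONENT)
        .get())
      .isNotNull());
}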
12. Packaging and Deployment
To package our controller project, we can use a Dockerfile or, even better, Spring Boot’s build-image goal. We recommend the latter, as it ensures that the image follows recommended best practices regarding security and layer organization.
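For instance, assuming a Maven-based build, a single command produces the image:
$ mvn spring-boot:build-image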
Once we’ve published the image to a local or remote registry, we must create a YAML manifest to deploy the controller into an existing cluster.
This manifest contains the Deployment that runs the controller itself, along with its supporting resources:
- The CRD definition
- A namespace where the controller will “live”
- A Cluster Role listing all APIs used by the controller
- A Service Account
- A Cluster Role Binding that links the role to the account
The resulting manifest is available in our GitHub repository.
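To give an idea of its shape, the access-control part of such a manifest looks roughly like this (all names here are illustrative):
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: deptrack-operator
  name: deptrack-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deptrack-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deptrack-operator
subjects:
  - kind: ServiceAccount
    namespace: deptrack-operator
    name: deptrack-operator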
13. CRD Deployment Test
To complete our tutorial, let’s create a simple Dependency-Track CRD manifest and deploy it. We’ll deploy it into a dedicated namespace (“test”) and expose the application through an Ingress.
For our test, we’re using a local Kubernetes that listens on IP address 172.31.42.16, so we’ll use deptrack.172.31.42.16.nip.io as the hostname. NIP.IO is a DNS service that resolves any hostname in the form *.1.2.3.4.nip.io to the IP address 1.2.3.4, so we don’t need to set up any DNS entry.
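We can quickly verify this resolution behavior (assuming the dig utility is available):
$ dig +short deptrack.172.31.42.16.nip.io
172.31.42.16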
Let’s have a look at the deployment manifest:
apiVersion: com.baeldung/v1
kind: DeptrackResource
metadata:
  namespace: test
  name: deptrack1
  labels:
    project: tutorials
  annotations:
    author: Baeldung
spec:
  ingressHostname: deptrack.172.31.42.16.nip.io
Now, let’s deploy it with kubectl:
$ kubectl apply -f k8s/test-resource.yaml
deptrackresource.com.baeldung/deptrack1 created
Looking at the resources in the test namespace, we can see that the controller reacted to the CRD creation and created the dependent resources:
$ kubectl get --namespace test deployments
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deptrack1-api-server   0/1     1            0           62s
deptrack1-frontend     1/1     1            1           62s

$ kubectl get --namespace test services
NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
deptrack1-frontend-service   ClusterIP   10.43.122.76   <none>        8080/TCP   2m17s

$ kubectl get --namespace test ingresses
NAME                CLASS     HOSTS                          ADDRESS        PORTS   AGE
deptrack1-ingress   traefik   deptrack.172.31.42.16.nip.io   172.31.42.16   80      2m53s
As expected, the test namespace now has two deployments, two services, and an ingress. If we open a browser and point to https://deptrack.172.31.42.16.nip.io, we’ll see the application’s login page. This shows that the solution was correctly deployed.
To complete the test, let’s remove the CRD:
$ kubectl delete --namespace test deptrackresource/deptrack1
deptrackresource.com.baeldung "deptrack1" deleted
Since Kubernetes knows which resources are linked to the CRD, they’ll also be deleted:
$ kubectl get --namespace test deployments
No resources found in test namespace.
14. Conclusion
In this tutorial, we’ve shown how to implement a basic Kubernetes Operator using the Java Operator SDK. Despite the amount of required boilerplate code, the implementation is straightforward.
Also, the SDK handles most of the heavy lifting of state reconciliation, leaving developers the task of defining the best way to handle complex deployments.
As usual, all code is available over on GitHub.