1. Overview
Resilience4j is a lightweight fault tolerance library that provides a variety of fault tolerance and stability patterns to a web application. In this tutorial, we’ll learn how to use this library with a simple Spring Boot application.
2. Setup
In this section, let's focus on setting up the critical aspects of our Spring Boot project.
2.1. Maven Dependencies
Firstly, we’ll need to add the spring-boot-starter-web dependency to bootstrap a simple web application:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
Next, we need the resilience4j-spring-boot2 and spring-boot-starter-aop dependencies to use the features of the Resilience4j library through annotations in our Spring Boot application:
<dependency>
<groupId>io.github.resilience4j</groupId>
<artifactId>resilience4j-spring-boot2</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
Additionally, we need to add the spring-boot-starter-actuator dependency to monitor the application's current state through a set of exposed endpoints:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Lastly, let's add the wiremock-jre8 dependency, as it'll help us test our REST APIs using a mock HTTP server:
<dependency>
<groupId>com.github.tomakehurst</groupId>
<artifactId>wiremock-jre8</artifactId>
<scope>test</scope>
</dependency>
2.2. RestController and External API Caller
While using different features of the Resilience4j library, our web application needs to interact with an external API. So, let's add a bean for the RestTemplate that will help us make API calls:
@Bean
public RestTemplate restTemplate() {
return new RestTemplateBuilder().rootUri("http://localhost:9090")
.build();
}
Next, let’s define the ExternalAPICaller class as a Component and use the restTemplate bean as a member:
@Component
public class ExternalAPICaller {
private final RestTemplate restTemplate;
@Autowired
public ExternalAPICaller(RestTemplate restTemplate) {
this.restTemplate = restTemplate;
}
}
After this, we can define the ResilientAppController class that exposes REST API endpoints and internally uses the ExternalAPICaller bean to call the external API:
@RestController
@RequestMapping("/api/")
public class ResilientAppController {
    private final ExternalAPICaller externalAPICaller;
    @Autowired
    public ResilientAppController(ExternalAPICaller externalAPICaller) {
        this.externalAPICaller = externalAPICaller;
    }
}
2.3. Actuator Endpoints
We can expose health endpoints via the Spring Boot actuator to know the exact state of the application at any given time.
So, let’s add the configuration to the application.properties file and enable the endpoints:
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always
management.health.circuitbreakers.enabled=true
management.health.ratelimiters.enabled=true
Additionally, as needed, we'll add feature-specific configuration to the same application.properties file.
2.4. Unit Test
In a real-world scenario, our web application would call an external service. However, we can simulate such a running service by starting a mock server using the WireMockExtension class.
So, let’s define EXTERNAL_SERVICE as a static member in the ResilientAppControllerUnitTest class:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class ResilientAppControllerUnitTest {
@RegisterExtension
static WireMockExtension EXTERNAL_SERVICE = WireMockExtension.newInstance()
.options(WireMockConfiguration.wireMockConfig()
.port(9090))
.build();
Additionally, let’s add an instance of TestRestTemplate to call the APIs:
@Autowired
private TestRestTemplate restTemplate;
2.5. Exception Handler
Depending on the fault tolerance pattern in play, the Resilience4j library protects service resources by throwing an exception. However, these exceptions should translate into an HTTP response with a meaningful status code for the client.
So, let’s define the ApiExceptionHandler class to hold handlers for different exceptions:
@ControllerAdvice
public class ApiExceptionHandler {
}
We’ll add handlers in this class as we explore different fault tolerance patterns.
3. Circuit Breaker
The circuit breaker pattern protects a downstream service by restricting the upstream service from calling it during a partial or complete outage.
Let’s start by exposing the /api/circuit-breaker endpoint and adding the @CircuitBreaker annotation:
@GetMapping("/circuit-breaker")
@CircuitBreaker(name = "CircuitBreakerService")
public String circuitBreakerApi() {
return externalAPICaller.callApi();
}
We also need to define the callApi() method in the ExternalAPICaller class to call the external endpoint /api/external:
public String callApi() {
return restTemplate.getForObject("/api/external", String.class);
}
Next, let’s add the configuration for the circuit breaker in the application.properties file:
resilience4j.circuitbreaker.instances.CircuitBreakerService.failure-rate-threshold=50
resilience4j.circuitbreaker.instances.CircuitBreakerService.minimum-number-of-calls=5
resilience4j.circuitbreaker.instances.CircuitBreakerService.automatic-transition-from-open-to-half-open-enabled=true
resilience4j.circuitbreaker.instances.CircuitBreakerService.wait-duration-in-open-state=5s
resilience4j.circuitbreaker.instances.CircuitBreakerService.permitted-number-of-calls-in-half-open-state=3
resilience4j.circuitbreaker.instances.CircuitBreakerService.sliding-window-size=10
resilience4j.circuitbreaker.instances.CircuitBreakerService.sliding-window-type=count_based
Essentially, the configuration tolerates up to a 50% failure rate across a count-based sliding window of 10 calls (evaluated once at least 5 calls are recorded) while the circuit is closed. Once that threshold is crossed, the circuit opens and starts rejecting requests with the CallNotPermittedException. So, it'd be a good idea to add a handler for this exception in the ApiExceptionHandler class:
@ExceptionHandler({CallNotPermittedException.class})
@ResponseStatus(HttpStatus.SERVICE_UNAVAILABLE)
public void handleCallNotPermittedException() {
}
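For reference, the property-based settings above map onto Resilience4j's fluent builder API. Here's a minimal programmatic sketch of the same CircuitBreakerService configuration; note that our application keeps using the Spring Boot properties, so this block is illustrative only:

import java.time.Duration;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

// Illustrative equivalent of the CircuitBreakerService properties, built in code
CircuitBreakerConfig config = CircuitBreakerConfig.custom()
    .failureRateThreshold(50)                       // open the circuit at a 50% failure rate
    .minimumNumberOfCalls(5)                        // evaluate only after 5 recorded calls
    .automaticTransitionFromOpenToHalfOpenEnabled(true)
    .waitDurationInOpenState(Duration.ofSeconds(5))
    .permittedNumberOfCallsInHalfOpenState(3)
    .slidingWindowSize(10)
    .slidingWindowType(CircuitBreakerConfig.SlidingWindowType.COUNT_BASED)
    .build();
CircuitBreaker circuitBreaker = CircuitBreaker.of("CircuitBreakerService", config);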
Finally, let’s test the /api/circuit-breaker API endpoint by simulating a scenario of downstream service downtime using EXTERNAL_SERVICE:
@Test
public void testCircuitBreaker() {
EXTERNAL_SERVICE.stubFor(WireMock.get("/api/external")
.willReturn(serverError()));
IntStream.rangeClosed(1, 5)
.forEach(i -> {
ResponseEntity<String> response = restTemplate.getForEntity("/api/circuit-breaker", String.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.INTERNAL_SERVER_ERROR);
});
IntStream.rangeClosed(1, 5)
.forEach(i -> {
ResponseEntity<String> response = restTemplate.getForEntity("/api/circuit-breaker", String.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.SERVICE_UNAVAILABLE);
});
EXTERNAL_SERVICE.verify(5, getRequestedFor(urlEqualTo("/api/external")));
}
We can see that the first five calls failed, as the downstream service was down. After that, the circuit switched to the open state. Hence, the subsequent five attempts were rejected with the 503 HTTP status code, without actually calling the underlying API.
4. Retry
The retry pattern provides resiliency to a system by recovering from transient issues. Let’s start by adding the /api/retry API endpoint with @Retry annotation:
@GetMapping("/retry")
@Retry(name = "retryApi", fallbackMethod = "fallbackAfterRetry")
public String retryApi() {
return externalAPICaller.callApi();
}
Further, we can optionally supply a fallback mechanism for when all the retry attempts fail. Here, we've provided fallbackAfterRetry as the fallback method; note that a fallback method must have the same return type as the original method and accept the triggering exception as a parameter:
public String fallbackAfterRetry(Exception ex) {
return "all retries have exhausted";
}
Next, let’s update the application.properties file to add the configuration that’ll govern the behavior of retries:
resilience4j.retry.instances.retryApi.max-attempts=3
resilience4j.retry.instances.retryApi.wait-duration=1s
resilience4j.retry.metrics.legacy.enabled=true
resilience4j.retry.metrics.enabled=true
As such, we've configured a maximum of three attempts, with a 1s wait between them.
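For comparison, a minimal programmatic sketch of the same retryApi settings using Resilience4j's core builder API (again, not needed in our property-driven setup):

import java.time.Duration;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;

// Illustrative equivalent of the retryApi properties
RetryConfig retryConfig = RetryConfig.custom()
    .maxAttempts(3)                         // give up after three attempts
    .waitDuration(Duration.ofSeconds(1))    // wait 1s between attempts
    .build();
Retry retry = Retry.of("retryApi", retryConfig);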
Finally, let’s test the retry behavior of the /api/retry API endpoint:
@Test
public void testRetry() {
EXTERNAL_SERVICE.stubFor(WireMock.get("/api/external")
.willReturn(ok()));
ResponseEntity<String> response1 = restTemplate.getForEntity("/api/retry", String.class);
EXTERNAL_SERVICE.verify(1, getRequestedFor(urlEqualTo("/api/external")));
EXTERNAL_SERVICE.resetRequests();
EXTERNAL_SERVICE.stubFor(WireMock.get("/api/external")
.willReturn(serverError()));
ResponseEntity<String> response2 = restTemplate.getForEntity("/api/retry", String.class);
assertEquals("all retries have exhausted", response2.getBody());
EXTERNAL_SERVICE.verify(3, getRequestedFor(urlEqualTo("/api/external")));
}
We can notice that in the first scenario, there were no issues, so a single attempt was sufficient. On the other hand, when there was an issue, there were three attempts, after which the API responded via the fallback mechanism.
5. Time Limiter
We can use the time limiter pattern to set a threshold timeout value for async calls made to external systems. Note that the annotated method must return an async type, such as a CompletableFuture.
Let’s add the /api/time-limiter API endpoint that internally calls a slow API:
@GetMapping("/time-limiter")
@TimeLimiter(name = "timeLimiterApi")
public CompletableFuture<String> timeLimiterApi() {
return CompletableFuture.supplyAsync(externalAPICaller::callApiWithDelay);
}
Further, let's simulate a delay in the external API call by sleeping inside the callApiWithDelay() method:
public String callApiWithDelay() {
String result = restTemplate.getForObject("/api/external", String.class);
try {
Thread.sleep(5000);
} catch (InterruptedException ignore) {
}
return result;
}
Next, we need to provide the configuration for the timeLimiterApi in the application.properties file:
resilience4j.timelimiter.metrics.enabled=true
resilience4j.timelimiter.instances.timeLimiterApi.timeout-duration=2s
resilience4j.timelimiter.instances.timeLimiterApi.cancel-running-future=true
We can note that the timeout threshold is set to 2s, after which the Resilience4j library internally cancels the async operation with a TimeoutException. So, let's add a handler for this exception in the ApiExceptionHandler class to return an API response with the 408 HTTP status code:
@ExceptionHandler({TimeoutException.class})
@ResponseStatus(HttpStatus.REQUEST_TIMEOUT)
public void handleTimeoutException() {
}
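Similarly, a minimal programmatic sketch of the timeLimiterApi settings, shown for reference only:

import java.time.Duration;
import io.github.resilience4j.timelimiter.TimeLimiter;
import io.github.resilience4j.timelimiter.TimeLimiterConfig;

// Illustrative equivalent of the timeLimiterApi properties
TimeLimiterConfig timeLimiterConfig = TimeLimiterConfig.custom()
    .timeoutDuration(Duration.ofSeconds(2))  // time out after 2s
    .cancelRunningFuture(true)               // cancel the future on timeout
    .build();
TimeLimiter timeLimiter = TimeLimiter.of("timeLimiterApi", timeLimiterConfig);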
Finally, let’s verify the configured time limiter pattern for the /api/time-limiter API endpoint:
@Test
public void testTimeLimiter() {
EXTERNAL_SERVICE.stubFor(WireMock.get("/api/external").willReturn(ok()));
ResponseEntity<String> response = restTemplate.getForEntity("/api/time-limiter", String.class);
assertThat(response.getStatusCode()).isEqualTo(HttpStatus.REQUEST_TIMEOUT);
EXTERNAL_SERVICE.verify(1, getRequestedFor(urlEqualTo("/api/external")));
}
As expected, since the downstream API call takes about 5 seconds to complete, well beyond the 2s threshold, we witnessed a timeout for the API call.
6. Bulkhead
The bulkhead pattern limits the maximum number of concurrent calls to an external service.
Let’s start by adding the /api/bulkhead API endpoint with the @Bulkhead annotation:
@GetMapping("/bulkhead")
@Bulkhead(name = "bulkheadApi")
public String bulkheadApi() {
return externalAPICaller.callApi();
}
Next, let’s define the configuration in the application.properties file to control the bulkhead functionality:
resilience4j.bulkhead.metrics.enabled=true
resilience4j.bulkhead.instances.bulkheadApi.max-concurrent-calls=3
resilience4j.bulkhead.instances.bulkheadApi.max-wait-duration=1ms
With this, we limit the maximum number of concurrent calls to 3, and each thread waits for at most 1ms when the bulkhead is full. After that, requests are rejected with the BulkheadFullException. Further, we'd want to return a meaningful HTTP status code to the client, so let's add an exception handler:
@ExceptionHandler({ BulkheadFullException.class })
@ResponseStatus(HttpStatus.BANDWIDTH_LIMIT_EXCEEDED)
public void handleBulkheadFullException() {
}
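As with the earlier patterns, the same bulkhead settings can be sketched with the programmatic API; this is for reference only, since our app is configured through properties:

import java.time.Duration;
import io.github.resilience4j.bulkhead.Bulkhead;
import io.github.resilience4j.bulkhead.BulkheadConfig;

// Illustrative equivalent of the bulkheadApi properties
BulkheadConfig bulkheadConfig = BulkheadConfig.custom()
    .maxConcurrentCalls(3)                  // at most 3 concurrent calls
    .maxWaitDuration(Duration.ofMillis(1))  // wait 1ms before rejecting
    .build();
Bulkhead bulkhead = Bulkhead.of("bulkheadApi", bulkheadConfig);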
Finally, let's test the bulkhead behavior by making five calls in parallel:
@Test
public void testBulkhead() throws InterruptedException {
EXTERNAL_SERVICE.stubFor(WireMock.get("/api/external")
.willReturn(ok()));
Map<Integer, Integer> responseStatusCount = new ConcurrentHashMap<>();
IntStream.rangeClosed(1, 5)
.parallel()
.forEach(i -> {
ResponseEntity<String> response = restTemplate.getForEntity("/api/bulkhead", String.class);
int statusCode = response.getStatusCodeValue();
responseStatusCount.merge(statusCode, 1, Integer::sum);
});
assertEquals(2, responseStatusCount.keySet().size());
assertTrue(responseStatusCount.containsKey(BANDWIDTH_LIMIT_EXCEEDED.value()));
assertTrue(responseStatusCount.containsKey(OK.value()));
EXTERNAL_SERVICE.verify(3, getRequestedFor(urlEqualTo("/api/external")));
}
We notice that only three requests were successful, whereas the other requests were rejected with the BANDWIDTH_LIMIT_EXCEEDED HTTP status code.
7. Rate Limiter
The rate limiter pattern limits the rate of requests to a resource.
Let’s start by adding the /api/rate-limiter API endpoint with the @RateLimiter annotation:
@GetMapping("/rate-limiter")
@RateLimiter(name = "rateLimiterApi")
public String rateLimitApi() {
return externalAPICaller.callApi();
}
Next, let’s define the configuration for the rate limiter in the application.properties file:
resilience4j.ratelimiter.metrics.enabled=true
resilience4j.ratelimiter.instances.rateLimiterApi.register-health-indicator=true
resilience4j.ratelimiter.instances.rateLimiterApi.limit-for-period=5
resilience4j.ratelimiter.instances.rateLimiterApi.limit-refresh-period=60s
resilience4j.ratelimiter.instances.rateLimiterApi.timeout-duration=0s
resilience4j.ratelimiter.instances.rateLimiterApi.allow-health-indicator-to-fail=true
resilience4j.ratelimiter.instances.rateLimiterApi.subscribe-for-events=true
resilience4j.ratelimiter.instances.rateLimiterApi.event-consumer-buffer-size=50
With this configuration, we want to limit the API calling rate to 5 requests per minute, with no waiting. Once the allowed rate threshold is reached, requests are rejected with the RequestNotPermitted exception. So, let's define a handler in the ApiExceptionHandler class to translate it into a meaningful HTTP status response code:
@ExceptionHandler({ RequestNotPermitted.class })
@ResponseStatus(HttpStatus.TOO_MANY_REQUESTS)
public void handleRequestNotPermitted() {
}
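Again for reference, a minimal programmatic sketch of the core rateLimiterApi settings (the health-indicator and event properties are specific to the Spring Boot integration and have no builder counterpart here):

import java.time.Duration;
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;

// Illustrative equivalent of the core rateLimiterApi properties
RateLimiterConfig rateLimiterConfig = RateLimiterConfig.custom()
    .limitForPeriod(5)                            // 5 calls per refresh period
    .limitRefreshPeriod(Duration.ofSeconds(60))   // refresh the limit every 60s
    .timeoutDuration(Duration.ZERO)               // don't wait for a permit
    .build();
RateLimiter rateLimiter = RateLimiter.of("rateLimiterApi", rateLimiterConfig);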
Lastly, let’s test our rate-limited API endpoint with 50 requests:
@Test
public void testRatelimiter() {
EXTERNAL_SERVICE.stubFor(WireMock.get("/api/external")
.willReturn(ok()));
Map<Integer, Integer> responseStatusCount = new ConcurrentHashMap<>();
IntStream.rangeClosed(1, 50)
.parallel()
.forEach(i -> {
ResponseEntity<String> response = restTemplate.getForEntity("/api/rate-limiter", String.class);
int statusCode = response.getStatusCodeValue();
responseStatusCount.merge(statusCode, 1, Integer::sum);
});
assertEquals(2, responseStatusCount.keySet().size());
assertTrue(responseStatusCount.containsKey(TOO_MANY_REQUESTS.value()));
assertTrue(responseStatusCount.containsKey(OK.value()));
EXTERNAL_SERVICE.verify(5, getRequestedFor(urlEqualTo("/api/external")));
}
As expected, only five requests were successful, whereas all the other requests failed with the TOO_MANY_REQUESTS HTTP status code.
8. Actuator Endpoints
We've configured our application to expose actuator endpoints for monitoring purposes. Using these endpoints, we can observe how the application behaves over time under one or more of the configured fault tolerance patterns.
Firstly, we can generally find all the exposed endpoints using a GET request to the /actuator endpoint:
http://localhost:8080/actuator/
{
"_links" : {
"self" : {...},
"bulkheads" : {...},
"circuitbreakers" : {...},
"ratelimiters" : {...},
...
}
}
We can see a JSON response with fields like bulkheads, circuitbreakers, ratelimiters, and so on, where each field provides specific information about the corresponding fault tolerance pattern.
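For instance, a GET request to the /actuator/circuitbreakers endpoint lists the configured circuit breaker instances by name. With our setup, the response should look roughly like this (illustrative):

http://localhost:8080/actuator/circuitbreakers
{
  "circuitBreakers" : [ "CircuitBreakerService" ]
}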
Next, let’s take a look at the fields associated with the retry pattern:
"retries": {
"href": "http://localhost:8080/actuator/retries",
"templated": false
},
"retryevents": {
"href": "http://localhost:8080/actuator/retryevents",
"templated": false
},
"retryevents-name": {
"href": "http://localhost:8080/actuator/retryevents/{name}",
"templated": true
},
"retryevents-name-eventType": {
"href": "http://localhost:8080/actuator/retryevents/{name}/{eventType}",
"templated": true
}
Moving on, let’s inspect the application to see the list of retry instances:
http://localhost:8080/actuator/retries
{
"retries" : [ "retryApi" ]
}
As expected, we can see the retryApi instance in the list of configured retry instances.
Finally, let’s make a GET request to the /api/retry API endpoint through a browser and observe the retry events using the /actuator/retryevents endpoint:
{
"retryEvents": [
{
"retryName": "retryApi",
"type": "RETRY",
"creationTime": "2022-10-16T10:46:31.950822+05:30[Asia/Kolkata]",
"errorMessage": "...",
"numberOfAttempts": 1
},
{
"retryName": "retryApi",
"type": "RETRY",
"creationTime": "2022-10-16T10:46:32.965661+05:30[Asia/Kolkata]",
"errorMessage": "...",
"numberOfAttempts": 2
},
{
"retryName": "retryApi",
"type": "ERROR",
"creationTime": "2022-10-16T10:46:33.978801+05:30[Asia/Kolkata]",
"errorMessage": "...",
"numberOfAttempts": 3
}
]
}
Since the downstream service is down, we can see three retry attempts with a wait time of 1s between any two attempts, just as we configured.
9. Conclusion
In this article, we learned how to use the Resilience4j library in a Spring Boot application. Additionally, we took a deep dive into several fault tolerance patterns, such as circuit breaker, retry, time limiter, bulkhead, and rate limiter.
As always, the complete source code for the tutorial is available over on GitHub.