Study Notes - Microservices - Tech Stack in Practice (11) - Distributed Transactions

Distributed Transaction Consistency

  In a microservice architecture, the system is split into multiple independent services, each with its own database. While this brings flexibility and scalability, it also introduces new problems, one of which is distributed transactions. A distributed transaction is a business operation that must span multiple services and databases to complete; the operations on all of those services and databases must either all succeed or all fail, so that the data stays consistent.

Challenges of Distributed Transactions

  Transaction management in a distributed system is far more complex than in a traditional monolith.

  First, the network is complex and unreliable. In a distributed system, services talk to each other over the network, and its unreliability adds to the complexity of transactions.

  Second, data consistency is a problem. Services may have different data sources and operations; keeping the data consistent across all of them is the central problem of distributed transactions.

  Third, isolation is a problem. When multiple transactions run concurrently, how do we avoid conflicts between them and inconsistent data?

  Finally, there is availability: how do we guarantee transactional consistency without sacrificing the system's high availability?

Common Solutions for Distributed Transactions

  Accordingly, there are several common ways to implement distributed transactions, including:

Two-Phase Commit (2PC)

  Phase one: the coordinator sends a "prepare" request to all participants; each participant executes its local transaction and locks the related resources, but does not commit.

  Phase two: if every participant is prepared, the coordinator sends a "commit" request and all participants commit their local transactions; otherwise it sends a "rollback" request and all participants roll back.

  Drawbacks: 2PC carries a large performance overhead, participants hold resource locks for a long time, and there is a single point of failure (the coordinator).
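
  The two phases just described can be sketched as a toy in-memory simulation. The Participant interface, runTransaction coordinator, and Account class below are names invented for this illustration, not part of any real framework:

```java
import java.util.List;

public class TwoPhaseCommitDemo {

    interface Participant {
        boolean prepare();  // phase 1: do the work tentatively, lock resources, vote
        void commit();      // phase 2: make the tentative work permanent
        void rollback();    // phase 2: undo the tentative work
    }

    // A participant that tentatively debits an account in prepare().
    static class Account implements Participant {
        int balance;
        final int debit;
        boolean prepared = false;

        Account(int balance, int debit) { this.balance = balance; this.debit = debit; }

        public boolean prepare() {
            if (balance < debit) return false;  // vote "no": cannot cover the debit
            balance -= debit;                   // tentative debit
            prepared = true;
            return true;                        // vote "yes"
        }

        public void commit() { prepared = false; }  // nothing left to do in this toy

        public void rollback() {
            if (prepared) { balance += debit; prepared = false; }  // undo the debit
        }
    }

    // The coordinator: commit only if every participant votes yes.
    static boolean runTransaction(List<? extends Participant> participants) {
        for (Participant p : participants) {
            if (!p.prepare()) {                           // any "no" aborts everything
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        participants.forEach(Participant::commit);
        return true;
    }

    public static void main(String[] args) {
        Account a = new Account(100, 50);
        Account b = new Account(10, 50);  // cannot cover its debit
        boolean committed = runTransaction(List.of(a, b));
        // a's tentative debit is rolled back, so both balances are untouched
        System.out.println(committed + " " + a.balance + " " + b.balance);
    }
}
```

  Note how the prepared locks ("tentative" state) persist until phase two either way; that holding pattern is exactly the long resource-lock cost described above.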

Three-Phase Commit (3PC)

  Compared with 2PC, 3PC adds a timeout mechanism and an intermediate phase, further reducing the single-point-of-failure problem, but it still has performance and complexity issues.

TCC (Try-Confirm-Cancel) Pattern

  Try phase: attempt the business operation and reserve the necessary resources.

  Confirm phase: once every service's Try has succeeded, confirm the business operation and actually commit the transaction.

  Cancel phase: if the Try phase fails, run a compensating operation to release the reserved resources.

  Advantages: TCC suits business scenarios that require tight control over the transaction and offers high flexibility, but developers must write the compensation logic by hand.

  Beyond these, there are other approaches such as a local message table for eventual consistency, the Saga pattern (break a long transaction into multiple small ones, each triggering the next only after it completes; if one fails, compensating transactions are invoked to undo the ones already finished), and so on.
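
  As a hedged sketch of the Try/Confirm/Cancel contract, the interface and in-memory inventory participant below use invented names (TccAction, InventoryTcc); note the guard in cancel() against the classic "empty rollback" case, where Cancel arrives for a Try that never reserved anything:

```java
public class TccDemo {

    interface TccAction {
        boolean tryAction();  // Try: reserve resources
        void confirm();       // Confirm: consume the reservation, commit for real
        void cancel();        // Cancel: release the reservation (compensation)
    }

    // An inventory participant: Try moves stock into a reservation rather than
    // selling it outright, so Confirm and Cancel are both cheap and safe.
    static class InventoryTcc implements TccAction {
        int stock, reserved;
        final int amount;

        InventoryTcc(int stock, int amount) { this.stock = stock; this.amount = amount; }

        public boolean tryAction() {
            if (stock < amount) return false;  // cannot reserve
            stock -= amount;
            reserved += amount;
            return true;
        }

        public void confirm() { reserved -= amount; }  // reservation becomes a sale

        public void cancel() {
            // guard against the "empty rollback": Cancel may arrive even though
            // Try never succeeded, and must then do nothing
            if (reserved >= amount) {
                stock += amount;
                reserved -= amount;
            }
        }
    }

    public static void main(String[] args) {
        InventoryTcc inv = new InventoryTcc(100, 10);
        if (inv.tryAction()) {
            inv.confirm();  // in a real TCC flow, only after every Try succeeded
        } else {
            inv.cancel();
        }
        System.out.println(inv.stock + " " + inv.reserved);
    }
}
```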
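
  The Saga idea in the parenthesis above can be sketched as a small orchestrator: each step pairs an action with a compensation, and when a step fails, the already-completed steps are compensated in reverse order. All names here (Step, runSaga) are invented for this sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class SagaDemo {

    // One saga step: a forward action, its compensation, and (for this toy)
    // a flag saying whether the local transaction fails.
    record Step(String name, Runnable action, Runnable compensation, boolean fails) {}

    static final StringBuilder trace = new StringBuilder();

    // Run steps in order; on failure, compensate completed steps in reverse.
    static boolean runSaga(List<Step> steps) {
        Deque<Step> done = new ArrayDeque<>();
        for (Step s : steps) {
            if (s.fails()) {
                while (!done.isEmpty()) done.pop().compensation().run();
                return false;
            }
            s.action().run();
            done.push(s);  // remember for possible compensation
        }
        return true;
    }

    public static void main(String[] args) {
        boolean ok = runSaga(List.of(
                new Step("reserveStock",
                        () -> trace.append("reserve;"), () -> trace.append("unreserve;"), false),
                new Step("debitAccount",
                        () -> trace.append("debit;"), () -> trace.append("refund;"), true)));
        // the failed debit triggers the stock compensation
        System.out.println(ok + " " + trace);
    }
}
```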

Seata

  Seata is an open-source distributed transaction solution from Alibaba, designed to address distributed transactions in microservice architectures. It provides a simple, effective solution with four modes: AT (Automatic Transaction), TCC, Saga, and XA, covering distributed transaction scenarios from simple to complex.

Seata's Architecture

  Seata's core architecture consists of three components:

  TM (Transaction Manager): responsible for beginning, committing, and rolling back the global transaction.

  RM (Resource Manager): manages local transactions and resources, and coordinates commit or rollback with the TC.

  TC (Transaction Coordinator): the coordinator of the global transaction; it maintains transaction state and drives the commit or rollback.

  Within Seata, AT mode is one of the core modes. It hides the complexity of 2PC: developers write ordinary business logic, and Seata automatically turns the SQL operations into a two-phase commit flow.
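
  A minimal sketch of that idea, using invented names and a plain map in place of a real database: before a branch runs its update, a "before image" is saved to an undo log; a global commit just discards the log, while a global rollback replays the before images. Seata's real undo_log handling is far more involved than this:

```java
import java.util.HashMap;
import java.util.Map;

public class AtModeUndoDemo {

    static final Map<Long, Integer> stockTable = new HashMap<>();  // id -> stock
    static final Map<Long, Integer> undoLog = new HashMap<>();     // id -> before image

    // Branch transaction: save the before image, then apply the update.
    // The local transaction commits immediately, as in AT's first phase.
    static void branchUpdate(long id, int delta) {
        undoLog.put(id, stockTable.get(id));
        stockTable.put(id, stockTable.get(id) + delta);
    }

    // Second phase, commit: the work is already durable, just drop the log.
    static void globalCommit() { undoLog.clear(); }

    // Second phase, rollback: replay the before images.
    static void globalRollback() {
        undoLog.forEach(stockTable::put);
        undoLog.clear();
    }

    public static void main(String[] args) {
        stockTable.put(1L, 100);
        branchUpdate(1L, -10);   // locally committed: stock is now 90
        globalRollback();        // some other branch failed: restore to 100
        System.out.println(stockTable.get(1L));
    }
}
```

  This is also why AT's first phase is cheap: branches commit locally right away and only the undo log is kept, at the price of needing global locks to keep the before images valid.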

Seata's Four Modes

  AT mode resembles 2PC and automatically generates rollback logs to support distributed transactions. It suits database-backed distributed transaction scenarios: developers do not need to hand-write complex transaction control logic, as the system generates rollback logs and performs transaction recovery automatically.

  TCC is a flexible distributed transaction mode: developers define Try, Confirm, and Cancel methods for each operation, performing the attempt, the confirmation, and the rollback respectively. It offers great flexibility and suits complex business logic, but requires a lot of custom code.

  Seata also supports the Saga mode, suited to long-running transactions; it relies mainly on compensation to handle rollback after a failure.

  XA is a distributed transaction protocol defined by the X/Open standard; through XA mode, Seata implements database-level distributed transactions. It suits scenarios that need strong consistency, but performance is comparatively poor and resources stay locked for a long time.

Deploying Seata

  First, download and start Seata; a simple Docker-based deployment is recommended.

docker pull seataio/seata-server
docker run -d --name seata-server -p 8091:8091 seataio/seata-server
docker logs seata-server
[SEATA ASCII-art banner]


21:18:36.340 INFO --- [ main] [ta.config.ConfigurationFactory] [ load] [] : load Configuration from :Spring Configuration
21:18:36.355 INFO --- [ main] [ta.config.ConfigurationFactory] [ buildConfiguration] [] : load Configuration from :Spring Configuration
21:18:36.381 INFO --- [ main] [seata.server.ServerApplication] [ logStarting] [] : Starting ServerApplication using Java 1.8.0_342 on 7583cdde42ef with PID 1 (/seata-server/classes started by root in /seata-server)
21:18:36.382 INFO --- [ main] [seata.server.ServerApplication] [ogStartupProfileInfo] [] : No active profile set, falling back to 1 default profile: "default"
21:18:38.043 INFO --- [ main] [mbedded.tomcat.TomcatWebServer] [ initialize] [] : Tomcat initialized with port(s): 7091 (http)
21:18:38.088 INFO --- [ main] [oyote.http11.Http11NioProtocol] [ log] [] : Initializing ProtocolHandler ["http-nio-7091"]
21:18:38.089 INFO --- [ main] [.catalina.core.StandardService] [ log] [] : Starting service [Tomcat]
21:18:38.089 INFO --- [ main] [e.catalina.core.StandardEngine] [ log] [] : Starting Servlet engine: [Apache Tomcat/9.0.62]
21:18:38.233 INFO --- [ main] [rBase.[Tomcat].[localhost].[/]] [ log] [] : Initializing Spring embedded WebApplicationContext
21:18:38.233 INFO --- [ main] [letWebServerApplicationContext] [ebApplicationContext] [] : Root WebApplicationContext: initialization completed in 1792 ms
21:18:39.043 INFO --- [ main] [vlet.WelcomePageHandlerMapping] [ <init>] [] : Adding welcome page: class path resource [static/index.html]
21:18:39.472 INFO --- [ main] [oyote.http11.Http11NioProtocol] [ log] [] : Starting ProtocolHandler ["http-nio-7091"]
21:18:39.502 INFO --- [ main] [mbedded.tomcat.TomcatWebServer] [ start] [] : Tomcat started on port(s): 7091 (http) with context path ''
21:18:39.511 INFO --- [ main] [seata.server.ServerApplication] [ logStarted] [] : Started ServerApplication in 3.941 seconds (JVM running for 4.541)
21:18:39.846 INFO --- [ main] [a.server.session.SessionHolder] [ init] [] : use session store mode: file
21:18:39.868 INFO --- [ main] [rver.lock.LockerManagerFactory] [ init] [] : use lock store mode: file
21:18:39.988 INFO --- [ main] [rpc.netty.NettyServerBootstrap] [ start] [] : Server started, service listen port: 8091
21:18:40.013 INFO --- [ main] [io.seata.server.ServerRunner ] [ run] [] :
you can visit seata console UI on http://127.0.0.1:7091.
log path: /root/logs/seata.
21:18:40.013 INFO --- [ main] [io.seata.server.ServerRunner ] [ run] [] : seata server started in 500 millSeconds
OpenJDK 64-Bit Server VM warning: Cannot open file /root/logs/seata/seata_gc.log due to No such file or directory

  Next, configure it:

# create the configuration directory
mkdir -p /home/docker_home/seata/seata-data
# copy the default configuration files out of the container
docker cp seata-server:/seata-server/resources /home/docker_home/seata/seata-data
# remove the container
docker rm -f seata-server

  Next, given that Seata is Alibaba's open-source distributed transaction solution, it makes sense to register and configure it directly on Nacos. First download the official config.txt: https://github.com/apache/incubator-seata/tree/develop/script/config-center

  Here, the MySQL-backed store is configured as follows:

#Transaction storage configuration, only for the server. The file, db, and redis configuration values are optional.
store.mode=db
store.lock.mode=db
store.session.mode=db
#Used for password encryption
store.publicKey=

#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

  Import the config.txt above into Nacos and name it seata-server.properties, choosing properties as the configuration format.

  Among the files copied out earlier, find resources/application.yml and modify it according to your actual Nacos setup.

seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos
    nacos:
      server-addr: 192.168.186.1:8848
      username: nacos
      password: nacos
      namespace: 3bc00f76-05de-4fa5-bdbe-06c57ad5c31c
      data-id: seata-server.properties
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    nacos:
      application: seata-server
      server-addr: 192.168.186.1:8848
      username: nacos
      password: nacos
      namespace: 3bc00f76-05de-4fa5-bdbe-06c57ad5c31c

  Then go to https://github.com/apache/incubator-seata/tree/develop/script/server/db

  and download the table definitions MySQL needs. Create the seata database and import the SQL:

+------------------+
| Tables_in_seata  |
+------------------+
| branch_table     |
| distributed_lock |
| global_table     |
| lock_table       |
+------------------+
4 rows in set (0.00 sec)

  global_table: the global transaction table; whenever a global transaction starts, its global transaction ID is recorded here.

  branch_table: the branch transaction table; records each branch transaction's ID, which database the branch operates on, and so forth.

  lock_table: global locks.

  distributed_lock: distributed locks.

  Then restart Seata:

docker run --name seata-server -d -p 8091:8091 -p 7091:7091 -v /home/docker_home/seata/seata-data/resources:/seata-server/resources  seataio/seata-server

  Open the Seata console at localhost:7091

Seata console UI
  Log in with username seata and password seata. With that, the Seata deployment based on Nacos and MySQL is complete. One more thing to watch out for: every database participating in a distributed transaction needs an undo_log table, which serves as the basis for rollback when a distributed transaction hits an exception:

CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

Using Seata

  With Seata configured, the next sections use the AT mode and the TCC mode as examples to show how Seata handles distributed transactions.

Building the Example

  The example used here: deduct product stock + debit the user's account.

  First, create three data tables: product, account, and orders, i.e. the product, account, and order tables.

CREATE TABLE product (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50),
  stock INT
);

CREATE TABLE account (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  user_id BIGINT,
  balance DECIMAL(10, 2)
);

CREATE TABLE orders (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  user_id BIGINT,
  product_id BIGINT,
  status VARCHAR(20)
);

  On top of this, implement three simple services: ProductService, which deducts product stock; AccountService, which debits the user's account balance; and OrderService, which creates the order and drives the whole transaction.

  See the code at https://github.com/gagaducko/learning_demos/tree/main/seata-demo

  In ProductService, stock deduction looks like this:

public void reduceStock(Long productId, int amount) {
    String sql = "UPDATE product SET stock = stock - ? WHERE id = ? AND stock >= ?";
    int updatedRows = jdbcTemplate.update(sql, amount, productId, amount);
    if (updatedRows == 0) {
        throw new RuntimeException("库存不足");  // "insufficient stock"
    }
}

  In AccountService, debiting the account balance looks like this:

public void debit(Long userId, BigDecimal amount) {
    String sql = "UPDATE account SET balance = balance - ? WHERE user_id = ? AND balance >= ?";
    int updatedRows = jdbcTemplate.update(sql, amount, userId, amount);
    if (updatedRows == 0) {
        throw new RuntimeException("余额不足");  // "insufficient balance"
    }
}

  In OrderService, the order-placing flow is as follows:

public class OrderService {

    @Autowired
    private ProductService productService;

    @Autowired
    private AccountService accountService;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @GlobalTransactional
    public void createOrder(Long userId, Long productId, int count, BigDecimal price) {
        log.info("Creating order for userId={}, productId={}, count={}, price={}", userId, productId, count, price);

        // deduct stock
        try {
            productService.reduceStock(productId, count);
        } catch (Exception e) {
            throw new RuntimeException("库存扣减失败,回滚事务:", e);  // "stock deduction failed, rolling back"
        }

        // debit the account balance
        BigDecimal totalPrice = price.multiply(BigDecimal.valueOf(count));

        try {
            accountService.debit(userId, totalPrice);
        } catch (Exception e) {
            throw new RuntimeException("账户余额扣减失败,回滚事务:", e);  // "debit failed, rolling back"
        }

        // simulate an error: the transaction should roll back
        if (totalPrice.compareTo(BigDecimal.ZERO) < 0) {
            throw new RuntimeException("非法金额,事务回滚");  // "illegal amount, rolling back"
        }

        // create the order
        String sql = "INSERT INTO orders (user_id, product_id, status) VALUES (?, ?, ?)";
        jdbcTemplate.update(sql, userId, productId, "CREATED");
    }
}

  As you can see, placing an order calls the two functions that deduct stock and debit the account, and inserts an order row into the orders table.

  In such an operation it is clearly essential that the steps either all succeed or all fail. Without a transaction, if the account balance is insufficient, the stock deduction succeeds while the debit does not, so stock and orders no longer match and later use runs into logical or data-level confusion; the converse case is just as bad.

  Because services are distributed in a microservice environment, local transactions alone cannot meet this requirement, and this is where Seata's value shows.
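
  A minimal sketch of that failure mode, with invented names and plain static fields standing in for the two databases: the stock deduction takes effect immediately, the debit throws, and nothing undoes the first step:

```java
public class NoTransactionDemo {

    static int stock = 100;   // stands in for the product database
    static int balance = 50;  // stands in for the account database

    static void debit(int amount) {
        if (balance < amount) throw new RuntimeException("insufficient balance");
        balance -= amount;
    }

    public static void main(String[] args) {
        try {
            stock -= 2;   // step 1 succeeds and is immediately visible
            debit(200);   // step 2 fails
        } catch (RuntimeException e) {
            // without a surrounding transaction, nothing undoes step 1
        }
        // stock was deducted but no debit and no order happened: inconsistent
        System.out.println(stock + " " + balance);
    }
}
```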

Seata's AT Mode

  AT mode is an evolution of two-phase commit. It first opens a global transaction; each participant executes its local transaction within it and registers a branch transaction under it, and Seata takes locks for those branches in the background to keep data consistent. While each branch executes, Seata records its state and the related SQL operations. If every branch succeeds, Seata sends a commit request to each participant and the changes are persisted to the database; conversely, if any branch fails, Seata sends a rollback request to all participants to undo the operations already applied.

  AT mode is the default and needs only simple configuration. Its drawbacks: in some situations AT mode demands strong transaction isolation, which can hurt the system's concurrency; in addition, it requires the underlying database to support undo, i.e. already-applied changes can be reverted on rollback, so it mainly fits relational databases rather than some non-relational ones. AT mode may also fall short in certain complex business scenarios, which then call for other distributed transaction solutions.

  Below is an example of using AT mode.

  All three services need to add the Seata dependency:

<!-- choose a suitable version -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>2022.0.0.0-RC2</version>
</dependency>

  On this basis, add the corresponding settings to the properties file:

seata.tx-service-group=my_tx_group
seata.enabled=true
seata.service.vgroup-mapping.my_tx_group=default
seata.service.disable-global-transaction=false
seata.client.rm.async-commit-buffer-limit=10000
seata.client.rm.lock.retry-policy-branch-rollback-on-conflict=true
seata.client.tm.commit-retry-count=5
seata.client.tm.rollback-retry-count=5
seata.service.grouplist.default=127.0.0.1:8091

  In this configuration:

  • seata.tx-service-group=my_tx_group
    • Specifies the service group name for global transactions. In Seata, a transaction service group identifies the set of microservices participating in the same global transactions.
    • Make sure all related services use the same group name so that transactions are coordinated correctly.
    • If they do not, you will find in practice that every service carries its own xid, and transactional consistency cannot be guaranteed.
  • seata.enabled=true
    • Enables the Seata client. true turns Seata on and lets the application use distributed transactions.
  • seata.service.vgroup-mapping.my_tx_group=default
    • Maps the virtual group to an actual transaction service group; here my_tx_group maps to the default group.
    • default is usually Seata's default cluster; make sure the group exists in the Seata server-side configuration.
  • seata.service.disable-global-transaction=false
    • Controls whether global transactions are disabled; false means global transactions are enabled.
  • seata.client.rm.async-commit-buffer-limit=10000
    • Sets the Resource Manager's buffer limit for asynchronous commits, i.e. the maximum number of transactions that may be held while committing asynchronously. Too high may increase memory consumption; too low may hurt performance.
    • Adjust this value to the application's load.
  • seata.client.rm.lock.retry-policy-branch-rollback-on-conflict=true
    • Controls whether the Resource Manager rolls a branch transaction back automatically on conflict. true means that on a branch conflict Seata will try to roll that branch back.
    • This is an important setting; it can noticeably reduce transaction failures caused by lock contention.
  • seata.client.tm.commit-retry-count=5
    • Sets how many times the Transaction Manager retries a commit. If a commit fails, the TM retries until the count is reached.
    • Adjust to the business need so that transactions still manage to commit through network jitter and similar problems.
  • seata.client.tm.rollback-retry-count=5
    • Sets how many times the Transaction Manager retries a rollback; like the commit retries, it ensures a failed rollback is attempted again.
    • Likewise adjustable as needed.
  • seata.service.grouplist.default=127.0.0.1:8091
    • Specifies the Seata Server address and port; 127.0.0.1:8091 here means the Seata Server runs locally on port 8091.

  While configuring, make sure the network settings are correct, e.g. that the Seata Server address in seata.service.grouplist.default is reachable from the application; especially in Docker and other virtualized environments, IP addresses and network configuration may need adjusting.

  Also, in real business settings, test async-commit-buffer-limit and the retry counts to find the right balance between performance and reliability.

  Additionally, when using RestTemplate, the following setup is needed so that the transaction id is propagated between upstream and downstream:

@Bean
public RestTemplate restTemplate() {
    RestTemplate restTemplate = new RestTemplate();
    List<ClientHttpRequestInterceptor> interceptors = restTemplate.getInterceptors();
    interceptors.add(new SeataRestTemplateInterceptor());
    restTemplate.setInterceptors(interceptors);
    return restTemplate;
}

  This registers a Seata interceptor, SeataRestTemplateInterceptor, so that remote calls participate in the distributed transaction.

  Beyond that, make sure the functions involved in the transaction actually throw an exception when an error occurs, for example:
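
  What such an interceptor does, conceptually, is copy the current global transaction id (XID) into a request header before the call goes out, so the downstream service binds to the same global transaction. The sketch below is framework-free and uses invented names; TX_XID is the header name Seata conventionally uses, and the ThreadLocal stands in for Seata's RootContext:

```java
import java.util.HashMap;
import java.util.Map;

public class XidPropagationDemo {

    // stands in for io.seata.core.context.RootContext in this sketch
    static final ThreadLocal<String> currentXid = new ThreadLocal<>();

    // What the interceptor conceptually does before each outgoing request:
    // if a global transaction is active, stamp its XID onto the headers.
    static Map<String, String> intercept(Map<String, String> headers) {
        String xid = currentXid.get();
        if (xid != null) {
            headers.put("TX_XID", xid);
        }
        return headers;
    }

    public static void main(String[] args) {
        currentXid.set("172.17.0.6:8091:54617088302346381");  // xid format seen in the logs
        Map<String, String> headers = intercept(new HashMap<>());
        System.out.println(headers.get("TX_XID"));
    }
}
```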

try {
    // ……
} catch (Exception e) {
    throw new RuntimeException(e);
}

  If no exception is thrown and the error is handled silently instead, Seata may assume the problem was already dealt with rather than being an error that requires rollback, and the transaction will then not roll back when things go wrong. For example:

e.printStackTrace();

  Once configured, add the @GlobalTransactional annotation to the business-layer methods, making sure all related services (including AccountService, OrderService, ProductService, and so on) join the same global transaction.

@GlobalTransactional

  With that, a simple example of distributed transaction consistency built on Seata is complete.

  To verify it, first set the account data in the database as follows:

| id | user_id | balance |
|----|---------|---------|
| 1  | 1       | 1000    |

  Set the product data in the database as follows:

| id | name     | stock |
|----|----------|-------|
| 1  | 小鸭玩具 | 100   |

  Call the endpoint http://localhost:8082/create?userId=1&productId=1&count=2&price=100

  Call it five times in a row: the balance drops to 0 and the product's stock drops to 90. Then call it once more, this time in debug mode with a breakpoint set, and the following can be observed:

  Execution passes the stock deduction normally, since stock is still sufficient; at this point the stock has become 88.

  But when execution reaches the account debit, it takes the error-throwing path, because the balance is no longer sufficient.

  The relevant log output:

// OrderService
2024-10-10T17:27:54.717+08:00 INFO 26256 --- [nio-8082-exec-1] io.seata.tm.TransactionManagerHolder : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@282e8128
2024-10-10T17:27:54.726+08:00 INFO 26256 --- [nio-8082-exec-1] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [172.17.0.6:8091:54617088302346381]
2024-10-10T17:28:16.294+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : channel [id: 0xccd802ae, L:/127.0.0.1:60623 - R:/127.0.0.1:8091] read idle.
2024-10-10T17:28:16.294+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : channel [id: 0x7508a115, L:/127.0.0.1:60620 - R:/127.0.0.1:8091] read idle.
2024-10-10T17:28:16.295+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : will destroy channel:[id: 0x7508a115, L:/127.0.0.1:60620 - R:/127.0.0.1:8091]
2024-10-10T17:28:16.296+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : will destroy channel:[id: 0xccd802ae, L:/127.0.0.1:60623 - R:/127.0.0.1:8091]
2024-10-10T17:28:16.297+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0x7508a115, L:/127.0.0.1:60620 - R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.297+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0xccd802ae, L:/127.0.0.1:60623 - R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.300+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.300+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.netty.NettyClientChannelManager : return to pool, rm channel:[id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.netty.NettyClientChannelManager : return to pool, rm channel:[id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : channel valid false,channel:[id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : channel valid false,channel:[id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : will destroy channel:[id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : will destroy channel:[id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.300+08:00 INFO 26256 --- [nio-8082-exec-1] d.g.orderservice.service.OrderService : Creating order for userId=1, productId=1, count=2, price=100
2024-10-10T17:28:16.301+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.302+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : channel inactive: [id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.302+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.302+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : channel valid false,channel:[id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.302+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : will destroy channel:[id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]
2024-10-10T17:28:16.302+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.302+08:00 INFO 26256 --- [ctor_RMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0xccd802ae, L:/127.0.0.1:60623 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:16.302+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : channel inactive: [id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]
2024-10-10T17:28:30.326+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : channel valid false,channel:[id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]
2024-10-10T17:28:30.326+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory : will destroy channel:[id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]
2024-10-10T17:28:30.327+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:30.327+08:00 INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient : ChannelHandlerContext(AbstractNettyRemotingClient$ClientHandler#0, [id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]) will closed
2024-10-10T17:28:30.338+08:00 INFO 26256 --- [or-localhost-13] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Receive server push request, request = ClientDetectionRequest, requestId = 2
2024-10-10T17:28:30.338+08:00 INFO 26256 --- [or-localhost-13] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Ack server push request, request = ClientDetectionRequest, requestId = 2
2024-10-10T17:28:30.341+08:00 ERROR 26256 --- [or-localhost-13] c.a.n.c.remote.client.grpc.GrpcClient : [1728552462212_127.0.0.1_60634]Request stream onCompleted, switch server
2024-10-10T17:28:30.348+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Server healthy check fail, currentConnection = 1728552462212_127.0.0.1_60634
2024-10-10T17:28:30.348+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Try to reconnect to a new server, server is not appointed, will choose a random server.
2024-10-10T17:28:30.348+08:00 INFO 26256 --- [t.remote.worker] c.a.n.c.remote.client.grpc.GrpcClient : grpc client connection server:localhost ip,serverPort:9848,grpcTslConfig:{"sslProvider":"OPENSSL","enableTls":false,"mutualAuthEnable":false,"trustAll":false}
2024-10-10T17:28:31.172+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Success to connect a server [localhost:8848], connectionId = 1728552510355_127.0.0.1_60700
2024-10-10T17:28:31.173+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Abandon prev connection, server is localhost:8848, connectionId is 1728552462212_127.0.0.1_60634
2024-10-10T17:28:31.174+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : Close current connection 1728552462212_127.0.0.1_60634
2024-10-10T17:28:33.891+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Notify disconnected event to listeners
2024-10-10T17:28:33.891+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Try to reconnect to a new server, server is not appointed, will choose a random server.
2024-10-10T17:28:33.891+08:00 WARN 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : Grpc connection disconnect, mark to redo
2024-10-10T17:28:33.891+08:00 INFO 26256 --- [t.remote.worker] c.a.n.c.remote.client.grpc.GrpcClient : grpc client connection server:localhost ip,serverPort:9848,grpcTslConfig:{"sslProvider":"OPENSSL","enableTls":false,"mutualAuthEnable":false,"trustAll":false}
2024-10-10T17:28:33.893+08:00 WARN 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : mark to redo completed
2024-10-10T17:28:33.894+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Notify connected event to listeners.
2024-10-10T17:28:33.894+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : Grpc connection connect
2024-10-10T17:29:20.425+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Success to connect a server [localhost:8848], connectionId = 1728552513905_127.0.0.1_60710
2024-10-10T17:29:20.425+08:00 WARN 26256 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=1m4s131ms426µs700ns).
2024-10-10T17:29:20.426+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Abandon prev connection, server is localhost:8848, connectionId is 1728552510355_127.0.0.1_60700
2024-10-10T17:29:20.426+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : Close current connection 1728552510355_127.0.0.1_60700
2024-10-10T17:29:20.426+08:00 INFO 26256 --- [ing.grpc.redo.0] com.alibaba.nacos.client.naming : Redo instance operation REGISTER for DEFAULT_GROUP@@OrderService
2024-10-10T17:29:20.427+08:00 INFO 26256 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2024-10-10T17:29:20.428+08:00 INFO 26256 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2024-10-10T17:29:20.429+08:00 INFO 26256 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient : RM will register :jdbc:mysql://localhost:3306/seatademo
2024-10-10T17:29:20.430+08:00 INFO 26256 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='OrderService', transactionServiceGroup='my_tx_group'} >
2024-10-10T17:29:20.431+08:00 INFO 26256 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='jdbc:mysql://localhost:3306/seatademo', applicationId='OrderService', transactionServiceGroup='my_tx_group'} >
2024-10-10T17:29:20.441+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Notify disconnected event to listeners
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [or-localhost-31] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Receive server push request, request = ClientDetectionRequest, requestId = 3
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [or-localhost-31] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Ack server push request, request = ClientDetectionRequest, requestId = 3
2024-10-10T17:29:22.764+08:00 INFO 26256 --- [or-localhost-30] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Receive server push request, request = ClientDetectionRequest, requestId = 4
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [or-localhost-30] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Ack server push request, request = ClientDetectionRequest, requestId = 4
2024-10-10T17:29:22.765+08:00 WARN 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : Grpc connection disconnect, mark to redo
2024-10-10T17:29:22.765+08:00 WARN 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : mark to redo completed
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Notify connected event to listeners.
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : Grpc connection connect
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient : register TM success. client version:1.7.0-native-rc2, server version:1.8.0,channel:[id: 0x6c4ad3a9, L:/127.0.0.1:60751 - R:/127.0.0.1:8091]
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient : register RM success. client version:1.7.0-native-rc2, server version:1.8.0,channel:[id: 0x05e526e6, L:/127.0.0.1:60750 - R:/127.0.0.1:8091]
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 2330 ms, version:1.8.0,role:TMROLE,channel:[id: 0x6c4ad3a9, L:/127.0.0.1:60751 - R:/127.0.0.1:8091]
2024-10-10T17:29:22.765+08:00 INFO 26256 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 2330 ms, version:1.8.0,role:RMROLE,channel:[id: 0x05e526e6, L:/127.0.0.1:60750 - R:/127.0.0.1:8091]
2024-10-10T17:29:22.767+08:00 ERROR 26256 --- [or-localhost-30] c.a.n.c.remote.client.grpc.GrpcClient : [1728552513905_127.0.0.1_60710]Request stream onCompleted, switch server
2024-10-10T17:29:22.767+08:00 INFO 26256 --- [or-localhost-31] c.a.n.c.remote.client.grpc.GrpcClient : [1728552510355_127.0.0.1_60700]Ignore complete event,isRunning:true,isAbandon=true
2024-10-10T17:29:22.768+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Try to reconnect to a new server, server is not appointed, will choose a random server.
2024-10-10T17:29:22.768+08:00 INFO 26256 --- [t.remote.worker] c.a.n.c.remote.client.grpc.GrpcClient : grpc client connection server:localhost ip,serverPort:9848,grpcTslConfig:{"sslProvider":"OPENSSL","enableTls":false,"mutualAuthEnable":false,"trustAll":false}
2024-10-10T17:29:22.774+08:00 ERROR 26256 --- [ing.grpc.redo.0] com.alibaba.nacos.common.remote.client : Send request fail, request = InstanceRequest{headers={accessToken=eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJuYWNvcyIsImV4cCI6MTcyODU3MDQ2MX0.qcV9lq74W9i45sPBsU2P58n_TvF9qICzC5XS2gDKSJjXN5u8tlTTtEumrM-NvuxI, app=unknown}, requestId='null'}, retryTimes = 0, errorMessage = java.util.concurrent.ExecutionException: com.alibaba.nacos.shaded.io.grpc.StatusRuntimeException: UNAVAILABLE: Channel shutdown invoked
2024-10-10T17:29:22.783+08:00 INFO 26256 --- [nio-8082-exec-1] i.seata.tm.api.DefaultGlobalTransaction : Suspending current transaction, xid = 172.17.0.6:8091:54617088302346381
2024-10-10T17:29:22.785+08:00 INFO 26256 --- [nio-8082-exec-1] i.seata.tm.api.DefaultGlobalTransaction : [172.17.0.6:8091:54617088302346381] rollback status: Finished
2024-10-10T17:29:22.802+08:00 ERROR 26256 --- [nio-8082-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: java.lang.RuntimeException: 账户余额扣减失败,回滚事务:] with root cause

org.springframework.web.client.HttpServerErrorException$InternalServerError: 500 : "{"timestamp":"2024-10-10T09:28:33.914+00:00","status":500,"error":"Internal Server Error","path":"/debit"}"
at org.springframework.web.client.HttpServerErrorException.create(HttpServerErrorException.java:103) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:186) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:137) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:915) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:864) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:764) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.client.RestTemplate.postForObject(RestTemplate.java:481) ~[spring-web-6.0.4.jar:6.0.4]
at demo.gagaduck.orderservice.feign.AccountService.debit(AccountService.java:23) ~[classes/:na]
at demo.gagaduck.orderservice.service.OrderService.createOrder(OrderService.java:41) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:569) ~[na:na]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:752) ~[spring-aop-6.0.4.jar:6.0.4]
at io.seata.spring.annotation.GlobalTransactionalInterceptor$2.execute(GlobalTransactionalInterceptor.java:204) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at io.seata.tm.api.TransactionalTemplate.execute(TransactionalTemplate.java:130) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at io.seata.spring.annotation.GlobalTransactionalInterceptor.handleGlobalTransaction(GlobalTransactionalInterceptor.java:201) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at io.seata.spring.annotation.GlobalTransactionalInterceptor.invoke(GlobalTransactionalInterceptor.java:171) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:752) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:703) ~[spring-aop-6.0.4.jar:6.0.4]
at demo.gagaduck.orderservice.service.OrderService$$SpringCGLIB$$0.createOrder(<generated>) ~[classes/:na]
at demo.gagaduck.orderservice.controller.TestController.createOrder(TestController.java:23) ~[classes/:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:569) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:207) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:152) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:884) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1080) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:973) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1011) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:914) ~[spring-webmvc-6.0.4.jar:6.0.4]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:731) ~[tomcat-embed-core-10.1.5.jar:6.0]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) ~[spring-webmvc-6.0.4.jar:6.0.4]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:814) ~[tomcat-embed-core-10.1.5.jar:6.0]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:223) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.4.jar:6.0.4]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.4.jar:6.0.4]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.4.jar:6.0.4]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:177) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:119) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:400) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:859) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1734) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]

2024-10-10T17:29:22.876+08:00 ERROR 26256 --- [ing.grpc.redo.0] com.alibaba.nacos.common.remote.client : Send request fail, request = InstanceRequest{headers={accessToken=eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJuYWNvcyIsImV4cCI6MTcyODU3MDQ2MX0.qcV9lq74W9i45sPBsU2P58n_TvF9qICzC5XS2gDKSJjXN5u8tlTTtEumrM-NvuxI, app=unknown}, requestId='null'}, retryTimes = 1, errorMessage = Client not connected, current status:UNHEALTHY
2024-10-10T17:29:22.904+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Success to connect a server [localhost:8848], connectionId = 1728552562785_127.0.0.1_60756
2024-10-10T17:29:22.904+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Abandon prev connection, server is localhost:8848, connectionId is 1728552513905_127.0.0.1_60710
2024-10-10T17:29:22.904+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : Close current connection 1728552513905_127.0.0.1_60710
2024-10-10T17:29:22.904+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Notify disconnected event to listeners
2024-10-10T17:29:22.904+08:00 WARN 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : Grpc connection disconnect, mark to redo
2024-10-10T17:29:22.904+08:00 WARN 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : mark to redo completed
2024-10-10T17:29:22.904+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Notify connected event to listeners.
2024-10-10T17:29:22.904+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.client.naming : Grpc connection connect
2024-10-10T17:29:22.980+08:00 ERROR 26256 --- [ing.grpc.redo.0] com.alibaba.nacos.common.remote.client : Send request fail, request = InstanceRequest{headers={accessToken=eyJhbGciOiJIUzM4NCJ9.eyJzdWIiOiJuYWNvcyIsImV4cCI6MTcyODU3MDQ2MX0.qcV9lq74W9i45sPBsU2P58n_TvF9qICzC5XS2gDKSJjXN5u8tlTTtEumrM-NvuxI, app=unknown}, requestId='null'}, retryTimes = 2, errorMessage = Client not connected, current status:UNHEALTHY
2024-10-10T17:29:22.980+08:00 ERROR 26256 --- [ing.grpc.redo.0] com.alibaba.nacos.client.naming : Redo instance operation REGISTER for DEFAULT_GROUP@@OrderService failed.

com.alibaba.nacos.api.exception.NacosException: Client not connected, current status:UNHEALTHY
at com.alibaba.nacos.common.remote.client.RpcClient.request(RpcClient.java:643) ~[nacos-client-2.2.1.jar:na]
at com.alibaba.nacos.common.remote.client.RpcClient.request(RpcClient.java:623) ~[nacos-client-2.2.1.jar:na]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.requestToServer(NamingGrpcClientProxy.java:357) ~[nacos-client-2.2.1.jar:na]
at com.alibaba.nacos.client.naming.remote.gprc.NamingGrpcClientProxy.doRegisterService(NamingGrpcClientProxy.java:210) ~[nacos-client-2.2.1.jar:na]
at com.alibaba.nacos.client.naming.remote.gprc.redo.RedoScheduledTask.processRegisterRedoType(RedoScheduledTask.java:102) ~[nacos-client-2.2.1.jar:na]
at com.alibaba.nacos.client.naming.remote.gprc.redo.RedoScheduledTask.redoForInstance(RedoScheduledTask.java:79) ~[nacos-client-2.2.1.jar:na]
at com.alibaba.nacos.client.naming.remote.gprc.redo.RedoScheduledTask.redoForInstances(RedoScheduledTask.java:61) ~[nacos-client-2.2.1.jar:na]
at com.alibaba.nacos.client.naming.remote.gprc.redo.RedoScheduledTask.run(RedoScheduledTask.java:51) ~[nacos-client-2.2.1.jar:na]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) ~[na:na]
at java.base/java.util.concurrent.FutureTask.runAndReset$$$capture(FutureTask.java:305) ~[na:na]
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java) ~[na:na]
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]

2024-10-10T17:29:22.983+08:00 INFO 26256 --- [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308] Server check success, currentServer is localhost:8848
2024-10-10T17:29:25.982+08:00 INFO 26256 --- [ing.grpc.redo.0] com.alibaba.nacos.client.naming : Redo instance operation REGISTER for DEFAULT_GROUP@@OrderService
// ProductService
2024-10-10T17:28:55.850+08:00 INFO 4700 --- [h_RMROLE_1_7_32] i.s.c.r.p.c.RmBranchRollbackProcessor : rm handle branch rollback process:xid=172.17.0.6:8091:54617088302346381,branchId=54617088302346382,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seatademo,applicationData=null
2024-10-10T17:28:55.850+08:00 INFO 4700 --- [h_RMROLE_1_7_32] io.seata.rm.AbstractRMHandler : Branch Rollbacking: 172.17.0.6:8091:54617088302346381 54617088302346382 jdbc:mysql://localhost:3306/seatademo
2024-10-10T17:28:55.871+08:00 INFO 4700 --- [h_RMROLE_1_7_32] i.s.r.d.undo.AbstractUndoLogManager : xid 172.17.0.6:8091:54617088302346381 branch 54617088302346382, undo_log deleted with GlobalFinished
2024-10-10T17:28:55.872+08:00 INFO 4700 --- [h_RMROLE_1_7_32] io.seata.rm.AbstractRMHandler : Branch Rollbacked result: PhaseTwo_Rollbacked
// AccountService
2024-10-10T17:28:33.905+08:00 ERROR 5772 --- [nio-8081-exec-3] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: java.lang.RuntimeException: 余额不足] with root cause

java.lang.RuntimeException: 余额不足
at demo.gagaduck.accountservice.service.AccountService.debit(AccountService.java:21) ~[classes/:na]
at jdk.internal.reflect.GeneratedMethodAccessor21.invoke(Unknown Source) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:569) ~[na:na]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:752) ~[spring-aop-6.0.4.jar:6.0.4]
at io.seata.spring.annotation.GlobalTransactionalInterceptor$2.execute(GlobalTransactionalInterceptor.java:204) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at io.seata.tm.api.TransactionalTemplate.execute(TransactionalTemplate.java:130) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at io.seata.spring.annotation.GlobalTransactionalInterceptor.handleGlobalTransaction(GlobalTransactionalInterceptor.java:201) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at io.seata.spring.annotation.GlobalTransactionalInterceptor.invoke(GlobalTransactionalInterceptor.java:171) ~[seata-all-1.7.0-native-rc2.jar:1.7.0-native-rc2]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:752) ~[spring-aop-6.0.4.jar:6.0.4]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:703) ~[spring-aop-6.0.4.jar:6.0.4]
at demo.gagaduck.accountservice.service.AccountService$$SpringCGLIB$$0.debit(<generated>) ~[classes/:na]
at demo.gagaduck.accountservice.controller.TestController.debit(TestController.java:20) ~[classes/:na]
at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:569) ~[na:na]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:207) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:152) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:884) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1080) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:973) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1011) ~[spring-webmvc-6.0.4.jar:6.0.4]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:914) ~[spring-webmvc-6.0.4.jar:6.0.4]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:731) ~[tomcat-embed-core-10.1.5.jar:6.0]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885) ~[spring-webmvc-6.0.4.jar:6.0.4]
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:814) ~[tomcat-embed-core-10.1.5.jar:6.0]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:223) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.4.jar:6.0.4]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.4.jar:6.0.4]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-6.0.4.jar:6.0.4]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.0.4.jar:6.0.4]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:185) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:158) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:177) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:119) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:357) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:400) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:859) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1734) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-10.1.5.jar:10.1.5]
at java.base/java.lang.Thread.run(Thread.java:840) ~[na:na]

  可以从日志看到,首先,Seata分布式事务管理启动,TransactionManager的单例实例被创建,它是Seata事务管理的核心组件。随后,开启了一个全新的全局事务,并给出了该事务的唯一标识:172.17.0.6:8091:54617088302346381。

  随后,订单服务开始尝试创建新的订单,但执行失败,抛出了一个运行时异常。异常原因是账户余额扣减失败,这触发了全局事务的回滚。
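  这个“一个分支失败、整体回滚”的控制流可以用一段极简的本地模拟来示意(注意:这不是 Seata 的真实 API,createOrder / debit 的方法名取自上文堆栈,内部实现和数值均为假设的演示):

```java
import java.util.ArrayList;
import java.util.List;

public class GlobalRollbackDemo {
    // 模拟账户余额与订单表(数值为演示用的假设值)
    static long balance = 50;
    static List<String> orders = new ArrayList<>();

    // 对应日志中的 AccountService.debit:余额不足时抛出运行时异常
    static void debit(long amount) {
        if (balance < amount) {
            throw new RuntimeException("余额不足");
        }
        balance -= amount;
    }

    // 对应日志中的 OrderService.createOrder:任一分支失败则撤销已做的修改,模拟全局回滚
    static void createOrder(String orderId, long price) {
        orders.add(orderId);          // 分支一:先写订单
        try {
            debit(price);             // 分支二:扣减账户余额
        } catch (RuntimeException e) {
            orders.remove(orderId);   // 模拟全局回滚:撤销订单分支
            throw new RuntimeException("账户余额扣减失败,回滚事务:", e);
        }
    }

    public static void main(String[] args) {
        try {
            createOrder("order-1", 100); // 余额 50 < 100,触发回滚
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
        System.out.println("orders=" + orders.size() + ", balance=" + balance);
    }
}
```

  真实环境中,这个回滚由 Seata 的 TC 协调各分支完成,业务代码只需在入口方法上加 @GlobalTransactional 注解。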

2024-10-10T17:29:22.785+08:00  INFO 26256 --- [nio-8082-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : [172.17.0.6:8091:54617088302346381] rollback status: Finished

  这条日志显示全局事务已经完成了回滚,状态为 Finished。

  最后,再去数据库看一下,发现数量又变回了90,回滚了事务。
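  AT 模式之所以能把数据“变回”原值,是因为一阶段执行业务 SQL 时会同时把修改前的数据(前镜像)写入 undo_log 表,二阶段回滚时再用前镜像恢复。下面用一段简化的内存模拟来示意这个机制(非 Seata 真实实现,库存初始值 90 取自上文数据库回滚后的数量):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class UndoLogDemo {
    static int stock = 90;
    // 模拟 undo_log 表:保存每次修改前的值(前镜像)
    static Deque<Integer> undoLog = new ArrayDeque<>();

    // 一阶段:先记录前镜像,再执行业务修改(真实场景中是业务 SQL 与 undo_log 同一本地事务提交)
    static void deduct(int count) {
        undoLog.push(stock);
        stock -= count;
    }

    // 二阶段回滚:逆序用前镜像恢复数据,并清空 undo_log(对应日志中的 "undo_log deleted")
    static void rollback() {
        while (!undoLog.isEmpty()) {
            stock = undoLog.pop();
        }
    }

    public static void main(String[] args) {
        deduct(10);                 // 库存 90 -> 80
        rollback();                 // 回滚后恢复为 90
        System.out.println(stock);
    }
}
```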

  值得一提的是,日志中显示订单服务与 Seata 服务器之间的 Netty 通信通道曾出现非活动(inactive)状态,这可能是网络问题或其他原因导致的。

2024-10-10T17:28:16.302+08:00  INFO 26256 --- [ctor_TMROLE_1_1] i.s.c.r.n.AbstractNettyRemotingClient    : channel inactive: [id: 0x7508a115, L:/127.0.0.1:60620 ! R:/127.0.0.1:8091]

  而后又进行了重新连接:

2024-10-10T17:29:22.765+08:00 INFO 26256 [t.remote.worker] com.alibaba.nacos.common.remote.client : [4d4f0036-097f-451a-a833-3f49e0233308]

  其他可能导致事务不一致的情况也是类似的,比如:

  调用使用了非法的数据,如非法金额:http://localhost:8082/create?userId=1&productId=1&count=2&price=-100

  再比如,调用的时候库存不够但余额足够,也会出现相似的回滚情况。
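  对于非法参数(如 price=-100)这类情况,一个常见的做法是在开启全局事务之前就做参数校验,避免进入分布式事务后再回滚。下面是一个假设性的校验示意(参数名取自上文的调用 URL,校验逻辑为演示用的假设,演示工程中未必包含):

```java
public class ParamCheckDemo {
    // 在进入 @GlobalTransactional 方法之前校验参数,非法则直接拒绝请求
    static void validate(long count, long price) {
        if (count <= 0) throw new IllegalArgumentException("count 必须为正数");
        if (price <= 0) throw new IllegalArgumentException("price 必须为正数");
    }

    public static void main(String[] args) {
        try {
            validate(2, -100); // 对应 price=-100 的非法调用
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```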

Seata的TCC模式

  TCC模式,即Try-Confirm-Cancel模式,是一种经典的分布式事务解决方案:通过为每个分布式操作定义Try、Confirm和Cancel三个阶段,确保分布式系统中的数据一致性。

  TCC比起AT来说,需要开发者手动实现Try-Confirm-Cancel逻辑,复杂度更高;资源开销也不小,因为Try阶段需要锁定和预留资源。除此以外,为了保证事务的一致性,Confirm和Cancel阶段都必须保证幂等性,以防止重复执行时出现错误,这也是一个缺点。

  但是,TCC可以实现精细化的控制,非常适合有状态的业务,例如需要预留资源、锁定状态的业务场景,如库存管理、支付系统等。
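  以库存为例,TCC 的三个阶段和幂等控制可以用下面的内存示意来理解(非 Seata API,分支状态的记录方式为假设的简化实现;真实场景中状态通常落库,还需处理空回滚与悬挂):

```java
import java.util.HashMap;
import java.util.Map;

public class TccStockDemo {
    static int stock = 90;      // 实际库存
    static int reserved = 0;    // Try 阶段预留但尚未扣减的数量

    // branchId -> 状态("TRIED"/"CONFIRMED"/"CANCELED"),用于幂等判断
    static Map<String, String> branchState = new HashMap<>();
    static Map<String, Integer> branchCount = new HashMap<>();

    // Try:只预留资源,不真正扣减
    static boolean tryReserve(String branchId, int count) {
        if (stock - reserved < count) return false;   // 可用库存不足,Try 失败
        reserved += count;
        branchState.put(branchId, "TRIED");
        branchCount.put(branchId, count);
        return true;
    }

    // Confirm:真正扣减库存;重复调用不会重复生效(幂等)
    static void confirm(String branchId) {
        if (!"TRIED".equals(branchState.get(branchId))) return;
        int count = branchCount.get(branchId);
        reserved -= count;
        stock -= count;
        branchState.put(branchId, "CONFIRMED");
    }

    // Cancel:释放预留;重复 Cancel 或未 Try 过的分支直接返回(幂等)
    static void cancel(String branchId) {
        if (!"TRIED".equals(branchState.get(branchId))) return;
        reserved -= branchCount.get(branchId);
        branchState.put(branchId, "CANCELED");
    }
}
```

  可以看到,幂等性的关键是为每个事务分支记录状态,Confirm/Cancel 先检查状态再执行,这样即使 TC 因网络重试而重复下发指令也不会造成二次扣减。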


学习笔记—微服务—技术栈实践(11)—分布式事务
https://gagaducko.github.io/2024/10/10/学习笔记—微服务—技术栈实践-11-—分布式事务/
作者
gagaduck
发布于
2024年10月10日
许可协议