Background:
When troubleshooting the 无忧行 back-end service logs, a single request may execute several SQL statements. The SQL text and its parameters are often not printed next to each other, or there are so many parameters that each one has to be matched by hand. Reconstructing the complete SQL statement in those cases is time-consuming, so I wanted to streamline this process.
Main text:
When checking how SQL was executed, things get tedious whenever a query carries many parameters or several statements run inside the same request: every parameter value has to be matched against its placeholder and checked one by one, and sometimes the question marks have to be filled in manually before the statement can be run against the database to inspect the result. The logs look like the ones below:
Restoring the parameters one by one is slow, so I looked for a better way: implement a custom MyBatis-Plus InnerInterceptor that reassembles the SQL before it is logged, so the log already shows the statement with its parameter values substituted. The SQL from the screenshot above, once reassembled, turns into the following statement:
Here are the idea and the implementation, using the 无忧行 project as an example.
The approach: implement the InnerInterceptor interface provided by MyBatis-Plus, rewrite the SQL for logging there, register the implementation with the SqlSessionFactory, and intercept and print the statement right before it is executed.
A word about InnerInterceptor: it is a core interceptor interface of the MyBatis-Plus framework, an extension of MyBatis's own interceptor mechanism, used to intercept and enhance SQL execution.
How InnerInterceptor works: it plugs into MyBatis's plugin mechanism and intercepts at the following points (a minimal sketch of the hooks used here follows the list):
- beforeQuery: before a query is executed
- beforeUpdate: before an update is executed
- beforePrepare: during the SQL preparation phase
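For orientation, here is a minimal no-op sketch of the two hooks this article relies on (the signatures are the same ones used in the implementation in step 2; the interface also declares further hooks such as beforePrepare, all with default implementations, so a custom interceptor only overrides what it needs; NoOpInnerInterceptor is just an illustrative name):

import com.baomidou.mybatisplus.extension.plugins.inner.InnerInterceptor;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;

import java.sql.SQLException;

// Minimal skeleton: every hook has a default implementation in the interface,
// so an interceptor only overrides the methods it is interested in.
public class NoOpInnerInterceptor implements InnerInterceptor {

    @Override
    public void beforeQuery(Executor executor, MappedStatement ms, Object parameter, RowBounds rowBounds,
                            ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
        // runs before a SELECT; boundSql carries the SQL text and the parameter mappings
    }

    @Override
    public void beforeUpdate(Executor executor, MappedStatement ms, Object parameter) throws SQLException {
        // runs before an INSERT/UPDATE/DELETE
    }
}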
The query rewrite hooks into beforeQuery and the update rewrite hooks into beforeUpdate. The concrete implementation follows.
1. First, add the dependency.
Maven:
<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-boot-starter</artifactId>
    <version>3.4.0</version>
</dependency>
Or, with Gradle:
compile("com.baomidou:mybatis-plus-boot-starter:3.4.0")
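Both classes used below, InnerInterceptor (com.baomidou.mybatisplus.extension.plugins.inner) and MybatisPlusInterceptor (com.baomidou.mybatisplus.extension.plugins), ship with the mybatis-plus-extension module, which the starter pulls in transitively, so no additional dependency should be needed.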
2. Implement the InnerInterceptor.
package com.cmi.jego.micro.flight.admin.service.config;

import com.baomidou.mybatisplus.extension.plugins.inner.InnerInterceptor;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.mapping.ParameterMapping;
import org.apache.ibatis.reflection.MetaObject;
import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;
import org.apache.ibatis.type.TypeHandlerRegistry;

import java.sql.SQLException;
import java.text.DateFormat;
import java.util.Date;
import java.util.List;
import java.util.Locale;
import java.util.regex.Matcher;

/**
 * SQL log rewriting
 *
 * @author zhouxy
 * @date 2025-05-08 11:27
 */
@Slf4j
@org.springframework.context.annotation.Configuration
public class MybatisPlusLogRewrite implements InnerInterceptor {

    @Override
    public void beforeQuery(Executor executor, MappedStatement ms, Object parameter, RowBounds rowBounds,
                            ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
        log.info("beforeQuery");
        logInfo(boundSql, ms, parameter);
    }

    @Override
    public void beforeUpdate(Executor executor, MappedStatement ms, Object parameter) throws SQLException {
        log.info("beforeUpdate");
        BoundSql boundSql = ms.getBoundSql(parameter);
        logInfo(boundSql, ms, parameter);
    }

    private static void logInfo(BoundSql boundSql, MappedStatement ms, Object parameter) {
        try {
            // id of the mapped statement, i.e. the id of the SQL node in the mapper XML
            String sqlId = ms.getId();
            // MyBatis configuration this statement belongs to
            Configuration configuration = ms.getConfiguration();
            // assemble the final SQL statement
            String sql = getSql(configuration, boundSql, sqlId);
            log.info("完整的sql:{}", sql);
        } catch (Exception e) {
            log.error("异常:{}", e.getLocalizedMessage(), e);
        }
    }

    // prefix the statement with the full statement id (mapper namespace + node id)
    public static String getSql(Configuration configuration, BoundSql boundSql, String sqlId) {
        return sqlId + ":" + showSql(configuration, boundSql);
    }

    // replace the ? placeholders with the actual parameter values
    public static String showSql(Configuration configuration, BoundSql boundSql) {
        // the parameter object and the placeholder mappings
        Object parameterObject = boundSql.getParameterObject();
        List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
        // collapse consecutive whitespace in the SQL into a single space
        String sql = boundSql.getSql().replaceAll("[\\s]+", " ");
        if (CollectionUtils.isNotEmpty(parameterMappings) && parameterObject != null) {
            // the type handler registry converts between Java types and JDBC types
            TypeHandlerRegistry typeHandlerRegistry = configuration.getTypeHandlerRegistry();
            // if the parameter object itself has a registered type handler, it maps to a single placeholder
            if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
                sql = sql.replaceFirst("\\?",
                        Matcher.quoteReplacement(getParameterValue(parameterObject)));
            } else {
                // MetaObject wraps the parameter object and exposes get/set access to its properties;
                // it supports JavaBean, Collection and Map style objects
                MetaObject metaObject = configuration.newMetaObject(parameterObject);
                for (ParameterMapping parameterMapping : parameterMappings) {
                    String propertyName = parameterMapping.getProperty();
                    if (metaObject.hasGetter(propertyName)) {
                        Object obj = metaObject.getValue(propertyName);
                        sql = sql.replaceFirst("\\?",
                                Matcher.quoteReplacement(getParameterValue(obj)));
                    } else if (boundSql.hasAdditionalParameter(propertyName)) {
                        // additional parameters come from dynamic SQL (e.g. foreach)
                        Object obj = boundSql.getAdditionalParameter(propertyName);
                        sql = sql.replaceFirst("\\?",
                                Matcher.quoteReplacement(getParameterValue(obj)));
                    } else {
                        // the value could not be resolved; print "缺失" (missing) so the
                        // remaining placeholders do not shift out of position
                        sql = sql.replaceFirst("\\?", "缺失");
                    }
                }
            }
        }
        return sql;
    }

    // wrap String values in single quotes, format Date values and quote them,
    // and handle null vs non-null for everything else
    private static String getParameterValue(Object obj) {
        String value;
        if (obj instanceof String) {
            value = "'" + obj + "'";
        } else if (obj instanceof Date) {
            DateFormat formatter = DateFormat.getDateTimeInstance(DateFormat.DEFAULT,
                    DateFormat.DEFAULT, Locale.CHINA);
            value = "'" + formatter.format((Date) obj) + "'";
        } else {
            value = obj != null ? obj.toString() : "";
        }
        return value;
    }
}
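A note on the approach: showSql fills in the placeholders from left to right with replaceFirst, so the reassembled statement is meant for reading in the log rather than being guaranteed to run verbatim; in particular, if a String parameter itself contains a question mark, the placeholders after it will be shifted. The try/catch in logInfo also ensures that any failure during rewriting only produces an error log and never interferes with the actual SQL execution.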
3. Register the rewrite class MybatisPlusLogRewrite with the SqlSessionFactory.
package com.cmi.jego.micro.flight.admin.service.config;

import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

import javax.annotation.PostConstruct;
import java.util.List;

/**
 * @author zhouxy
 * @date 2025-05-09 15:58
 */
@Configuration
public class MyBatisPlusConfig {

    @Autowired
    private List<SqlSessionFactory> sqlSessionFactoryList;

    @Autowired
    private MybatisPlusLogRewrite myInnerInterceptor;

    /**
     * Add the MyBatis interceptor
     *
     * @author zhouxy
     * @date 2025/5/15 16:29
     */
    @PostConstruct
    public void addMybatisInterceptor() {
        for (SqlSessionFactory sqlSessionFactory : sqlSessionFactoryList) {
            org.apache.ibatis.session.Configuration configuration = sqlSessionFactory.getConfiguration();
            // wrap the SQL rewriting interceptor in the MybatisPlusInterceptor chain
            MybatisPlusInterceptor mybatisPlusInterceptor = new MybatisPlusInterceptor();
            mybatisPlusInterceptor.addInnerInterceptor(myInnerInterceptor);
            configuration.addInterceptor(mybatisPlusInterceptor);
        }
    }
}
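If the SqlSessionFactory instances are built by the mybatis-plus-boot-starter auto-configuration rather than assembled by hand, a simpler variant is to expose the interceptor chain as a bean and let the starter wire it into the factories it creates. A sketch under that assumption (the class name MyBatisPlusInterceptorConfig is made up for illustration):

package com.cmi.jego.micro.flight.admin.service.config;

import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyBatisPlusInterceptorConfig {

    // The starter picks up MybatisPlusInterceptor beans and applies them to the
    // SqlSessionFactory instances it builds, so no manual registration loop is needed.
    @Bean
    public MybatisPlusInterceptor mybatisPlusInterceptor(MybatisPlusLogRewrite logRewrite) {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        interceptor.addInnerInterceptor(logRewrite);
        return interceptor;
    }
}

With several hand-built session factories, as the injected list in MyBatisPlusConfig suggests for this project, the @PostConstruct loop above remains the approach that works.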
Switch off the other SQL log output and run a query; only the rewritten SQL log is printed, as shown below:
2025-05-14 18:03:01.353 INFO 34718 --- [ main] c.c.j.m.f.a.service.handler.LogHandler : function:[OrderRefundServiceImpl.queryOrderListStatusCount], request: {"lang":"zh_CN","pageNum":1,"pageSize":10}
2025-05-14 18:03:01.385 INFO 34718 --- [ main] c.c.j.m.f.a.s.c.MybatisPlusLogRewrite : beforeQuery
2025-05-14 18:03:01.387 INFO 34718 --- [ main] c.c.j.m.f.a.s.c.MybatisPlusLogRewrite : 完整的sql:com.cmi.jego.micro.flight.admin.service.mapper.jegotrip.TicketRefundMapper.queryListStatusCount:select distinct(tr.id) as refundId, tr.status from ticket_refund tr left join ticket_order td on tr.order_id = td.order_id left join ticket_passenger tp on tr.order_id = tp.order_id left join ticket_order_detail tod on tr.order_id = tod.order_id WHERE ( td.search_channel in ( 'hbgj-api-domestic-custom' ) or (td.order_source is null and td.search_channel is null) or td.order_source = '0' )
2025-05-14 18:03:01.395 WARN 34718 --- [ main] c.a.druid.pool.DruidAbstractDataSource : discard long time none received connection. , jdbcUrl : jdbc:mysql://localhost:3306/ticket_jegotrip?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&serverTimezone=GMT%2B8, version : 1.2.5, lastPacketReceivedIdleMillis : 263261
2025-05-14 18:03:01.395 WARN 34718 --- [ main] c.a.druid.pool.DruidAbstractDataSource : discard long time none received connection. , jdbcUrl : jdbc:mysql://localhost:3306/ticket_jegotrip?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&serverTimezone=GMT%2B8, version : 1.2.5, lastPacketReceivedIdleMillis : 263912
2025-05-14 18:03:01.396 WARN 34718 --- [ main] c.a.druid.pool.DruidAbstractDataSource : discard long time none received connection. , jdbcUrl : jdbc:mysql://localhost:3306/ticket_jegotrip?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&serverTimezone=GMT%2B8, version : 1.2.5, lastPacketReceivedIdleMillis : 263933
2025-05-14 18:03:01.396 WARN 34718 --- [ main] c.a.druid.pool.DruidAbstractDataSource : discard long time none received connection. , jdbcUrl : jdbc:mysql://localhost:3306/ticket_jegotrip?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&serverTimezone=GMT%2B8, version : 1.2.5, lastPacketReceivedIdleMillis : 263958
2025-05-14 18:03:01.396 WARN 34718 --- [ main] c.a.druid.pool.DruidAbstractDataSource : discard long time none received connection. , jdbcUrl : jdbc:mysql://localhost:3306/ticket_jegotrip?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&serverTimezone=GMT%2B8, version : 1.2.5, lastPacketReceivedIdleMillis : 263986
2025-05-14 18:03:01.851 INFO 34718 --- [ main] c.c.j.m.f.a.s.c.MybatisPlusLogRewrite : beforeQuery
2025-05-14 18:03:01.851 INFO 34718 --- [ main] c.c.j.m.f.a.s.c.MybatisPlusLogRewrite : 完整的sql:com.cmi.jego.micro.flight.admin.service.mapper.jegotrip.TicketRefundMoneyMapper.queryFailedByRefundIds:SELECT id , order_id, user_id, refund_id, pay_order_id, supplier_order_id, type, price, refund_price, status, refund_time, limit_time, refund_no, create_time, update_time, remark FROM ticket_refund_money WHERE refund_id IN ( 1295 , 1296 , 1423 , 1424 ) AND status = 4
2025-05-14 18:03:01.871 INFO 34718 --- [ main] c.c.j.m.f.a.service.handler.LogHandler : class:[OrderRefundServiceImpl.queryOrderListStatusCount] response: {"bizCode":0,"bizMsg":"SUCCESS","data":{"statusCounts":[{"count":1,"status":-1,"statusDesc":"已取消"},{"count":2,"status":3,"statusDesc":"已退票退款"},{"count":1,"status":4,"statusDesc":"退票异常"}]},"rpcCode":0,"rpcMsg":"SUCCESS"}, cost: 521,
2025-05-14 18:03:01.876 INFO 34718 --- [ main] c.c.j.m.f.a.s.s.OrderRefundServiceTest : {"bizCode":0,"bizMsg":"SUCCESS","data":{"statusCounts":[{"count":1,"status":-1,"statusDesc":"已取消"},{"count":2,"status":3,"statusDesc":"已退票退款"},{"count":1,"status":4,"statusDesc":"退票异常"}]},"rpcCode":0,"rpcMsg":"SUCCESS"}
Performance
Implementing InnerInterceptor ultimately means adding an interceptor to the SqlSessionFactory, so what does it cost? Below is a comparison of execution times from my own tests.
Without SQL rewriting:
With SQL rewriting:
According to the measured times, the run with SQL rewriting took 11 ms longer than the run without it.
Because interception and reflection are involved, it is slightly slower than running without the rewrite. At low traffic levels users will not perceive the difference, so this SQL rewriting approach is a good fit for back-office systems with low concurrency.
For high-concurrency, performance-sensitive scenarios I do not recommend enabling it.
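One practical compromise, if the rewrite should stay available for troubleshooting but never run in high-traffic environments, is to guard the registration with a configuration flag so production profiles simply do not load it. A sketch that would replace the MyBatisPlusConfig from step 3 (it assumes MybatisPlusLogRewrite is then no longer registered as a Spring bean itself; the property name sql-log-rewrite.enabled is made up for illustration):

package com.cmi.jego.micro.flight.admin.service.config;

import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Configuration;

import javax.annotation.PostConstruct;
import java.util.List;

// Only active when sql-log-rewrite.enabled=true (hypothetical flag), e.g. in dev/test profiles.
@ConditionalOnProperty(name = "sql-log-rewrite.enabled", havingValue = "true")
@Configuration
public class ConditionalSqlLogRewriteConfig {

    private final List<SqlSessionFactory> sqlSessionFactoryList;

    public ConditionalSqlLogRewriteConfig(List<SqlSessionFactory> sqlSessionFactoryList) {
        this.sqlSessionFactoryList = sqlSessionFactoryList;
    }

    @PostConstruct
    public void addMybatisInterceptor() {
        // the interceptor is created here instead of being a standalone bean,
        // so nothing else references it when the flag is off
        MybatisPlusInterceptor chain = new MybatisPlusInterceptor();
        chain.addInnerInterceptor(new MybatisPlusLogRewrite());
        for (SqlSessionFactory sqlSessionFactory : sqlSessionFactoryList) {
            sqlSessionFactory.getConfiguration().addInterceptor(chain);
        }
    }
}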