基于自注意力机制与 1D-CNN 的变压器故障诊断方法
作者: 刘国柱

Transformer Fault Diagnosis Method Based on Self-attention Mechanism and 1D-CNN
Author: LIU Guozhu

    摘要:

    目的 变压器是电力系统中的重要设备,若其发生故障时能够被有效判别出故障类别,可提升电力检修效率,这对电网的安全运行具有重要意义。针对电网电力检修中变压器故障判别精度不足的问题,提出了基于自注意力机制与 1D-CNN 的变压器故障诊断方法。常规卷积在处理 DGA 气体样本数据时容易损失特征信息,导致故障诊断的准确率偏低;论文将自注意力机制与 1D-CNN 结合,有效改善了上述问题,提高了变压器故障诊断的准确率和可靠性。方法 为减少卷积网络提取到的特征信息在模型层间传播时造成的损失,论文在 1D-CNN 的基础上使用 LeakyReLU 函数替代原模型中的 ReLU 激活函数;相比于 ReLU 激活方式下很多神经元未被激活的情况,LeakyReLU 可以降低模型的稀疏性,增加网络特征信息的多样性。自注意力机制可对变压器油中溶解气体数据的特征信息进行加权处理,起到有效增强特征信息的作用;同时采用动态衰减学习率策略对优化器进行优化。结果 所提方法的损失率可降低至 0.078,相比于无动态衰减学习率和 ReLU 激活方式,损失率分别降低了 44.7% 和 38.6%;诊断准确率可达到 93.79%,较 1D-CNN 和 GOA-BP 方法分别提高了 0.36% 和 2.12%。结论 算例仿真验证了所提方法的有效性和优越性,表明基于自注意力机制与 1D-CNN 的变压器故障诊断方法能有效提高诊断的准确率,降低模型的损失率。
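The Methods paragraph describes a 1D-CNN whose ReLU activations are replaced by LeakyReLU and whose convolutional features are re-weighted by a self-attention layer. The following is only a minimal PyTorch sketch of that kind of architecture, not the authors' published model; the nine-feature DGA input, six fault classes, channel counts, and kernel sizes are all assumptions introduced here for illustration.

```python
# Minimal sketch (assumptions: 9 DGA-derived input features, 6 fault classes;
# channel counts and kernel sizes are illustrative, not the paper's values).
import torch
import torch.nn as nn


class SelfAttention1DCNN(nn.Module):
    def __init__(self, in_features=9, n_classes=6, channels=32, attn_heads=4):
        super().__init__()
        # 1D convolutions with LeakyReLU instead of ReLU, so small negative
        # activations are kept rather than zeroed out (lower sparsity).
        self.conv = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(negative_slope=0.01),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(negative_slope=0.01),
        )
        # Self-attention re-weights the per-feature vectors, emphasizing the
        # more informative dissolved-gas features.
        self.attn = nn.MultiheadAttention(embed_dim=channels,
                                          num_heads=attn_heads,
                                          batch_first=True)
        self.classifier = nn.Linear(channels * in_features, n_classes)

    def forward(self, x):
        # x: (batch, in_features) DGA sample -> (batch, 1, in_features) for Conv1d
        h = self.conv(x.unsqueeze(1))          # (batch, channels, in_features)
        h = h.transpose(1, 2)                  # (batch, in_features, channels)
        h, _ = self.attn(h, h, h)              # self-attention weighting
        return self.classifier(h.flatten(1))   # fault-class logits


logits = SelfAttention1DCNN()(torch.randn(16, 9))   # (16, 6) fault-class scores
```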

    Abstract:

    Objective Transformers are important equipment in the power system. Effective identification of fault categories when a transformer fails can improve the efficiency of power maintenance, which is of great significance for the safe operation of the power grid. In response to the insufficient accuracy of transformer fault identification in power grid maintenance, this paper proposed a transformer fault diagnosis method based on a self-attention mechanism and 1D-CNN. Conventional convolution often loses feature information when processing DGA gas sample data, resulting in low fault diagnosis accuracy. By combining the self-attention mechanism with 1D-CNN, this paper effectively addressed this issue, improving the accuracy and reliability of transformer fault diagnosis. Methods To reduce the loss of extracted feature information during inter-layer propagation in the convolutional network, this paper replaced the ReLU activation function in the original model with the LeakyReLU function. Compared with ReLU activation, under which many neurons are not activated, LeakyReLU can reduce the sparsity of the model and increase the diversity of network feature information. The self-attention mechanism weights the feature information of the dissolved-gas data in transformer oil, effectively enhancing the informative features, and a dynamic decay learning rate strategy was used to optimize the optimizer. Results The proposed method reduced the loss rate to 0.078, a decrease of 44.7% and 38.6% compared with models without the dynamic decay learning rate and with ReLU activation, respectively. The diagnostic accuracy reached 93.79%, an increase of 0.36% and 2.12% over the 1D-CNN and GOA-BP methods, respectively. Conclusion Case study simulations validate the effectiveness and superiority of the proposed method, demonstrating that the transformer fault diagnosis method based on the self-attention mechanism and 1D-CNN can effectively improve diagnostic accuracy and reduce model loss.
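The abstract credits part of the loss reduction to the dynamic decay learning rate strategy but does not spell out the schedule. Purely as a hedged sketch of one common realization (Adam with an exponential per-epoch decay; the learning rate, `gamma`, epoch count, and `train_loader` are assumptions, not values from the paper), training might be organized as follows, reusing the model sketch above.

```python
# Illustrative training fragment for a "dynamic decay learning rate" strategy
# (assumed schedule: exponential per-epoch decay; not the paper's exact setup).
import torch
import torch.nn as nn

model = SelfAttention1DCNN()                  # sketch from the previous block
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    for x, y in train_loader:                 # hypothetical DataLoader of DGA samples/labels
        optimizer.zero_grad()
        loss = criterion(model(x), y)         # cross-entropy over fault classes
        loss.backward()
        optimizer.step()
    scheduler.step()                          # decay the learning rate once per epoch
```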

引用本文

刘国柱.基于自注意力机制与 1D-CNN 的变压器故障诊断方法[J].重庆工商大学学报(自然科学版),2025,(1):72-78
LIU Guozhu. Transformer Fault Diagnosis Method Based on Self-attention Mechanism and 1D-CNN[J]. Journal of Chongqing Technology and Business University (Natural Science Edition), 2025, (1): 72-78

历史
  • 在线发布日期: 2024-12-24