Research on Bubble Super-resolution Reconstruction Based on a Fusion Neural Network Model

  • Abstract (Chinese): High-resolution bubble field information is of significant engineering value in nuclear reactor thermal hydraulics, yet existing experimental measurement and numerical simulation methods face considerable challenges in capturing fine bubble distributions. Addressing the problem of bubble image super-resolution reconstruction, this paper proposes a neural network model that fuses multi-scale features, the Fusion model, which aims to recover high-resolution (128×128 pixel) bubble distribution images from ultra-low-resolution (16×16 pixel) inputs. By fusing three subnetworks through a skip-connection structure, the model captures both global structure and local detail, markedly improving the recovery of complex bubble distributions. Experimental results show that in single-bubble scenarios the Fusion model attains a mean squared error (MSE) of 0.001, a 75% reduction relative to conventional bicubic interpolation, and a structural similarity index (SSIM) of 0.9939; in multi-bubble scenarios its MSE (0.0203) is 79.5% lower than bicubic interpolation, its SSIM (0.8992) is nearly three times higher, and it remains robust across bubble densities. An analysis of training-set size shows that model performance stabilizes once the sample count exceeds 5,000 pairs. The study further validates the effectiveness of data-driven methods for bubble field characterization and provides a new technical approach for gas-liquid two-phase flow research.


    Abstract: Accurate prediction of bubble spatial distribution in gas-liquid two-phase flow is vitally important for the design, operation, and safety evaluation of nuclear power equipment. However, existing experimental techniques such as wire-mesh sensors and high-speed imaging systems, as well as numerical simulations based on Euler-Euler or VOF models, often suffer from limited spatial or temporal resolution or high computational cost, making it challenging to obtain detailed bubble distribution information. To address these limitations, this study proposes a neural network model that integrates multi-scale features, dubbed the Fusion model, for the super-resolution reconstruction of bubble images. The model reconstructs high-resolution bubble distribution images (128×128 pixels) from ultra-low-resolution inputs (16×16 pixels), simulating scenarios with extremely coarse measurement or simulation resolution. The Fusion model integrates three subnetworks with skip connections in a hybrid downsampled skip-connection/multi-scale structure, allowing simultaneous extraction of global structural features and local boundary details and thereby improving the fidelity of reconstructed bubble contours and internal textures. The training dataset was constructed from synthetically generated bubble images with varied bubble sizes (diameters ranging from 10 mm to 30 mm) and spatial distributions, ensuring coverage of representative scenarios in industrial bubble columns. Each sample pair comprises a high-resolution image and its low-resolution counterpart obtained via average pooling. Training was conducted on an NVIDIA RTX 4090 GPU with the mean squared error (MSE) loss function; the Fusion model typically converged within 2.1 h. Comparative experiments were conducted against two baseline models: a single-bubble enhanced CNN and a multi-scale network without feature integration.
In single-bubble cases, the Fusion model achieves an MSE of 0.001, a 75% reduction relative to bicubic interpolation, and a structural similarity index (SSIM) of 0.9939. In multi-bubble scenarios, the model reduces the MSE by 79.5% (to 0.0203) and improves SSIM nearly threefold (to 0.8992), demonstrating superior robustness under varying bubble densities. A training-data sensitivity analysis further shows that performance stabilizes once the number of training samples exceeds 5,000 pairs, indicating reliable generalization capability. In conclusion, this study validates the effectiveness of the Fusion model in restoring fine-scale bubble features from highly degraded inputs. The method holds strong potential for reactor safety assessment, thermal-hydraulic model validation, and indirect sensor-signal enhancement. Future work will focus on improving the model's physical interpretability by incorporating physics-informed priors and on validating performance against experimental datasets, paving the way toward real-time, high-resolution bubble monitoring in complex flow environments.
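The degradation procedure described in the abstract, pairing each 128×128 image with a 16×16 counterpart obtained via average pooling, can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the 8× factor (128/16) are assumptions inferred from the stated resolutions.

```python
import numpy as np

def downsample_avg_pool(hr: np.ndarray, factor: int = 8) -> np.ndarray:
    """Degrade a high-resolution field by non-overlapping average pooling.

    Each factor x factor block of the input is replaced by its mean,
    mimicking an extremely coarse measurement or simulation grid.
    """
    h, w = hr.shape
    assert h % factor == 0 and w % factor == 0, "image must tile evenly"
    # Split each axis into (blocks, within-block) and average within blocks.
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Example: a synthetic 128x128 field reduced to a 16x16 input.
hr = np.random.rand(128, 128)
lr = downsample_avg_pool(hr, factor=8)
print(lr.shape)  # (16, 16)
```

Because the pooling is a plain block mean, the low-resolution image preserves the global void fraction (the overall mean) while discarding bubble boundary detail, which is exactly what the super-resolution network must recover.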
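The two reported metrics, MSE and SSIM, can be sketched for reference. This is a simplified single-window SSIM computed from global image statistics, assuming float images scaled to [0, 1]; published results normally use the windowed SSIM, so the function below is illustrative only.

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean squared error between two images of equal shape."""
    return float(np.mean((x - y) ** 2))

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Simplified SSIM using one global window over the whole image.

    Uses the standard stabilizing constants C1 = (0.01 L)^2, C2 = (0.03 L)^2.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx**2 + my**2 + c1) * (vx + vy + c2)
    return float(num / den)
```

For identical images the MSE is 0 and the SSIM is 1; any degradation of the reconstruction lowers SSIM toward 0, which is why the multi-bubble SSIM of 0.8992 versus roughly 0.3 for bicubic interpolation corresponds to the near-threefold improvement quoted above.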
