Ziye Wang, Yiran Qin, Lin Zeng, Ruimao Zhang# (# corresponding author)
The Thirteenth International Conference on Learning Representations (ICLR) 2025 Oral
Weather nowcasting is an essential task that involves predicting future radar echo sequences based on current observations, offering significant benefits for disaster management, transportation, and urban planning. Current prediction methods are limited by training and storage efficiency, and mainly focus on 2D spatial predictions at specific altitudes, while 3D volumetric prediction at each timestamp remains largely unexplored. To address this challenge, we introduce a comprehensive framework for 3D radar sequence prediction in weather nowcasting, using the newly proposed SpatioTemporal Coherent Gaussian Splatting (STC-GS) for dynamic radar representation and GauMamba for efficient and accurate forecasting. Specifically, rather than relying on a 4D Gaussian representation for dynamic scene reconstruction, STC-GS optimizes the 3D scene at each frame with a group of Gaussians while effectively capturing their movements across consecutive frames. It ensures consistent tracking of each Gaussian over time, making it particularly effective for prediction tasks. With the temporally correlated Gaussian groups established, we use them to train GauMamba, which integrates a memory mechanism into the Mamba framework. This allows the model to learn the temporal evolution of Gaussian groups while efficiently handling a large volume of Gaussian tokens. As a result, it achieves both efficiency and accuracy in forecasting a wide range of dynamic meteorological radar signals. The experimental results demonstrate that STC-GS can efficiently represent 3D radar sequences with over $16 \times$ higher spatial resolution than existing 3D representation methods, while GauMamba outperforms state-of-the-art methods in forecasting a broad spectrum of highly dynamic weather conditions.
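The key property described above, that each Gaussian keeps a consistent identity across frames, means a radar frame can be stored as a fixed-order parameter array and forecasting reduces to predicting per-Gaussian residuals. A minimal sketch of this representation (shapes, parameter layout, and dynamics are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

# Each frame: N Gaussians, each with a 3D mean, 3D scale, and an echo
# intensity (3 + 3 + 1 = 7 parameters). Keeping the same ordering across
# frames lets a sequence model treat row i at frame t and frame t+1 as
# the same tracked Gaussian.
N, D = 4, 7
rng = np.random.default_rng(0)
frames = [rng.normal(size=(N, D))]

# Toy "dynamics": each step adds a small per-Gaussian residual (motion +
# intensity change), mimicking the deltas a forecaster would predict.
for _ in range(3):
    residual = rng.normal(scale=0.1, size=(N, D))
    frames.append(frames[-1] + residual)

# Because identities are preserved, the trajectory of Gaussian 0 is just
# its row stacked across frames -- this temporal coherence is what makes
# the prediction task tractable.
trajectory = np.stack([f[0] for f in frames])
print(trajectory.shape)  # (4, 7): 4 timestamps x 7 parameters
```

Under this layout, a forecaster such as a memory-augmented sequence model only needs to output one residual array per step rather than re-optimizing a scene from scratch.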
Ziye Wang, Xutao Li#, Kenghong Lin, Chuyao Luo, Yunming Ye, Xiuqing Hu (# corresponding author)
IEEE Transactions on Geoscience and Remote Sensing (TGRS) 2024
Passive microwave (PMW) radiometers have been widely utilized for quantitative precipitation estimation (QPE) by leveraging the relationship between brightness temperature (Tb) and rain rate. Nevertheless, accurate precipitation estimation remains a challenge due to the intricate relationship between them, which is influenced by a diverse range of complex atmospheric and surface properties. Additionally, the inherently skewed distribution of rainfall values prevents models from correctly addressing extreme precipitation events, leading to significant underestimation. This paper presents a novel model called the Multi-Scale and Multi-Level Feature Fusion Network (MSMLNet), consisting of two essential components: a multi-scale feature extractor and a multi-level regression predictor. The feature extractor is specifically designed to extract characteristics at multiple scales, enabling the model to incorporate various meteorological conditions, as well as atmospheric and surface information from the surrounding environment. The regression predictor first assesses the probabilities of multiple rainfall levels for each observed pixel and then extracts features of different levels separately. The multi-level features are fused according to the predicted probabilities. This approach allows each sub-module to focus only on a specific range of precipitation, avoiding the undesirable effects of skewed distributions. To evaluate the performance of MSMLNet, various deep learning methods are adapted for the precipitation retrieval task, and a PMW-based product from the Global Precipitation Measurement (GPM) mission is also used for comparison. Extensive experiments show that MSMLNet surpasses GMI-based products and the most advanced deep learning approaches by 17.9\% and 2.5\% in RMSE, and 54.2\% and 4.0\% in CSI-10, respectively.
Moreover, we demonstrate that MSMLNet significantly mitigates the tendency to underestimate heavy precipitation events and delivers consistent, outstanding performance in estimating precipitation across all levels.
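The fusion step described above, weighting level-specific features by per-pixel rainfall-level probabilities, can be sketched as follows. This is a hedged illustration with assumed shapes and a hypothetical `fuse_multilevel` helper, not the paper's implementation:

```python
import numpy as np

def fuse_multilevel(level_feats, level_logits):
    """Fuse K level-specific feature maps by predicted level probabilities.

    level_feats:  (K, H, W, C) features from K rainfall-level sub-modules
    level_logits: (K, H, W)    per-pixel scores for each rainfall level
    Returns:      (H, W, C)    probability-weighted fused features
    """
    # Numerically stable per-pixel softmax over the K levels.
    z = level_logits - level_logits.max(axis=0, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    # Weight each level's features by its probability and sum over levels,
    # so each sub-module effectively handles only "its" rain-rate range.
    return (probs[..., None] * level_feats).sum(axis=0)

K, H, W, C = 3, 2, 2, 4
rng = np.random.default_rng(1)
fused = fuse_multilevel(rng.normal(size=(K, H, W, C)),
                        rng.normal(size=(K, H, W)))
print(fused.shape)  # (2, 2, 4)
```

Soft, probability-weighted fusion of this kind keeps the whole predictor differentiable while still letting each level-specific branch specialize, which is one plausible way the skew of rainfall values stops dominating the regression loss.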