diff --git a/docs/index.md b/docs/index.md
index 3b7483d26..bfa57b242 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -163,6 +163,7 @@
| 地震波形反演 | [VelocityGAN 地震波形反演](./zh/examples/velocity_gan.md) | 数据驱动 | VelocityGAN | 监督学习 | [OpenFWI](https://openfwi-lanl.github.io/docs/data.html#vel) | [Paper](https://arxiv.org/abs/1809.10262v6) |
| 遥感图像分割 | [UNetFormer分割图像](./zh/examples/unetformer.md) | 数据驱动 | UNetformer | 监督学习 | [Vaihingen](https://paperswithcode.com/dataset/isprs-vaihingen) | [Paper](https://github.com/WangLibo1995/GeoSeg) |
| 交通预测 | [TGCN 交通流量预测](./zh/examples/tgcn.md) | 数据驱动 | GCN & CNN | 监督学习 | [PEMSD4 & PEMSD8](https://paddle-org.bj.bcebos.com/paddlescience/datasets/tgcn/tgcn_data.zip) | - |
+ | Weather forecasting | [Meteoformer multi-variable weather prediction](./zh/examples/meteoformer.md) | Data-driven | Transformer | Supervised learning | [ERA5](https://cds.climate.copernicus.eu/datasets/reanalysis-era5-pressure-levels?tab=download) | - |
| 天气预报 | [Preformer 短时降水预测](./zh/examples/preformer.md) | 数据驱动 | Transformer | 监督学习 | [ERA5](https://https://cds.climate.copernicus.eu/datasets/reanalysis-era5-pressure-levels?tab=download) | [Paper](https://ieeexplore.ieee.org/document/10288072) |
| 天气预报 | [Climateformer 气候预测](./zh/examples/climateformer.md) | 数据驱动 | Transformer | 监督学习 | [ERA5](https://https://cds.climate.copernicus.eu/datasets/reanalysis-era5-pressure-levels?tab=download) | - |
| 生成模型| [图像生成中的梯度惩罚应用](./zh/examples/wgan_gp.md)|数据驱动|WGAN GP|监督学习|[Data1](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz)
[Data2](http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz)| [Paper](https://github.com/igul222/improved_wgan_training) |
diff --git a/docs/zh/api/arch.md b/docs/zh/api/arch.md
index e85e08863..c20dac705 100644
--- a/docs/zh/api/arch.md
+++ b/docs/zh/api/arch.md
@@ -43,6 +43,7 @@
- RegDGCNN
- RegPointNet
- IFMMLP
+ - Meteoformer
- Climateformer
- MoleculeModel
- Preformer
diff --git a/docs/zh/api/data/dataset.md b/docs/zh/api/data/dataset.md
index 30bcc0228..3f1aad065 100644
--- a/docs/zh/api/data/dataset.md
+++ b/docs/zh/api/data/dataset.md
@@ -36,6 +36,7 @@
- DrivAerNetPlusPlusDataset
- IFMMoeDataset
- STAFNetDataset
+ - ERA5MeteoDataset
- ERA5ClimateDataset
- MoleculeDatasetIter
- ERA5SQDataset
diff --git a/docs/zh/examples/meteoformer.md b/docs/zh/examples/meteoformer.md
new file mode 100644
index 000000000..fcfed54f0
--- /dev/null
+++ b/docs/zh/examples/meteoformer.md
@@ -0,0 +1,276 @@
+# Meteoformer
+
+Before training or evaluation, please download the [ERA5](https://cds.climate.copernicus.eu/datasets/reanalysis-era5-pressure-levels?tab=download) dataset files.
+
+Before evaluation, please download a pretrained model or train one yourself.
+
+A prepared evaluation dataset is available and can be downloaded from the following links (a download sketch follows the directory layout below):
+[ERA5_201601.tar.gz](https://paddle-org.bj.bcebos.com/paddlescience/datasets/meteoformer/ERA5_201601.tar.gz),
+[mean.nc](https://paddle-org.bj.bcebos.com/paddlescience/datasets/climateformer/mean.nc),
+[std.nc](https://paddle-org.bj.bcebos.com/paddlescience/datasets/climateformer/std.nc).
+
+After downloading and extracting, keep the following directory layout:
+ERA5/
+├── mean.nc
+├── std.nc
+└── 2016/
+ ├── r_2016010100.npy
+ ├── ...
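+
+For example, the evaluation subset can be fetched and unpacked as follows (a sketch assuming `wget` and `tar` are available and that the archive unpacks into the `2016/` folder shown above):
+
+``` sh
+mkdir -p ERA5 && cd ERA5
+wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/meteoformer/ERA5_201601.tar.gz
+wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/climateformer/mean.nc
+wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/climateformer/std.nc
+tar -xzf ERA5_201601.tar.gz
+```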
+
+=== "模型训练命令"
+
+ ``` sh
+ python main.py
+ ```
+
+=== "模型评估命令"
+
+ ``` sh
+ python main.py mode=eval EVAL.pretrained_model_path="https://paddle-org.bj.bcebos.com/paddlescience/models/meteoformer/meteoformer.pdparams"
+ ```
+
+## 1. Background
+
+Short-to-medium-term weather forecasting is concerned with predicting weather changes from the next few hours up to several days ahead. Such forecasts typically cover multiple meteorological variables, such as temperature, humidity, and wind speed, whose evolution exhibits complex spatio-temporal dependencies. Accurate short-to-medium-term forecasts matter greatly for disaster prevention and mitigation, agriculture, and aerospace. Traditional forecasting relies mainly on physical equations and numerical weather prediction (NWP), but with the rapid development of deep learning, data-driven models are increasingly showing stronger predictive capability.
+
+Meteoformer was proposed to capture these multi-dimensional spatio-temporal features effectively. It is a Transformer-based model tailored to short-to-medium-term prediction of multiple meteorological variables. The model handles the spatio-temporal dependencies among several meteorological variables and uses self-attention to capture correlations across different spatial and temporal scales, enabling more accurate multi-step prediction of temperature, humidity, wind speed, and other variables. With Meteoformer, weather forecasting can deliver more efficient and precise multi-variable predictions, providing more reliable data support for meteorological services.
+
+## 2. Model Principles
+
+This section gives a brief introduction to how Meteoformer works.
+
+### 2.1 Encoder
+
+This module uses two Transformer layers to extract spatial features and update the node representations:
+
+``` py linenums="243" title="ppsci/arch/meteoformer.py"
+--8<--
+ppsci/arch/meteoformer.py:243:277
+--8<--
+```
+
+### 2.2 Evolver
+
+This module uses two Transformer layers to learn the global temporal dynamics:
+
+``` py linenums="280" title="ppsci/arch/meteoformer.py"
+--8<--
+ppsci/arch/meteoformer.py:280:325
+--8<--
+```
+
+### 2.3 Decoder
+
+This module uses two convolutional layers to decode the spatio-temporal representation into the future multi-variable fields:
+
+``` py linenums="329" title="ppsci/arch/meteoformer.py"
+--8<--
+ppsci/arch/meteoformer.py:329:344
+--8<--
+```
+
+### 2.4 Meteoformer Architecture
+
+The overall structure of the model is shown in the figure below:
+
+*Figure: Meteoformer model architecture*
+
+Meteoformer first uses a feature-embedding layer to encode the spatial features of the input signal (the multiple meteorological variables of the past several time frames):
+
+``` py linenums="418" title="ppsci/arch/meteoformer.py"
+--8<--
+ppsci/arch/meteoformer.py:418:420
+--8<--
+```
+
+The evolver then learns the temporal dynamics of these spatial features to predict the meteorological features of the next several time frames:
+
+``` py linenums="422" title="ppsci/arch/meteoformer.py"
+--8<--
+ppsci/arch/meteoformer.py:422:425
+--8<--
+```
+
+Finally, the model combines the spatio-temporal dynamics with the initial low-level meteorological features and uses two convolutional layers to predict the multi-variable fields over the coming short-to-medium term:
+
+``` py linenums="427" title="ppsci/arch/meteoformer.py"
+--8<--
+ppsci/arch/meteoformer.py:427:429
+--8<--
+```
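+
+Putting the three modules together, the tensor shapes can be verified with a minimal forward pass (a sketch following the class docstring example; `num_classes=4` is an illustrative choice meaning 4 variables are predicted per time step):
+
+``` py
+import paddle
+
+import ppsci
+
+model = ppsci.arch.Meteoformer(
+    input_keys=("input",),
+    output_keys=("output",),
+    shape_in=(6, 12, 192, 256),  # (T, C, H, W)
+    num_classes=4,
+)
+out = model({"input": paddle.rand([1, 6, 12, 192, 256])})
+print(out["output"].shape)  # [1, 6, 4, 192, 256]
+```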
+
+## 3. Model Training
+
+### 3.1 Dataset
+
+This case uses a preprocessed ERA5Meteo dataset, a subset of the ERA5 reanalysis data. ERA5Meteo contains a variety of atmospheric, land, and ocean variables; the study region spans 140°E to 70°W and 55°N to the equator, at a spatial resolution of 0.25°. The dataset provides hourly estimates of weather conditions from 2016 to 2020, making it well suited to tasks such as short-to-medium-term multi-variable prediction. In this case the time interval is set to 1 hour.
+
+The data are stored as a T x C x H x W array recording the values of the corresponding meteorological variables at each location and time, where T is the length of the time series and C is the channel dimension; this case selects temperature, relative humidity, and the eastward and northward wind components at 3 pressure levels. H and W are the height and width of the grid after dividing by latitude and longitude. By year, the data are split into training, validation, and test sets at a ratio of 7:2:1. The per-channel mean and standard deviation of the variables are precomputed for the subsequent normalization step.
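+
+For reference, the normalization is a per-channel standardization; below is a minimal NumPy sketch consistent with the dataset code (paths and the 4-variable `r/t/u/v` naming follow the layout above):
+
+``` py
+import numpy as np
+import xarray as xr
+
+# Per-channel statistics, reshaped to (C, 1, 1) so they broadcast over H and W.
+mean = xr.open_dataset("ERA5/mean.nc")["mean"].values.reshape(-1, 1, 1)
+std = xr.open_dataset("ERA5/std.nc")["std"].values.reshape(-1, 1, 1)
+
+# One time frame: the 4 variables are concatenated along the channel axis.
+frame = np.concatenate(
+    [np.load(f"ERA5/2016/{v}_2016010100.npy") for v in ("r", "t", "u", "v")]
+)  # (C, H, W)
+
+frame_norm = (frame - mean) / std
+```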
+
+### 3.2 Model Training
+
+#### 3.2.1 Model Construction
+
+This case is implemented with the Meteoformer model; in PaddleScience code:
+
+``` py linenums="94" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:94:95
+--8<--
+```
+
+#### 3.2.2 Constraint Construction
+
+This case solves the problem in a data-driven way, so the built-in `SupervisedConstraint` of PaddleScience is used to build a supervised constraint. Before defining the constraint, the parameters used for data loading must be specified.
+
+The training-set data loading code is as follows:
+
+``` py linenums="23" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:23:56
+--8<--
+```
+
+The supervised constraint is defined as follows:
+
+``` py linenums="58" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:58:64
+--8<--
+```
+
+#### 3.2.3 Validator Construction
+
+During training, the validation set is used at a fixed interval of epochs to evaluate the current state of the model, which requires building a validator with `SupervisedValidator`.
+
+The validation-set data loading code is as follows:
+
+``` py linenums="69" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:69:80
+--8<--
+```
+
+The supervised validator is defined as follows:
+
+``` py linenums="82" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:82:92
+--8<--
+```
+
+#### 3.2.4 Learning Rate and Optimizer
+
+In this case the learning rate is set to `1e-3` and the optimizer is `Adam`; in PaddleScience code:
+
+``` py linenums="97" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:97:102
+--8<--
+```
+
+#### 3.2.5 Model Training
+
+With the above settings in place, pass the instantiated objects to `ppsci.solver.Solver` in order, then start training.
+
+``` py linenums="104" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:104:120
+--8<--
+```
+
+#### 3.2.6 Evaluation During Training
+
+By setting the `eval_during_train` parameter of `ppsci.solver.Solver`, the model parameters that perform best on the validation set are saved automatically.
+
+``` py linenums="113" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:113:113
+--8<--
+```
+
+### 3.3 Model Evaluation
+
+#### 3.3.1 Evaluator Construction
+
+The test-set data loading code is as follows:
+
+``` py linenums="126" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:126:137
+--8<--
+```
+
+The supervised evaluator is defined as follows:
+
+``` py linenums="139" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:139:149
+--8<--
+```
+
+As with the validation-set `SupervisedValidator`, the evaluation metrics used here are `MAE` and `MSE`.
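+
+For reference, with `keep_batch=True` the metrics are reduced per sample rather than over the whole batch; in plain NumPy terms (a sketch, not the ppsci implementation):
+
+``` py
+import numpy as np
+
+def mae(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
+    # Mean absolute error per sample (batch dimension kept).
+    return np.abs(pred - gt).mean(axis=tuple(range(1, pred.ndim)))
+
+def mse(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
+    # Mean squared error per sample (batch dimension kept).
+    return ((pred - gt) ** 2).mean(axis=tuple(range(1, pred.ndim)))
+```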
+
+#### 3.3.2 Loading the Model and Evaluating
+
+Set the loading path of the pretrained model parameters and load the model.
+
+``` py linenums="151" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:151:152
+--8<--
+```
+
+Instantiate `ppsci.solver.Solver`, then start the evaluation.
+
+``` py linenums="154" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py:154:165
+--8<--
+```
+
+## 4. Complete Code
+
+Dataset interface:
+
+``` py linenums="1" title="ppsci/data/dataset/era5meteo_dataset.py"
+--8<--
+ppsci/data/dataset/era5meteo_dataset.py
+--8<--
+```
+
+Model architecture:
+
+``` py linenums="1" title="ppsci/arch/meteoformer.py"
+--8<--
+ppsci/arch/meteoformer.py
+--8<--
+```
+
+Model training:
+
+``` py linenums="1" title="examples/meteoformer/main.py"
+--8<--
+examples/meteoformer/main.py
+--8<--
+```
+
+Configuration file:
+
+``` py linenums="1" title="examples/meteoformer/conf/meteoformer.yaml"
+--8<--
+examples/meteoformer/conf/meteoformer.yaml
+--8<--
+```
+
+## 5. Results
+
+The figure below compares Meteoformer's predictions with the ground truth for wind-speed prediction at the 1000 hPa level. The horizontal axis shows the forecast steps at 1-hour intervals; the model predicts the next 6 time steps at once.
+
+*Figure: Meteoformer predictions ("Pred") versus ground truth ("GT")*
+
diff --git a/examples/meteoformer/conf/meteoformer.yaml b/examples/meteoformer/conf/meteoformer.yaml
new file mode 100644
index 000000000..11915011c
--- /dev/null
+++ b/examples/meteoformer/conf/meteoformer.yaml
@@ -0,0 +1,75 @@
+defaults:
+ - ppsci_default
+ - TRAIN: train_default
+ - TRAIN/ema: ema_default
+ - TRAIN/swa: swa_default
+ - EVAL: eval_default
+ - INFER: infer_default
+ - hydra/job/config/override_dirname/exclude_keys: exclude_keys_default
+ - _self_
+
+hydra:
+ run:
+ # dynamic output directory according to running time and override name
+ dir: outputs_meteoformer
+ job:
+ name: ${mode} # name of logfile
+ chdir: false # keep current working directory unchanged
+ callbacks:
+ init_callback:
+ _target_: ppsci.utils.callbacks.InitCallback
+ sweep:
+ # output directory for multirun
+ dir: ${hydra.run.dir}
+ subdir: ./
+
+# general settings
+mode: train # running mode: train/eval
+seed: 1024
+output_dir: ${hydra:run.dir}
+log_freq: 50 # 20
+
+# set training hyper-parameters
+SQ_LEN: 6
+IMG_H: 192
+IMG_W: 256
+USE_SAMPLED_DATA: false
+
+# set train data path
+TRAIN_FILE_PATH: /data/ERA5/
+DATA_MEAN_PATH: /data/ERA5/mean.nc
+DATA_STD_PATH: /data/ERA5/std.nc
+
+# set evaluate data path
+VALID_FILE_PATH: /data/ERA5/
+
+# model settings
+MODEL:
+ input_keys: ["input"]
+ output_keys: ["output"]
+  shape_in: # (T, C, H, W)
+ - 6
+ - 12
+ - ${IMG_H}
+ - ${IMG_W}
+
+# training settings
+TRAIN:
+ epochs: 50 # 150
+ save_freq: 5 # 20
+ eval_during_train: true
+ eval_freq: 5 # 20
+ lr_scheduler:
+ epochs: ${TRAIN.epochs}
+ learning_rate: 0.001
+ by_epoch: true
+ batch_size: 8 # 16
+ pretrained_model_path: null
+ checkpoint_path: null
+
+# evaluation settings
+EVAL:
+ pretrained_model_path: null
+ compute_metric_by_batch: true
+ eval_with_no_grad: true
+ batch_size: 8 # 16
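+
+# NOTE: any of the settings above can be overridden from the command line via Hydra,
+# e.g. `python main.py TRAIN.epochs=150 TRAIN.batch_size=16 mode=train`.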
diff --git a/examples/meteoformer/main.py b/examples/meteoformer/main.py
new file mode 100644
index 000000000..92983a5e6
--- /dev/null
+++ b/examples/meteoformer/main.py
@@ -0,0 +1,179 @@
+# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import hydra
+from omegaconf import DictConfig
+
+import ppsci
+
+
+def train(cfg: DictConfig):
+ # set train dataloader config
+ if not cfg.USE_SAMPLED_DATA:
+ train_dataloader_cfg = {
+ "dataset": {
+ "name": "ERA5MeteoDataset",
+ "file_path": cfg.TRAIN_FILE_PATH,
+ "input_keys": cfg.MODEL.input_keys,
+ "label_keys": cfg.MODEL.output_keys,
+ "size": (cfg.IMG_H, cfg.IMG_W),
+ },
+ "sampler": {
+ "name": "BatchSampler",
+ "drop_last": True,
+ "shuffle": True,
+ },
+ "batch_size": cfg.TRAIN.batch_size,
+ "num_workers": 4,
+ }
+ else:
+ train_dataloader_cfg = {
+ "dataset": {
+ "name": "ERA5SampledDataset",
+ "file_path": cfg.TRAIN_FILE_PATH,
+ "input_keys": cfg.MODEL.input_keys,
+ "label_keys": cfg.MODEL.output_keys,
+ },
+ "sampler": {
+ "name": "DistributedBatchSampler",
+ "drop_last": True,
+ "shuffle": True,
+ },
+ "batch_size": cfg.TRAIN.batch_size,
+ "num_workers": 4,
+ }
+
+ # set constraint
+ sup_constraint = ppsci.constraint.SupervisedConstraint(
+ train_dataloader_cfg,
+ ppsci.loss.MSELoss(),
+ name="Sup",
+ )
+ constraint = {sup_constraint.name: sup_constraint}
+
+ # set iters_per_epoch by dataloader length
+ ITERS_PER_EPOCH = len(sup_constraint.data_loader)
+
+ # set eval dataloader config
+ eval_dataloader_cfg = {
+ "dataset": {
+ "name": "ERA5MeteoDataset",
+ "file_path": cfg.VALID_FILE_PATH,
+ "input_keys": cfg.MODEL.input_keys,
+ "label_keys": cfg.MODEL.output_keys,
+ "training": False,
+ "size": (cfg.IMG_H, cfg.IMG_W),
+ },
+ "batch_size": cfg.EVAL.batch_size,
+ }
+
+ # set validator
+ sup_validator = ppsci.validate.SupervisedValidator(
+ eval_dataloader_cfg,
+ ppsci.loss.MSELoss(),
+ metric={
+ "MAE": ppsci.metric.MAE(keep_batch=True),
+ "MSE": ppsci.metric.MSE(keep_batch=True),
+ },
+ name="Sup_Validator",
+ )
+ validator = {sup_validator.name: sup_validator}
+
+ # set model
+ model = ppsci.arch.Meteoformer(**cfg.MODEL)
+
+ # init optimizer and lr scheduler
+ lr_scheduler_cfg = dict(cfg.TRAIN.lr_scheduler)
+ lr_scheduler_cfg.update({"iters_per_epoch": ITERS_PER_EPOCH})
+ lr_scheduler = ppsci.optimizer.lr_scheduler.Cosine(**lr_scheduler_cfg)()
+
+ optimizer = ppsci.optimizer.Adam(lr_scheduler)(model)
+
+ # initialize solver
+ solver = ppsci.solver.Solver(
+ model=model,
+ constraint=constraint,
+ output_dir=cfg.output_dir,
+ optimizer=optimizer,
+ epochs=cfg.TRAIN.epochs,
+ iters_per_epoch=ITERS_PER_EPOCH,
+ log_freq=cfg.log_freq,
+ eval_during_train=cfg.TRAIN.eval_during_train,
+ eval_freq=cfg.TRAIN.eval_freq,
+ validator=validator,
+ compute_metric_by_batch=cfg.EVAL.compute_metric_by_batch,
+ eval_with_no_grad=cfg.EVAL.eval_with_no_grad,
+ )
+ # train model
+ solver.train()
+ # evaluate after finished training
+ solver.eval()
+
+
+def evaluate(cfg: DictConfig):
+ # set eval dataloader config
+ eval_dataloader_cfg = {
+ "dataset": {
+ "name": "ERA5MeteoDataset",
+ "file_path": cfg.VALID_FILE_PATH,
+ "input_keys": cfg.MODEL.input_keys,
+ "label_keys": cfg.MODEL.output_keys,
+ "training": False,
+ "size": (cfg.IMG_H, cfg.IMG_W),
+ },
+ "batch_size": cfg.EVAL.batch_size,
+ }
+
+ # set validator
+ sup_validator = ppsci.validate.SupervisedValidator(
+ eval_dataloader_cfg,
+ ppsci.loss.MSELoss(),
+ metric={
+ "MAE": ppsci.metric.MAE(keep_batch=True),
+ "MSE": ppsci.metric.MSE(keep_batch=True),
+ },
+ name="Sup_Validator",
+ )
+ validator = {sup_validator.name: sup_validator}
+
+ # set model
+ model = ppsci.arch.Meteoformer(**cfg.MODEL)
+
+ # initialize solver
+ solver = ppsci.solver.Solver(
+ model,
+ output_dir=cfg.output_dir,
+ log_freq=cfg.log_freq,
+ validator=validator,
+ pretrained_model_path=cfg.EVAL.pretrained_model_path,
+ compute_metric_by_batch=cfg.EVAL.compute_metric_by_batch,
+ eval_with_no_grad=cfg.EVAL.eval_with_no_grad,
+ )
+ # evaluate
+ solver.eval()
+
+
+@hydra.main(version_base=None, config_path="./conf", config_name="meteoformer.yaml")
+def main(cfg: DictConfig):
+ if cfg.mode == "train":
+ train(cfg)
+ elif cfg.mode == "eval":
+ evaluate(cfg)
+ else:
+ raise ValueError(f"cfg.mode should in ['train', 'eval'], but got '{cfg.mode}'")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/meteoformer/utils.py b/examples/meteoformer/utils.py
new file mode 100644
index 000000000..e848a0e66
--- /dev/null
+++ b/examples/meteoformer/utils.py
@@ -0,0 +1,22 @@
+from datetime import datetime
+from typing import Optional
+from typing import Tuple
+
+import xarray as xr
+
+
+def date_to_hours(date: str) -> int:
+    """Convert a "%Y-%m-%d %H:%M:%S" date string to hours elapsed since Jan 01 of that year."""
+    date_obj = datetime.strptime(date, "%Y-%m-%d %H:%M:%S")
+    day_of_year = date_obj.timetuple().tm_yday - 1
+    hour_of_day = date_obj.timetuple().tm_hour
+    return 24 * day_of_year + hour_of_day
+
+
+def get_mean_std(
+    mean_path: str, std_path: str, vars_channel: Optional[Tuple[int, ...]] = None
+):
+    """Load per-channel mean/std and reshape them to (C, 1, 1) for broadcasting."""
+    data_mean = xr.open_mfdataset(mean_path)["mean"].values
+    data_std = xr.open_mfdataset(std_path)["std"].values
+
+    if vars_channel is not None:
+        # Optionally keep only the selected variable channels.
+        data_mean = data_mean[list(vars_channel)]
+        data_std = data_std[list(vars_channel)]
+
+    # Use reshape rather than in-place ndarray.resize, which fails on arrays that
+    # are views into the underlying dataset.
+    data_mean = data_mean.reshape(data_mean.shape[0], 1, 1)
+    data_std = data_std.reshape(data_std.shape[0], 1, 1)
+
+    return data_mean, data_std
diff --git a/mkdocs.yml b/mkdocs.yml
index cc748c4eb..198af802b 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -107,6 +107,7 @@ nav:
- DGMR: zh/examples/dgmr.md
- STAFNet: zh/examples/stafnet.md
- EarthFormer: zh/examples/earthformer.md
+ - MeteoFormer: zh/examples/meteoformer.md
- Preformer: zh/examples/preformer.md
- ClimateFormer: zh/examples/climateformer.md
- GraphCast: zh/examples/graphcast.md
diff --git a/ppsci/arch/__init__.py b/ppsci/arch/__init__.py
index ceb5b73f4..ae05f09f0 100644
--- a/ppsci/arch/__init__.py
+++ b/ppsci/arch/__init__.py
@@ -67,6 +67,7 @@
from ppsci.arch.regpointnet import RegPointNet # isort:skip
from ppsci.arch.ifm_mlp import IFMMLP # isort:skip
from ppsci.arch.stafnet import STAFNet # isort:skip
+from ppsci.arch.meteoformer import Meteoformer # isort:skip
from ppsci.arch.climateformer import Climateformer # isort:skip
from ppsci.arch.chemprop_molecule import MoleculeModel # isort:skip
from ppsci.arch.preformer import Preformer # isort:skip
@@ -130,6 +131,7 @@
"RegPointNet",
"IFMMLP",
"STAFNet",
+ "Meteoformer",
"Climateformer",
"MoleculeModel",
"Preformer",
diff --git a/ppsci/arch/meteoformer.py b/ppsci/arch/meteoformer.py
new file mode 100644
index 000000000..0ad34979f
--- /dev/null
+++ b/ppsci/arch/meteoformer.py
@@ -0,0 +1,435 @@
+from typing import Optional
+from typing import Tuple
+
+import numpy as np
+from paddle import nn
+
+from ppsci.arch import base
+
+
+def stride_generator(N, reverse=False):
+ strides = [1, 2] * 10
+ if reverse:
+ return list(reversed(strides[:N]))
+ else:
+ return strides[:N]
+
+
+class ConvSC(nn.Layer):
+ def __init__(self, C_in: int, C_out: int, stride: int, transpose: bool = False):
+ super(ConvSC, self).__init__()
+ if stride == 1:
+ transpose = False
+ if not transpose:
+ self.conv = nn.Conv2D(
+ C_in,
+ C_out,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ weight_attr=nn.initializer.KaimingNormal(),
+ )
+ else:
+ self.conv = nn.Conv2DTranspose(
+ C_in,
+ C_out,
+ kernel_size=3,
+ stride=stride,
+ padding=1,
+ output_padding=stride // 2,
+ weight_attr=nn.initializer.KaimingNormal(),
+ )
+ self.norm = nn.GroupNorm(2, C_out)
+ self.act = nn.LeakyReLU(0.2)
+
+ def forward(self, x):
+ y = self.conv(x)
+ y = self.act(self.norm(y))
+ return y
+
+
+class OverlapPatchEmbed(nn.Layer):
+ """Image to Patch Embedding"""
+
+ def __init__(
+ self,
+ img_size: int = 224,
+ patch_size: int = 7,
+ stride: int = 4,
+ in_chans: int = 3,
+ embed_dim: int = 768,
+ ):
+ super().__init__()
+ img_size = (img_size, img_size)
+ patch_size = (patch_size, patch_size)
+
+ self.img_size = img_size
+ self.patch_size = patch_size
+ self.H, self.W = img_size[0] // patch_size[0], img_size[1] // patch_size[1]
+ self.num_patches = self.H * self.W
+ self.proj = nn.Conv2D(
+ in_chans,
+ embed_dim,
+ kernel_size=patch_size,
+ stride=stride,
+ padding=(patch_size[0] // 2, patch_size[1] // 2),
+ )
+ self.norm = nn.LayerNorm(embed_dim)
+
+ def forward(self, x):
+ x = self.proj(x)
+ _, _, H, W = x.shape
+ x = x.flatten(2).transpose(perm=[0, 2, 1])
+ x = self.norm(x)
+
+ return x, H, W
+
+
+class DWConv(nn.Layer):
+ def __init__(self, dim: int = 768):
+ super(DWConv, self).__init__()
+ self.dwconv = nn.Conv2D(dim, dim, 3, 1, 1, groups=dim)
+
+ def forward(self, x, H, W):
+ B, N, C = x.shape
+ x = x.transpose(perm=[0, 2, 1]).reshape([B, C, H, W])
+ x = self.dwconv(x)
+ x = x.flatten(2).transpose(perm=[0, 2, 1])
+
+ return x
+
+
+class Mlp(nn.Layer):
+ def __init__(
+ self,
+ in_features: int,
+ hidden_features: Optional[int] = None,
+ out_features: Optional[int] = None,
+ act_layer: nn.Layer = nn.GELU,
+ drop: float = 0.0,
+ ):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.dwconv = DWConv(hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x, H, W):
+ x = self.fc1(x)
+ x = self.dwconv(x, H, W)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+class Attention(nn.Layer):
+ def __init__(
+ self,
+ dim: int,
+ num_heads: int = 8,
+        qkv_bias: Optional[bool] = None,
+        qk_scale: Optional[float] = None,
+        attn_drop: float = 0.0,
+        proj_drop: float = 0.0,
+        sr_ratio: int = 1,
+ ):
+ super().__init__()
+ assert (
+ dim % num_heads == 0
+ ), f"dim {dim} should be divided by num_heads {num_heads}."
+
+ self.dim = dim
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim**-0.5
+
+ self.q = nn.Linear(dim, dim, bias_attr=qkv_bias)
+ self.kv = nn.Linear(dim, dim * 2, bias_attr=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+ self.proj_drop = nn.Dropout(proj_drop)
+ self.softmax = nn.Softmax(axis=-1)
+
+        self.sr_ratio = sr_ratio
+        if sr_ratio > 1:
+            self.sr = nn.Conv2D(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
+        # norm is also applied to the attention output in forward(), so it must be
+        # defined even when sr_ratio <= 1.
+        self.norm = nn.LayerNorm(dim)
+
+ def forward(self, x, H, W):
+ B, N, C = x.shape
+ q = (
+ self.q(x)
+ .reshape([B, N, self.num_heads, C // self.num_heads])
+ .transpose(perm=[0, 2, 1, 3])
+ )
+
+ if self.sr_ratio > 1:
+ x_ = x.transpose(perm=[0, 2, 1]).reshape([B, C, H, W])
+ x_ = self.sr(x_).reshape([B, C, -1]).transpose(perm=[0, 2, 1])
+ x_ = self.norm(x_)
+ kv = (
+ self.kv(x_)
+ .reshape([B, -1, 2, self.num_heads, C // self.num_heads])
+ .transpose(perm=[2, 0, 3, 1, 4])
+ )
+ else:
+ kv = (
+ self.kv(x)
+ .reshape([B, -1, 2, self.num_heads, C // self.num_heads])
+ .transpose(perm=[2, 0, 3, 1, 4])
+ )
+ k, v = kv[0], kv[1]
+
+ attn = (q @ k.transpose(perm=[0, 1, 3, 2])) * self.scale
+ attn = self.softmax(attn)
+ attn = self.attn_drop(attn)
+
+ x = (attn @ v).transpose(perm=[0, 2, 1, 3]).reshape([B, N, C])
+ x = self.norm(x)
+ x = self.proj(x)
+ x = self.proj_drop(x)
+
+ return x
+
+
+class Block(nn.Layer):
+ def __init__(
+ self,
+ dim: int,
+ num_heads: int,
+ mlp_ratio: float = 4.0,
+        qkv_bias: Optional[bool] = None,
+        qk_scale: Optional[float] = None,
+        drop: float = 0.0,
+        attn_drop: float = 0.0,
+        drop_path: float = 0.0,
+        act_layer: nn.Layer = nn.GELU,
+        norm_layer: nn.Layer = nn.LayerNorm,
+        sr_ratio: int = 1,
+ ):
+ super().__init__()
+ self.norm1 = norm_layer(dim)
+ self.attn = Attention(
+ dim,
+ num_heads=num_heads,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ attn_drop=attn_drop,
+ proj_drop=drop,
+ sr_ratio=sr_ratio,
+ )
+        # NOTE: stochastic depth is not implemented; the drop_path argument is
+        # ignored and the residual branches use an identity mapping.
+        self.drop_path = nn.Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(
+ in_features=dim,
+ hidden_features=mlp_hidden_dim,
+ act_layer=act_layer,
+ drop=drop,
+ )
+
+ def forward(self, x, H, W):
+ x = x + self.drop_path(self.attn(self.norm1(x), H, W))
+ x = x + self.drop_path(self.mlp(self.norm2(x), H, W))
+
+ return x
+
+
+class Encoder(nn.Layer):
+ def __init__(self, C_in: int, C_hid: int, N_S: int):
+ super().__init__()
+ strides = stride_generator(N_S)
+
+ self.enc0 = ConvSC(C_in, C_hid, stride=strides[0])
+ self.enc1 = OverlapPatchEmbed(
+ img_size=256, patch_size=7, stride=4, in_chans=C_hid, embed_dim=C_hid
+ )
+ self.enc2 = Block(
+ dim=C_hid,
+ num_heads=1,
+ mlp_ratio=4,
+ qkv_bias=None,
+ qk_scale=None,
+ drop=0.0,
+ attn_drop=0.0,
+ drop_path=0.0,
+ norm_layer=nn.LayerNorm,
+ sr_ratio=8,
+ )
+ self.norm1 = nn.LayerNorm(C_hid)
+
+ def forward(self, x):
+ B = x.shape[0]
+ latent = []
+ x = self.enc0(x)
+ latent.append(x)
+ x, H, W = self.enc1(x)
+ x = self.enc2(x, H, W)
+ x = self.norm1(x)
+ x = x.reshape([B, H, W, -1]).transpose(perm=[0, 3, 1, 2]).contiguous()
+ latent.append(x)
+
+ return latent
+
+
+class MidXnet(nn.Layer):
+ def __init__(
+ self,
+ channel_in: int,
+ channel_hid: int,
+ N_T: int,
+ incep_ker: Tuple[int, ...] = (3, 5, 7, 11),
+ groups: int = 8,
+ ):
+ super().__init__()
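+        # NOTE: channel_hid, incep_ker and groups are kept for API compatibility
+        # but are not used; all blocks operate on channel_in directly.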
+
+ self.N_T = N_T
+ dpr = [x.item() for x in np.linspace(0, 0.1, N_T)]
+ enc_layers = []
+ for i in range(N_T):
+ enc_layers.append(
+ Block(
+ dim=channel_in,
+ num_heads=4,
+ mlp_ratio=4,
+ qkv_bias=None,
+ qk_scale=None,
+ drop=0.0,
+ attn_drop=0.0,
+ drop_path=dpr[i],
+ norm_layer=nn.LayerNorm,
+ sr_ratio=8,
+ )
+ )
+
+ self.enc = nn.Sequential(*enc_layers)
+
+ def forward(self, x):
+ B, T, C, H, W = x.shape
+ # B TC H W
+
+ x = x.reshape([B, T * C, H, W])
+ # B HW TC
+ x = x.flatten(2).transpose(perm=[0, 2, 1])
+
+ # encoder
+ z = x
+ for i in range(self.N_T):
+ z = self.enc[i](z, H, W)
+
+ return z
+
+
+# MultiDecoder
+class Decoder(nn.Layer):
+ def __init__(self, C_hid: int, C_out: int, N_S: int):
+ super().__init__()
+ strides = stride_generator(N_S, reverse=True)
+
+ self.dec = nn.Sequential(
+ *[ConvSC(C_hid, C_hid, stride=s, transpose=True) for s in strides[:-1]],
+ ConvSC(C_hid, C_hid, stride=strides[-1], transpose=True),
+ )
+ self.readout = nn.Conv2D(C_hid, C_out, 1)
+
+    def forward(self, hid, enc1=None):
+        # NOTE: `enc1` is reserved for an encoder skip connection but is currently unused.
+        for i in range(len(self.dec)):
+            hid = self.dec[i](hid)
+        Y = self.readout(hid)
+        return Y
+
+
+class Meteoformer(base.Arch):
+ """
+ Meteoformer is a class that represents a Spatial-Temporal Transformer model designed for short-to-medium-term weather prediction with multiple meteorological variables.
+
+ Args:
+ input_keys (Tuple[str, ...]): A tuple of input keys.
+ output_keys (Tuple[str, ...]): A tuple of output keys.
+ shape_in (Tuple[int, ...]): The shape of the input data (T, C, H, W), where
+ T is the number of time steps, C is the number of channels,
+ H and W are the spatial dimensions.
+ hid_S (int): The number of hidden channels in the spatial encoder.
+ hid_T (int): The number of hidden units in the temporal encoder.
+ N_S (int): The number of spatial transformer layers.
+ N_T (int): The number of temporal transformer layers.
+ incep_ker (Tuple[int, ...]): The kernel sizes used in the inception block.
+ groups (int): The number of groups for grouped convolutions.
+ num_classes (int): The number of predicted meteorological variables.
+
+ Examples:
+ >>> import paddle
+ >>> import ppsci
+ >>> model = ppsci.arch.Meteoformer(
+ ... input_keys=("input",),
+ ... output_keys=("output",),
+ ... shape_in=(6, 12, 192, 256),
+ ... hid_S=64,
+ ... hid_T=256,
+ ... N_S=4,
+ ... N_T=4,
+ ... incep_ker=(3, 5, 7, 11),
+ ... groups=8,
+ ... num_classes=4,
+ ... )
+ >>> input_dict = {"input": paddle.rand([8, 6, 12, 192, 256])}
+ >>> output_dict = model(input_dict)
+ >>> print(output_dict["output"].shape)
+    [8, 6, 4, 192, 256]
+ """
+
+ def __init__(
+ self,
+ input_keys: Tuple[str, ...],
+ output_keys: Tuple[str, ...],
+ shape_in: Tuple[int, ...],
+ hid_S: int = 64,
+ hid_T: int = 256,
+ N_S: int = 4,
+ N_T: int = 4,
+ incep_ker: Tuple[int, ...] = (3, 5, 7, 11),
+ groups: int = 8,
+ num_classes: int = 12,
+ ):
+ super().__init__()
+ self.input_keys = input_keys
+ self.output_keys = output_keys
+ self.num_classes = num_classes
+
+ T, C, H, W = shape_in
+ self.enc = Encoder(C, hid_S, N_S)
+ self.hid1 = MidXnet(T * hid_S, hid_T // 2, N_T, incep_ker, groups)
+ self.dec = Decoder(T * hid_S, T * self.num_classes, N_S)
+
+ def forward(self, x):
+ if self._input_transform is not None:
+ x = self._input_transform(x)
+
+ x = self.concat_to_tensor(x, self.input_keys)
+
+ B, T, C, H, W = x.shape
+ x = x.reshape([B * T, C, H, W])
+
+ # encoded
+ embed = self.enc(x)
+ _, C_4, H_4, W_4 = embed[-1].shape
+
+ # translator
+ z = embed[-1].reshape([B, T, C_4, H_4, W_4])
+ hid = self.hid1(z)
+ hid = hid.transpose(perm=[0, 2, 1]).reshape([B, -1, H_4, W_4])
+
+ # decoded
+ y = self.dec(hid, embed[0])
+ y = y.reshape([B, T, self.num_classes, H, W])
+
+ y = self.split_to_dict(y, self.output_keys)
+ if self._output_transform is not None:
+ y = self._output_transform(x, y)
+
+        return y
diff --git a/ppsci/data/dataset/__init__.py b/ppsci/data/dataset/__init__.py
index 8a56f3e22..bb286185d 100644
--- a/ppsci/data/dataset/__init__.py
+++ b/ppsci/data/dataset/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
+# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -38,6 +38,7 @@
from ppsci.data.dataset.era5_dataset import ERA5Dataset
from ppsci.data.dataset.era5_dataset import ERA5SampledDataset
from ppsci.data.dataset.era5climate_dataset import ERA5ClimateDataset
+from ppsci.data.dataset.era5meteo_dataset import ERA5MeteoDataset
from ppsci.data.dataset.era5sq_dataset import ERA5SQDataset
from ppsci.data.dataset.ext_moe_enso_dataset import ExtMoEENSODataset
from ppsci.data.dataset.fwi_dataset import FWIDataset
@@ -108,6 +109,7 @@
"STAFNetDataset",
"TMTDataset",
"register_to_dataset",
+ "ERA5MeteoDataset",
"ERA5ClimateDataset",
"LatentNODataset",
"LatentNODataset_time",
diff --git a/ppsci/data/dataset/era5meteo_dataset.py b/ppsci/data/dataset/era5meteo_dataset.py
new file mode 100644
index 000000000..7708bb7d1
--- /dev/null
+++ b/ppsci/data/dataset/era5meteo_dataset.py
@@ -0,0 +1,188 @@
+# Copyright (c) 2025 PaddlePaddle Authors. All Rights Reserved.
+
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+
+# http://www.apache.org/licenses/LICENSE-2.0
+
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import annotations
+
+import datetime
+import numbers
+import os
+import random
+from typing import Dict
+from typing import Optional
+from typing import Tuple
+
+import numpy as np
+import paddle
+
+try:
+ import xarray as xr
+except ModuleNotFoundError:
+ pass
+from paddle import io
+from paddle import vision
+
+
+class ERA5MeteoDataset(io.Dataset):
+ """ERA5 dataset for multi-meteorological-element prediction (r, t, u, v).
+
+ Args:
+ file_path (str): Dataset path (contains .npy files in year folders).
+ input_keys (Tuple[str, ...]): Input dict keys, e.g. ("input",).
+ label_keys (Tuple[str, ...]): Label dict keys, e.g. ("output",).
+ size (Tuple[int, int]): Crop size (height, width).
+ weight_dict (Optional[Dict[str, float]]): Weight dictionary. Defaults to None.
+ transforms (Optional[vision.Compose]): Optional transforms. Defaults to None.
+        training (bool): Whether in training mode (years 2016-2018); otherwise
+            validation mode (years 2016 and 2019). Defaults to True.
+        stride (int): Stride for sampling (currently unused). Defaults to 1.
+ sq_length (int): Sequence length for input and output. Defaults to 6.
+ """
+
+ batch_index: bool = False
+
+ def __init__(
+ self,
+ file_path: str,
+ input_keys: Tuple[str, ...],
+ label_keys: Tuple[str, ...],
+ size: Tuple[int, ...],
+ weight_dict: Optional[Dict[str, float]] = None,
+ transforms: Optional[vision.Compose] = None,
+ training: bool = True,
+ stride: int = 1,
+ sq_length: int = 6,
+ ):
+ super().__init__()
+ self.file_path = file_path
+ self.input_keys = input_keys
+ self.label_keys = label_keys
+ self.size = size
+ self.training = training
+ self.sq_length = sq_length
+ self.transforms = transforms
+ self.stride = stride
+
+ mean_file_path = os.path.join(self.file_path, "mean.nc")
+ std_file_path = os.path.join(self.file_path, "std.nc")
+
+ mean_ds = xr.open_dataset(mean_file_path)
+ std_ds = xr.open_dataset(std_file_path)
+
+ self.mean = mean_ds["mean"].values.reshape(-1, 1, 1)
+ self.std = std_ds["std"].values.reshape(-1, 1, 1)
+
+        self.weight_dict = {}
+        if weight_dict is not None:
+            self.weight_dict = {key: 1.0 for key in self.label_keys}
+            self.weight_dict.update(weight_dict)
+
+ self.time_table = self._build_time_table()
+
+ def _build_time_table(self):
+ """Build datetime list from available .npy files, filtered by years."""
+ years = sorted([y for y in os.listdir(self.file_path) if y.isdigit()])
+
+ if self.training:
+ target_years = {"2016", "2017", "2018"}
+ else:
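+            # 2016 is kept here as well so that the released ERA5_201601
+            # evaluation subset can be used directly for evaluation.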
+ target_years = {"2016", "2019"}
+
+ time_list = []
+ for y in years:
+ if y not in target_years:
+ continue
+ year_dir = os.path.join(self.file_path, y)
+ files = sorted(os.listdir(year_dir))
+ for fname in files:
+ if fname.startswith("r_") and fname.endswith(".npy"):
+ dt_str = fname[2:12] # YYYYMMDDHH
+ dt = datetime.datetime.strptime(dt_str, "%Y%m%d%H")
+ time_list.append(dt)
+
+ return sorted(time_list)
+
+ def __len__(self):
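+        # Each sample consumes sq_length input frames plus sq_length label frames.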
+ return len(self.time_table) - self.sq_length * 2 + 1
+
+ def __getitem__(self, global_idx):
+ x_list, y_list = [], []
+
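+        # The first sq_length frames form the input window; the following
+        # sq_length frames form the label window.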
+ for m in range(self.sq_length):
+ x_list.append(self.load_data(global_idx + m))
+
+ for n in range(self.sq_length):
+ y_list.append(self.load_data(global_idx + self.sq_length + n))
+
+ x = np.stack(x_list, axis=0)
+ y = np.stack(y_list, axis=0)
+
+ # Normalize
+ x = (x - self.mean) / self.std
+ y = (y - self.mean) / self.std
+
+ x, y = self._random_crop(x, y)
+
+ input_item = {self.input_keys[0]: x}
+ label_item = {self.label_keys[0]: y}
+
+ weight_shape = [1] * len(next(iter(label_item.values())).shape)
+ weight_item = {
+ key: np.full(weight_shape, value, paddle.get_default_dtype())
+ for key, value in self.weight_dict.items()
+ }
+
+ if self.transforms is not None:
+ input_item, label_item, weight_item = self.transforms(
+ input_item, label_item, weight_item
+ )
+
+ return input_item, label_item, weight_item
+
+ def load_data(self, indices):
+ """Load r, t, u, v for a given index."""
+ dt = self.time_table[indices]
+ year = f"{dt.year:04d}"
+ mon = f"{dt.month:02d}"
+ day = f"{dt.day:02d}"
+ hour = f"{dt.hour:02d}"
+
+ r_data = np.load(
+ os.path.join(self.file_path, year, f"r_{year}{mon}{day}{hour}.npy")
+ )
+ t_data = np.load(
+ os.path.join(self.file_path, year, f"t_{year}{mon}{day}{hour}.npy")
+ )
+ u_data = np.load(
+ os.path.join(self.file_path, year, f"u_{year}{mon}{day}{hour}.npy")
+ )
+ v_data = np.load(
+ os.path.join(self.file_path, year, f"v_{year}{mon}{day}{hour}.npy")
+ )
+
+ data = np.concatenate([r_data, t_data, u_data, v_data])
+ return data
+
+ def _random_crop(self, x, y):
+ if isinstance(self.size, numbers.Number):
+ self.size = (int(self.size), int(self.size))
+
+ th, tw = self.size
+ h, w = y.shape[-2], y.shape[-1]
+
+ x1 = random.randint(0, w - tw)
+ y1 = random.randint(0, h - th)
+
+ x_cropped = x[..., y1 : y1 + th, x1 : x1 + tw]
+ y_cropped = y[..., y1 : y1 + th, x1 : x1 + tw]
+
+ return x_cropped, y_cropped