From 49434b62dcdc6e96786a0477d3d23bcc719cf7e7 Mon Sep 17 00:00:00 2001
From: Yihang Qiu <78300377+GihhArwtw@users.noreply.github.com>
Date: Sat, 15 Mar 2025 18:42:26 +0800
Subject: [PATCH 1/3] AD projects updated

---
 README.md | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/README.md b/README.md
index 126fd48..331fd00 100644
--- a/README.md
+++ b/README.md
@@ -618,17 +618,23 @@ CS231n (Stanford computer vision course): [website](https://cs231n.stanford.edu/s
 
 1. 3D/4D Scene Reconstruction
 
-* Classic papers: NSG, MARS, StreetGaussians
-  * https://openaccess.thecvf.com/content/CVPR2021/html/Ost_Neural_Scene_Graphs_for_Dynamic_Scenes_CVPR_2021_paper.html
-  * https://arxiv.org/abs/2307.15058
-  * https://arxiv.org/abs/2401.01339
+* Classic works: NSG, MARS, StreetGaussians, OmniRe
+  * NSG: CVPR 2021, [github](https://github.com/princeton-computational-imaging/neural-scene-graphs), [arxiv](https://arxiv.org/abs/2011.10379), [paper](https://openaccess.thecvf.com/content/CVPR2021/html/Ost_Neural_Scene_Graphs_for_Dynamic_Scenes_CVPR_2021_paper.html)
+  * MARS: [project page](https://open-air-sun.github.io/mars/), [arxiv](https://arxiv.org/abs/2307.15058)
+  * StreetGaussians: [github](https://github.com/zju3dv/street_gaussians), [arxiv](https://arxiv.org/abs/2401.01339)
+  * OmniRe: ICLR 2025 Spotlight, [demo page](https://ziyc.github.io/omnire), [github](https://github.com/ziyc/drivestudio), [arxiv](https://arxiv.org/abs/2408.16760)
 
 2. Controllable Scene Generation (World Models)
 
-* Classic papers: MagicDrive -> MagicDriveDiT, SCP-Diff, UniScene
-  * https://arxiv.org/abs/2411.13807
-  * https://arxiv.org/abs/2403.09638
-  * https://arxiv.org/abs/2412.05435
+* Classic works: GAIA-1, GenAD (OpenDV dataset), Vista, SCP-Diff, MagicDrive -> MagicDriveDiT, UniScene, VaVAM
+  * GAIA-1: [demo page](https://wayve.ai/thinking/introducing-gaia1/), [arxiv](https://arxiv.org/abs/2309.17080)
+  * GenAD: CVPR 2024 Highlight, OpenDV dataset, [github](https://github.com/OpenDriveLab/DriveAGI?tab=readme-ov-file#opendv), [arxiv](https://arxiv.org/abs/2403.09630)
+  * Vista: NeurIPS 2024, [demo page](https://opendrivelab.com/Vista), [github](https://github.com/OpenDriveLab/Vista), [arxiv](https://arxiv.org/abs/2405.17398)
+  * SCP-Diff: [demo page](https://air-discover.github.io/SCP-Diff/), [github](https://github.com/AIR-DISCOVER/SCP-Diff-Toolkit), [arxiv](https://arxiv.org/abs/2403.09638)
+  * MagicDrive -> MagicDriveDiT: [demo page](https://gaoruiyuan.com/magicdrive-v2/), [arxiv](https://arxiv.org/abs/2411.13807)
+  * UniScene: CVPR 2025, [demo page](https://arlo0o.github.io/uniscene/), [arxiv](https://arxiv.org/abs/2412.05435)
+  * VaVAM: [github](https://github.com/valeoai/VideoActionModel)
+
 
 #### Policy: Autonomous Driving Policy
 
@@ -643,14 +649,14 @@ CS231n (Stanford computer vision course): [website](https://cs231n.stanford.edu/s
 
 [Li Auto's end-to-end + VLM dual system](https://www.sohu.com/a/801987742_258768)
 
 * Classic fast-system papers: UniAD (CVPR 2023 Best Paper), VAD, SparseDrive, DiffusionDrive
-  * https://arxiv.org/abs/2212.10156
-  * https://arxiv.org/abs/2303.12077
-  * https://arxiv.org/abs/2405.19620
-  * https://arxiv.org/abs/2411.15139
+  * UniAD: CVPR 2023 Best Paper, [github](https://github.com/OpenDriveLab/UniAD), [arxiv](https://arxiv.org/abs/2212.10156)
+  * VAD: ICCV 2023, [github](https://github.com/hustvl/VAD), [arxiv](https://arxiv.org/abs/2303.12077)
+  * SparseDrive: [github](https://github.com/swc-17/SparseDrive), [arxiv](https://arxiv.org/abs/2405.19620)
+  * DiffusionDrive: CVPR 2025, [github](https://github.com/hustvl/DiffusionDrive), [arxiv](https://arxiv.org/abs/2411.15139)
 * On the scale-up properties of fast systems: https://arxiv.org/pdf/2412.02689
 * Classic slow-system papers: DriveVLM, EMMA
-  * https://arxiv.org/abs/2402.12289
-  * https://arxiv.org/abs/2410.23262
+  * DriveVLM: CoRL 2024, [arxiv](https://arxiv.org/abs/2402.12289)
+  * EMMA: [arxiv](https://arxiv.org/abs/2410.23262)
 
 - **[Open-EMMA](https://github.com/taco-group/OpenEMMA)** is an open-source implementation of EMMA that provides an end-to-end framework for autonomous-vehicle motion planning.

From 827469921bbc534b49a073ef57d6318a9041a6f4 Mon Sep 17 00:00:00 2001
From: Yihang Qiu <78300377+GihhArwtw@users.noreply.github.com>
Date: Sat, 15 Mar 2025 18:43:52 +0800
Subject: [PATCH 2/3] format update

---
 README.md | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/README.md b/README.md
index 331fd00..cf54777 100644
--- a/README.md
+++ b/README.md
@@ -619,21 +619,21 @@ CS231n (Stanford computer vision course): [website](https://cs231n.stanford.edu/s
 1. 3D/4D Scene Reconstruction
 
 * Classic works: NSG, MARS, StreetGaussians, OmniRe
-  * NSG: CVPR 2021, [github](https://github.com/princeton-computational-imaging/neural-scene-graphs), [arxiv](https://arxiv.org/abs/2011.10379), [paper](https://openaccess.thecvf.com/content/CVPR2021/html/Ost_Neural_Scene_Graphs_for_Dynamic_Scenes_CVPR_2021_paper.html)
-  * MARS: [project page](https://open-air-sun.github.io/mars/), [arxiv](https://arxiv.org/abs/2307.15058)
-  * StreetGaussians: [github](https://github.com/zju3dv/street_gaussians), [arxiv](https://arxiv.org/abs/2401.01339)
-  * OmniRe: ICLR 2025 Spotlight, [demo page](https://ziyc.github.io/omnire), [github](https://github.com/ziyc/drivestudio), [arxiv](https://arxiv.org/abs/2408.16760)
+  * **NSG**: CVPR 2021, [github](https://github.com/princeton-computational-imaging/neural-scene-graphs), [arxiv](https://arxiv.org/abs/2011.10379), [paper](https://openaccess.thecvf.com/content/CVPR2021/html/Ost_Neural_Scene_Graphs_for_Dynamic_Scenes_CVPR_2021_paper.html)
+  * **MARS**: [project page](https://open-air-sun.github.io/mars/), [arxiv](https://arxiv.org/abs/2307.15058)
+  * **StreetGaussians**: [github](https://github.com/zju3dv/street_gaussians), [arxiv](https://arxiv.org/abs/2401.01339)
+  * **OmniRe**: ICLR 2025 Spotlight, [demo page](https://ziyc.github.io/omnire), [github](https://github.com/ziyc/drivestudio), [arxiv](https://arxiv.org/abs/2408.16760)
 
 2. Controllable Scene Generation (World Models)
 
 * Classic works: GAIA-1, GenAD (OpenDV dataset), Vista, SCP-Diff, MagicDrive -> MagicDriveDiT, UniScene, VaVAM
-  * GAIA-1: [demo page](https://wayve.ai/thinking/introducing-gaia1/), [arxiv](https://arxiv.org/abs/2309.17080)
-  * GenAD: CVPR 2024 Highlight, OpenDV dataset, [github](https://github.com/OpenDriveLab/DriveAGI?tab=readme-ov-file#opendv), [arxiv](https://arxiv.org/abs/2403.09630)
-  * Vista: NeurIPS 2024, [demo page](https://opendrivelab.com/Vista), [github](https://github.com/OpenDriveLab/Vista), [arxiv](https://arxiv.org/abs/2405.17398)
-  * SCP-Diff: [demo page](https://air-discover.github.io/SCP-Diff/), [github](https://github.com/AIR-DISCOVER/SCP-Diff-Toolkit), [arxiv](https://arxiv.org/abs/2403.09638)
-  * MagicDrive -> MagicDriveDiT: [demo page](https://gaoruiyuan.com/magicdrive-v2/), [arxiv](https://arxiv.org/abs/2411.13807)
-  * UniScene: CVPR 2025, [demo page](https://arlo0o.github.io/uniscene/), [arxiv](https://arxiv.org/abs/2412.05435)
-  * VaVAM: [github](https://github.com/valeoai/VideoActionModel)
+  * **GAIA-1**: [demo page](https://wayve.ai/thinking/introducing-gaia1/), [arxiv](https://arxiv.org/abs/2309.17080)
+  * **GenAD**: CVPR 2024 Highlight, OpenDV dataset, [github](https://github.com/OpenDriveLab/DriveAGI?tab=readme-ov-file#opendv), [arxiv](https://arxiv.org/abs/2403.09630)
+  * **Vista**: NeurIPS 2024, [demo page](https://opendrivelab.com/Vista), [github](https://github.com/OpenDriveLab/Vista), [arxiv](https://arxiv.org/abs/2405.17398)
+  * **SCP-Diff**: [demo page](https://air-discover.github.io/SCP-Diff/), [github](https://github.com/AIR-DISCOVER/SCP-Diff-Toolkit), [arxiv](https://arxiv.org/abs/2403.09638)
+  * **MagicDrive** -> MagicDriveDiT: [demo page](https://gaoruiyuan.com/magicdrive-v2/), [arxiv](https://arxiv.org/abs/2411.13807)
+  * **UniScene**: CVPR 2025, [demo page](https://arlo0o.github.io/uniscene/), [arxiv](https://arxiv.org/abs/2412.05435)
+  * **VaVAM**: [github](https://github.com/valeoai/VideoActionModel)
 
 
 #### Policy: Autonomous Driving Policy
 
@@ -649,14 +649,14 @@ CS231n (Stanford computer vision course): [website](https://cs231n.stanford.edu/s
 
 [Li Auto's end-to-end + VLM dual system](https://www.sohu.com/a/801987742_258768)
 
 * Classic fast-system papers: UniAD (CVPR 2023 Best Paper), VAD, SparseDrive, DiffusionDrive
-  * UniAD: CVPR 2023 Best Paper, [github](https://github.com/OpenDriveLab/UniAD), [arxiv](https://arxiv.org/abs/2212.10156)
-  * VAD: ICCV 2023, [github](https://github.com/hustvl/VAD), [arxiv](https://arxiv.org/abs/2303.12077)
-  * SparseDrive: [github](https://github.com/swc-17/SparseDrive), [arxiv](https://arxiv.org/abs/2405.19620)
-  * DiffusionDrive: CVPR 2025, [github](https://github.com/hustvl/DiffusionDrive), [arxiv](https://arxiv.org/abs/2411.15139)
+  * **UniAD**: CVPR 2023 Best Paper, [github](https://github.com/OpenDriveLab/UniAD), [arxiv](https://arxiv.org/abs/2212.10156)
+  * **VAD**: ICCV 2023, [github](https://github.com/hustvl/VAD), [arxiv](https://arxiv.org/abs/2303.12077)
+  * **SparseDrive**: [github](https://github.com/swc-17/SparseDrive), [arxiv](https://arxiv.org/abs/2405.19620)
+  * **DiffusionDrive**: CVPR 2025, [github](https://github.com/hustvl/DiffusionDrive), [arxiv](https://arxiv.org/abs/2411.15139)
 * On the scale-up properties of fast systems: https://arxiv.org/pdf/2412.02689
 * Classic slow-system papers: DriveVLM, EMMA
-  * DriveVLM: CoRL 2024, [arxiv](https://arxiv.org/abs/2402.12289)
-  * EMMA: [arxiv](https://arxiv.org/abs/2410.23262)
+  * **DriveVLM**: CoRL 2024, [arxiv](https://arxiv.org/abs/2402.12289)
+  * **EMMA**: [arxiv](https://arxiv.org/abs/2410.23262)
 
 - **[Open-EMMA](https://github.com/taco-group/OpenEMMA)** is an open-source implementation of EMMA that provides an end-to-end framework for autonomous-vehicle motion planning.

From a35de6179669c5d56d32274581b454f5d6362551 Mon Sep 17 00:00:00 2001
From: Yihang Qiu <78300377+GihhArwtw@users.noreply.github.com>
Date: Sat, 15 Mar 2025 18:48:08 +0800
Subject: [PATCH 3/3] Update contributors

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index cf54777..bb17cae 100644
--- a/README.md
+++ b/README.md
@@ -102,7 +102,7 @@
 
 ## About us
 
 We are a team of embodied-AI beginners. We hope our own learning experience can help those who come after us and speed up the adoption of embodied intelligence. More friends are welcome to join the project, and we also welcome new friendships and academic collaborations. For any questions, contact `chentianxing2002@gmail.com`.
 
-🦉Contributors: 陈天行 (深大BS), 王开炫 (25' 港大PhD), 贾越如 (北大Ms), 姚天亮 (25' 港中文PhD), 高焕昂 (清华PhD), 高宁 (西交BS), 郭常青 (清华Ms), 彭时佳 (深大BS), 邹誉德 (25' 上交AILab联培PhD), 陈思翔 (25' 北大PhD), 朱宇飞 (25' 上科大Ms), 韩翊飞 (清华Ms), 王文灏 (宾大Ms), 李卓恒 (港大PhD), 梁升一 (港科广PhD), 林俊晓 (浙大Ms), 王冠锟 (港中文PhD), 吴志杰 (港中文PhD), 叶雯 (25' 中科院PhD), 陈攒鑫 (深大BS), 侯博涵 (山大BS), 江恒乐 (25' 南科大PhD), 胡梦康 (港大PhD), 梁志烜 (港大PhD), 穆尧 (上交AP).
+🦉Contributors: 陈天行 (深大BS), 王开炫 (25' 港大PhD), 贾越如 (北大Ms), 姚天亮 (25' 港中文PhD), 高焕昂 (清华PhD), 高宁 (西交BS), 郭常青 (清华Ms), 彭时佳 (深大BS), 邹誉德 (25' 上交AILab联培PhD), 陈思翔 (25' 北大PhD), 朱宇飞 (25' 上科大Ms), 韩翊飞 (清华Ms), 王文灏 (宾大Ms), 李卓恒 (港大PhD), 邱一航 (港大PhD), 梁升一 (港科广PhD), 林俊晓 (浙大Ms), 王冠锟 (港中文PhD), 吴志杰 (港中文PhD), 叶雯 (25' 中科院PhD), 陈攒鑫 (深大BS), 侯博涵 (山大BS), 江恒乐 (25' 南科大PhD), 胡梦康 (港大PhD), 梁志烜 (港大PhD), 穆尧 (上交AP).