
Denoising Diffusion Is All You Need


From the Graph Machine Learning Telegram group


The breakthrough in Denoising Diffusion Probabilistic Models (DDPMs) happened about two years ago. Since then, we have observed dramatic improvements in generation tasks: GLIDE, DALL-E 2, and the recent Imagen for images, Diffusion-LM for language modeling, diffusion for video sequences, and even diffusion for reinforcement learning.


Diffusion might be the biggest trend in GraphML in 2022, particularly when applied to drug discovery, molecule and conformer generation, and quantum chemistry in general. Diffusion models are often paired with the latest advancements in equivariant GNNs. Recent cool works you'll want to take a look at include:


- Equivariant Diffusion for Molecule Generation in 3D (Hoogeboom et al., ICML 2022)

- Generative Coarse-Graining of Molecular Conformations (Wang et al., ICML 2022)

- GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation (Xu et al., ICLR 2022)

- Torsional Diffusion for Molecular Conformer Generation (Jing and Corso et al., 2022)

Where can you learn more about DDPMs and their (quite advanced) mathematics? Luckily, there is a good batch of new educational blog posts with step-by-step illustrations of the diffusion process and its implementation; try them out (a short code sketch of the core idea follows the list below).

- The Annotated Diffusion Model by Niels Rogge and Kashif Rasul (HuggingFace)

- Improving Diffusion Models as an Alternative To GANs by Arash Vahdat and Karsten Kreis (NVIDIA)

- What are Diffusion Models by Lilian Weng (OpenAI)
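To get a concrete feel for what those posts walk through, here is a minimal NumPy sketch of the DDPM forward (noising) process and the simplified training objective from Ho et al. (2020). The schedule values and names (`betas`, `alpha_bar`, `q_sample`, `eps_model`) are illustrative assumptions for this sketch, not taken from any of the tutorials or libraries above.

```python
# Minimal DDPM sketch: closed-form forward process and simplified loss.
# All names and constants here are illustrative, not from a specific library.
import numpy as np

T = 1000                               # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule (Ho et al., 2020)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)         # \bar{alpha}_t = prod of alphas up to step t

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

def simple_loss(eps_model, x0):
    """Simplified DDPM objective: predict the added noise with an MSE loss."""
    t = np.random.randint(0, T)        # random timestep for this training example
    noise = np.random.randn(*x0.shape)
    x_t = q_sample(x0, t, noise)
    eps_pred = eps_model(x_t, t)       # eps_model stands in for a neural network
    return np.mean((eps_pred - noise) ** 2)

# Toy usage with a dummy "model" that always predicts zero noise:
x0 = np.random.randn(8, 32)
print(simple_loss(lambda x, t: np.zeros_like(x), x0))
```

In practice, `eps_model` would be a neural network (a U-Net for images, or an equivariant GNN for molecules, as in the papers above), and generation runs the learned reverse process step by step from pure noise.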

This article is reposted from 深度学习与图网络 (Deep Learning and Graph Networks).
