R1 regularization

홍돌 2024. 2. 27. 12:05

09.18.2024

Apparently this is also called a Gradient Penalty.

 

Applying a gradient penalty in GAN training (especially in WGAN-GP) helps stabilize the training process by enforcing the Lipschitz continuity condition. This regularizes the discriminator’s gradients, ensuring that they remain well-behaved, which in turn helps the generator receive more meaningful updates. It addresses critical issues like mode collapse, vanishing/exploding gradients, and the problems with weight clipping in GANs.

From the AMP paper (SIGGRAPH 2021).
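As a minimal numeric sketch of the WGAN-GP-style penalty described above (assuming a toy linear critic D(x) = w·x, which is not from the paper — its input gradient is simply the constant w, so the penalty is easy to check by hand):

```python
import numpy as np

# WGAN-GP penalizes (||grad_x D(x_hat)|| - 1)^2 at points x_hat
# interpolated between real and fake samples, enforcing 1-Lipschitz-ness.
rng = np.random.default_rng(0)
w = np.array([0.6, 0.8])  # toy critic weights; grad_x D = w everywhere

def critic_grad(x):
    """Input gradient of the linear critic D(x) = w.x (constant for linear D)."""
    return np.broadcast_to(w, x.shape)

real = rng.normal(size=(4, 2))
fake = rng.normal(size=(4, 2))
eps = rng.uniform(size=(4, 1))
x_hat = eps * real + (1.0 - eps) * fake   # random interpolates

grad_norms = np.linalg.norm(critic_grad(x_hat), axis=1)
gp = np.mean((grad_norms - 1.0) ** 2)     # pulls ||grad|| toward 1, not 0
print(gp)  # ~0.0 here, since ||w|| = sqrt(0.36 + 0.64) = 1 (up to float rounding)
```

In a real GAN the gradient would come from autograd with a second-differentiable graph, not from a closed form; the sketch only shows where the penalty sits in the loss.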

 


The answer there is much more intuitive than the original article. This term kept coming up while I was reading 3D-aware GAN papers.

https://ai.stackexchange.com/questions/25458/can-someone-explain-r1-regularization-function-in-simple-terms

 

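The point the linked answer makes can be sketched with the same kind of toy linear critic D(x) = w·x (an assumption for illustration, not code from the answer): R1 penalizes the squared gradient norm of the discriminator only on real samples, pushing it toward 0 rather than toward 1 as WGAN-GP does, and needs no interpolated points.

```python
import numpy as np

# R1 regularization:
#   R1 = (gamma / 2) * E_{x ~ p_data} [ ||grad_x D(x)||^2 ]
# evaluated on REAL samples only.
rng = np.random.default_rng(0)
w = np.array([0.6, 0.8])  # toy critic weights; grad_x D = w everywhere
gamma = 10.0              # common default scale for the R1 term

def critic_grad(x):
    """Input gradient of the linear critic D(x) = w.x."""
    return np.broadcast_to(w, x.shape)

real = rng.normal(size=(4, 2))
sq_norms = np.sum(critic_grad(real) ** 2, axis=1)  # ||grad||^2 per real sample
r1 = 0.5 * gamma * np.mean(sq_norms)
print(r1)  # ~5.0, since ||w||^2 = 1 and gamma = 10
```

Because the penalty targets zero gradient on the data manifold, it only vanishes once the discriminator stops producing gradients there, which is exactly the stabilizing behavior the answer describes.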
