09.18.2024
Apparently this is also called a Gradient Penalty.
Applying a gradient penalty in GAN training (especially in WGAN-GP) helps stabilize the training process by enforcing the Lipschitz continuity condition. This regularizes the discriminator’s gradients, ensuring that they remain well-behaved, which in turn helps the generator receive more meaningful updates. It addresses critical issues like mode collapse, vanishing/exploding gradients, and the problems with weight clipping in GANs.
This explanation is far more intuitive than the original write-up. The term kept coming up while reading 3D-aware GAN papers.
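For reference, here is a minimal PyTorch sketch of the WGAN-GP penalty term, which penalizes the squared deviation of the critic's input-gradient norm from 1. The names `critic`, `real`, `fake`, and `gradient_penalty` are placeholders, and image-shaped (B, C, H, W) inputs are assumed:

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP penalty: E[(||grad_xhat critic(xhat)||_2 - 1)^2]."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample, broadcast over (C, H, W).
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    # Random points on the lines between real and generated samples;
    # detach fake so the interpolation is a leaf tensor for autograd.
    interp = eps * real + (1.0 - eps) * fake.detach()
    interp.requires_grad_(True)
    scores = critic(interp)
    # Gradients of the critic scores w.r.t. the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself is differentiable
    )[0]
    grads = grads.reshape(batch_size, -1)
    # Penalize any deviation of the gradient norm from 1 (Lipschitz constraint).
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

In the critic update this would be added to the Wasserstein loss, e.g. `d_loss = fake_scores.mean() - real_scores.mean() + lambda_gp * gradient_penalty(critic, real, fake)`, with `lambda_gp = 10` as in the WGAN-GP paper.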