What is a VLM (Vision Language Model)?

A VLM is a model with a multi-modal architecture that learns to associate information from the image and text modalities. The focus of multi-modal learning is to pre-train a model on vision-and-language tasks and improve downstream task performance, such as VQA (Visual Question Answering).

Why VLM? What are the use cases?

1. Image search and retrieval
2. Rob..
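The image search use case can be sketched with a minimal, CLIP-style contrastive setup: two encoders project images and text into a shared embedding space, and retrieval ranks images by cosine similarity to a text query. The sketch below is illustrative only — the encoders are replaced by random linear projections standing in for real vision and text backbones, and all dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a VLM's two encoders: in a real model,
# these would be a vision backbone and a text backbone projecting
# into a shared embedding space (all sizes here are arbitrary).
D_IMG, D_TXT, D_EMB = 512, 256, 64
W_img = rng.standard_normal((D_IMG, D_EMB))
W_txt = rng.standard_normal((D_TXT, D_EMB))

def embed(x, W):
    """Project features into the shared space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy "image" and "text" features (placeholders for encoder outputs).
image_feats = rng.standard_normal((3, D_IMG))
text_feats = rng.standard_normal((2, D_TXT))

# Cosine similarity between every text query and every image.
# Image search retrieves, per query, the highest-scoring image.
sim = embed(text_feats, W_txt) @ embed(image_feats, W_img).T  # (2, 3)
best = sim.argmax(axis=1)  # index of the best-matching image per query
```

In an actual VLM the two projections are trained jointly with a contrastive loss so that matching image-text pairs score high and mismatched pairs score low; the retrieval step itself is exactly this similarity-then-argmax.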
Research
2024. 5. 9. 07:00