What is a VLM (Vision Language Model)? A VLM is a model with a multi-modal architecture that learns to associate information from the image and text modalities. The focus of multi-modal learning is to pre-train a model on vision-and-language tasks and improve its performance on downstream tasks such as VQA (Visual Question Answering). Why VLM? What are the use cases? 1. Image Search and Retrieval / 2. Rob..
Research (research-related)
2024. 5. 9. 07:00
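As a quick illustration of the VQA downstream task mentioned in the excerpt, here is a minimal sketch that runs an off-the-shelf pre-trained vision-language model through HuggingFace Transformers. The specific checkpoint (ViLT fine-tuned on VQAv2), the demo image URL, and the question are my own illustrative choices; the post does not prescribe a particular model.

```python
# A minimal VQA sketch with a pre-trained vision-language model.
# Assumes the `transformers`, `Pillow`, and `requests` packages; the ViLT
# checkpoint below is an illustrative choice, not one named in the post.
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Load a model pre-trained on image-text pairs and fine-tuned for VQA.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Any RGB image works; this COCO validation image is a common demo choice.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# The processor fuses both modalities into a single set of model inputs.
inputs = processor(image, question, return_tensors="pt")
logits = model(**inputs).logits

# This model frames VQA as classification over a fixed answer vocabulary.
answer = model.config.id2label[logits.argmax(-1).item()]
print("Predicted answer:", answer)
```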