What is a VLM (Vision Language Model)?
A VLM is a model with a multi-modal architecture that learns to associate information from the image and text modalities. The focus of multi-modal learning is to pre-train a model on vision-and-language tasks and thereby improve performance on downstream tasks such as VQA (Visual Question Answering).

Why VLM? What are the Use Cases?
1. Image Search and Retrieval
2. Rob..
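To make the idea of associating image and text modalities concrete, here is a minimal sketch of the dual-encoder (CLIP-style) setup used by many VLMs: an image encoder and a text encoder map their inputs into a shared embedding space, and cosine similarity identifies matching image-text pairs. Random vectors stand in for real encoder outputs, so the numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    """L2-normalize embeddings so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Placeholder embeddings standing in for an image encoder and a text
# encoder (embedding dim = 8); in a real VLM these come from trained networks.
image_embeds = normalize(rng.normal(size=(3, 8)))   # 3 images
text_embeds = normalize(rng.normal(size=(3, 8)))    # 3 captions

# Pairwise cosine similarity matrix; contrastive pre-training pushes the
# diagonal (matched image-caption pairs) up and the off-diagonal down.
similarity = image_embeds @ text_embeds.T           # shape (3, 3)

# Image-to-text retrieval: for each image, pick the best-matching caption.
best_caption = similarity.argmax(axis=1)
print(similarity.shape, best_caption)
```

This shared-embedding view is also what enables the image search and retrieval use case: a text query is embedded once and compared against a database of pre-computed image embeddings.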
Research
2024. 5. 9. 07:00