What is VLM?
A VLM (Vision Language Model) is a model with a multi-modal architecture that learns to associate information from the image and text modalities. The focus of this multi-modal learning is to pre-train a model on vision-and-language tasks and improve performance on downstream tasks such as VQA (Visual Question Answering).

Why VLM? What are the Use Cases?

1. Image Search and Retrieval
2. Rob..
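The image-text association described above can be illustrated with a minimal sketch of CLIP-style contrastive retrieval. The embeddings below are toy hand-written vectors, not outputs of a real model; in practice they would come from trained vision and text encoders projected into a shared space. Function and variable names (`best_caption`, `image_emb`, `text_embs`) are illustrative, not from any specific library.

```python
# Toy sketch of image-text association in a shared embedding space.
# A VLM retrieves the caption whose embedding is closest to the image's.
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical encoder outputs (toy values, for illustration only).
image_emb = [0.9, 0.1, 0.2]  # stand-in for an image of a dog
text_embs = {
    "a dog playing fetch": [0.8, 0.2, 0.1],
    "a plate of pasta":    [0.1, 0.9, 0.3],
}

def best_caption(image_vec, candidates):
    # Retrieval: pick the caption most similar to the image embedding.
    return max(candidates, key=lambda t: cosine(image_vec, candidates[t]))

print(best_caption(image_emb, text_embs))
```

The same similarity score drives both use cases mentioned above: ranking images for a text query (image search) or ranking texts for an image (captioning/retrieval).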
Category: Research
2024. 5. 9. 07:00