Key technical innovation: separate streams for vision and language processing that communicate through co-attentional transformer layers. Why two streams? The two modalities may need different depths of processing, and co-attention lets each stream query the other without forcing a shared representation too early.
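The co-attention idea above can be sketched in a few lines: in a co-attentional layer, the queries come from one stream while the keys and values come from the other (and symmetrically in the opposite direction). A minimal single-head numpy sketch, with all names and dimensions my own:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_block(lang, vis, Wq, Wk, Wv):
    """One co-attentional step: the language stream attends to the visual stream.

    lang: (L, d) language hidden states; vis: (V, d) visual hidden states.
    Queries come from `lang`, keys/values from `vis` -- the defining move of a
    co-attentional transformer layer (the symmetric vis->lang step is analogous).
    """
    q = lang @ Wq                                      # (L, d)
    k = vis @ Wk                                       # (V, d)
    v = vis @ Wv                                       # (V, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))     # (L, V) attention weights
    return attn @ v                                    # language tokens now mix in visual features

rng = np.random.default_rng(0)
d = 8
lang = rng.standard_normal((5, d))                 # 5 text tokens
vis = rng.standard_normal((3, d))                  # 3 image regions
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = co_attention_block(lang, vis, Wq, Wk, Wv)
print(out.shape)                                   # (5, 8): one updated vector per text token
```

A full layer would add multi-head projections, residual connections, and feed-forward sublayers in each stream; this sketch only shows the cross-stream attention itself.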
VisualBERT: A Simple and Performant Baseline for Vision and Language. Two visually-grounded language-model objectives for pre-training: (1) part of the text is masked and the model learns to predict the masked words from the remaining text and the visual context; (2) the model is trained to determine whether the provided text matches the image. The authors show that such pre-training on image-caption data is important for VisualBERT to learn transferable text and visual representations. They conduct comprehensive experiments on four vision-and-language tasks: VQA, VCR, NLVR2, and region-to-phrase grounding.
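Objective (2), image-text matching, boils down to binary classification: score a (image, caption) pair, where negatives are typically built by pairing an image with a caption from a different image. A minimal sketch of that loss, with the function name and scalar setup my own:

```python
import numpy as np

def itm_loss(logit, label):
    """Binary cross-entropy for image-text matching.

    `logit` is the model's raw match score for an (image, text) pair;
    `label` is 1 for the true caption, 0 for a caption swapped in from
    another image. (Name and scalar form are illustrative.)
    """
    p = 1.0 / (1.0 + np.exp(-logit))                       # sigmoid -> match probability
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

# A confident, correct prediction costs little; the same confidence on a
# mismatched pair costs a lot:
print(itm_loss(3.0, 1))   # small loss
print(itm_loss(3.0, 0))   # large loss
```

In practice both objectives are trained jointly on image-caption data, with the masked-word loss computed token-wise and this matching loss computed per pair.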
The second work is over.
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-Modal Pre-Training
approach
The third work is over.
LXMERT: Learning Cross-Modality Encoder Representations from Transformers. It consists of three Transformer encoders: an object-relationship encoder, a language encoder, and a cross-modality encoder. The model is pre-trained with five diverse representative tasks: (1) masked cross-modality language modeling; (2) masked object prediction via RoI-feature regression; (3) masked object prediction via detected-label classification; (4) cross-modality matching; (5) image question answering.
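Task (2) is the least BERT-like of the five: instead of predicting a token id, the model regresses the original continuous RoI feature for each region whose input feature was masked (zeroed out). A minimal sketch of that loss, assuming an L2 form and with the helper name my own:

```python
import numpy as np

def masked_roi_regression_loss(pred, target, mask):
    """Sketch of masked object prediction via RoI-feature regression.

    pred, target: (num_regions, feat_dim) predicted and original RoI features;
    mask: (num_regions,) with 1 where the region's feature was masked at the
    input. Only masked regions contribute to the loss.
    """
    sq_err = ((pred - target) ** 2).sum(axis=-1)   # per-region squared error
    n_masked = max(int(mask.sum()), 1)             # guard against an all-zero mask
    return float((sq_err * mask).sum() / n_masked)

rng = np.random.default_rng(1)
target = rng.standard_normal((4, 6))               # 4 regions, 6-d RoI features
mask = np.array([1, 0, 1, 0])                      # regions 0 and 2 were masked
print(masked_roi_regression_loss(target, target, mask))          # perfect prediction -> 0.0
print(masked_roi_regression_loss(target + 0.5, target, mask))    # error only counted on masked regions
```

Because the target is a feature vector rather than a class label, this objective pushes the cross-modality encoder to reconstruct visual information from language and surrounding-region context, complementing the detected-label classification of task (3).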
over