3 min read
Video-to-Image Affordance Grounding

A paper review summary written as part of my master’s coursework for CSE597: Vision and Language

Affordance Grounding from Demonstration Video to Target Image

Joya Chen, Difei Gao, Kevin Qinghong Lin, Mike Zheng Shou

PDF | Paper Code

Summary

Affordance grounding is the task of identifying and localizing the regions of an image or video that are associated with specific human actions or interactions, known as affordances. Here it involves mapping affordance regions from a demonstration video onto a target image, enabling AI systems to understand and replicate human interactions.

This paper presents the Affordance Transformer (Afformer), a model for grounding affordances from videos to images. It employs a fine-grained transformer-based decoder to improve the precision of affordance grounding, and it integrates a self-supervised pre-training method, Mask Affordance Hand (MaskAHand), which synthesizes video-image training data and simulates context changes so the model learns to bridge video-image discrepancies. Previous methods struggle with fine-grained affordance grounding, especially when candidate affordance regions lie close together. Afformer’s multiscale transformer decoder addresses this by decoding affordance heatmaps gradually across multiple scales, capturing both global and local context, and by explicitly modeling spatial structure, which makes it better suited to heatmap grounding. Its cross-attention uses the image encodings as the query and the video encodings as the key and value, allowing the model to judge more accurately whether a location in the target image corresponds to an affordance region.

Combining Afformer with MaskAHand pre-training achieves state-of-the-art performance on multiple benchmarks, including a substantial 37% improvement on the OPRA dataset.
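To make the cross-attention step concrete, below is a minimal sketch of a decoder in which image patch encodings act as queries and video encodings act as keys and values, with the attended features projected into an affordance heatmap. It uses standard PyTorch modules; the tensor shapes, embedding dimension, residual fusion, and the single bilinear upsampling step are illustrative assumptions on my part, not the paper’s actual architecture (which refines heatmaps progressively across multiple scales).

```python
# Minimal sketch (not the authors' implementation): image tokens query the
# demonstration-video tokens, and the attended features are scored into an
# affordance heatmap over the target image.
import torch
import torch.nn as nn


class CrossAttentionHeatmapDecoder(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Query = image tokens, Key/Value = video tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.to_heatmap = nn.Linear(dim, 1)  # per-patch affordance score

    def forward(self, image_tokens, video_tokens, grid_hw):
        # image_tokens: (B, H*W, C) patch encodings of the target image
        # video_tokens: (B, T*H'*W', C) encodings of the demonstration video
        attended, _ = self.cross_attn(
            query=image_tokens, key=video_tokens, value=video_tokens
        )
        fused = self.norm(image_tokens + attended)   # residual fusion
        scores = self.to_heatmap(fused).squeeze(-1)  # (B, H*W)
        h, w = grid_hw
        heatmap = scores.view(-1, 1, h, w)           # reshape to spatial grid
        # Single upsampling step for illustration; the paper's decoder instead
        # refines heatmaps gradually across multiple scales.
        return nn.functional.interpolate(
            heatmap, scale_factor=16, mode="bilinear", align_corners=False
        )


# Example: 14x14 image patches attending over 8 frames of 14x14 video patches.
decoder = CrossAttentionHeatmapDecoder()
img = torch.randn(2, 14 * 14, 256)
vid = torch.randn(2, 8 * 14 * 14, 256)
print(decoder(img, vid, (14, 14)).shape)  # torch.Size([2, 1, 224, 224])
```

Attending from each image location over the whole demonstration video is what lets every target-image patch be scored by how strongly it matches the regions being interacted with in the video.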

Strengths

The paper thoroughly evaluates the model on three extensive datasets: OPRA, EPIC-Hotspot, and GTEA. The results are highly encouraging: the authors report that even the most lightweight version of Afformer yields a >30% improvement in fine-grained affordance grounding and a >10% improvement in coarse-grained affordance grounding. Despite the challenge of limited training data, I find the approach of coupling the Afformer model with MaskAHand pre-training innovative. Furthermore, the paper supports its claims through extensive ablation studies, comparing different configurations of Afformer and examining the impact of context masks and perspective transformations in MaskAHand.
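As a companion to the ablations mentioned above, here is a rough sketch of how a MaskAHand-style self-supervised sample might be synthesized from a single video frame: mask out the hand-interaction region (a “context change”) and warp the frame to mimic the video-to-image viewpoint gap. The hand box is assumed to come from an off-the-shelf hand detector, and the Gaussian supervision target and small random affine warp (standing in for a full perspective transformation) are simplifying assumptions on my part rather than the paper’s exact procedure.

```python
# Rough sketch of MaskAHand-style sample synthesis under the assumptions above.
import torch
import torch.nn.functional as F


def synthesize_pretraining_pair(frame, hand_box, blur_sigma=8.0):
    """frame: (3, H, W) float video frame; hand_box: (x1, y1, x2, y2) in pixels."""
    _, H, W = frame.shape
    x1, y1, x2, y2 = hand_box

    # 1) Mask out the hand-interaction region ("context change").
    masked = frame.clone()
    masked[:, y1:y2, x1:x2] = 0.0

    # 2) Build a soft supervision target centred on the masked region.
    ys = torch.arange(H).view(-1, 1).float()
    xs = torch.arange(W).view(1, -1).float()
    cy, cx = (y1 + y2) / 2.0, (x1 + x2) / 2.0
    target = torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * blur_sigma**2))

    # 3) Randomly warp the masked frame to simulate the video-to-image gap
    #    (a small random affine warp as a stand-in for a perspective transform).
    theta = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    theta[:, :2] += 0.05 * torch.randn(2, 2)  # jitter rotation/scale/shear
    theta[:, 2] += 0.05 * torch.randn(2)      # jitter translation
    grid = F.affine_grid(theta.unsqueeze(0), (1, 3, H, W), align_corners=False)
    warped = F.grid_sample(masked.unsqueeze(0), grid, align_corners=False)[0]

    return warped, target  # pseudo target image, pseudo affordance heatmap


# Example: synthesize a pair from a random 224x224 frame with a dummy hand box.
frame = torch.rand(3, 224, 224)
image, heatmap = synthesize_pretraining_pair(frame, (90, 90, 150, 150))
print(image.shape, heatmap.shape)  # torch.Size([3, 224, 224]) torch.Size([224, 224])
```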

Possible directions for future work

While the paper’s claims are well supported, a possible shortcoming is the generalizability of the proposed methods to other domains or datasets. Although the paper demonstrates impressive results on the specific video-to-image affordance benchmarks it uses, it does not thoroughly explore how well Afformer and MaskAHand adapt to different datasets or real-world scenarios. Additionally, it does not report the computational cost of the experiments or discuss the feasibility of deploying the model in practice.

Conclusion

In summary, this paper proposes a novel approach to the challenge of video-to-image affordance grounding. The model achieves state-of-the-art performance on multiple video-to-image affordance grounding benchmarks, demonstrating its effectiveness in understanding human interaction across videos and images. The fine-grained decoder and the MaskAHand pre-training technique both contribute to more accurate and detailed affordance grounding. These advances can benefit applications such as robotics, augmented reality, and human-computer interaction, where understanding human-object interaction is crucial. Further research assessing how well Afformer and MaskAHand generalize to other domains and datasets would help evaluate the robustness and applicability of the proposed methods in real-world scenarios. Lastly, deploying this approach in robotics or AR applications would require optimizing the models’ computational efficiency and memory requirements to ensure practical usability.

References

Conference Presentation @ CVPR 2023