GraspSAM: When Segment Anything Model Meets Grasp Detection

Author(s)
Noh, Sangjun; Kim, Jongwon; Nam, Dongwoo; Back, Seunghyeok; Kang, Raeyoung; Lee, Kyoobin
Type
Conference Paper
Citation
2025 IEEE International Conference on Robotics and Automation (ICRA 2025), pp. 14023-14029
Issued Date
2025-05-23
Abstract
Grasp detection requires flexibility to handle objects of various shapes without relying on prior object knowledge, while also offering intuitive, user-guided control. In this paper, we introduce GraspSAM, an innovative extension of the Segment Anything Model (SAM) designed for prompt-driven and category-agnostic grasp detection. Unlike previous methods, which are often limited by small-scale training data, GraspSAM leverages SAM's large-scale training and prompt-based segmentation capabilities to efficiently support both target-object and category-agnostic grasping. By utilizing adapters, learnable token embeddings, and a lightweight modified decoder, GraspSAM requires minimal fine-tuning to integrate object segmentation and grasp prediction into a unified framework. Our model achieves state-of-the-art (SOTA) performance across multiple datasets, including Jacquard, Grasp-Anything, and Grasp-Anything++. Extensive experiments demonstrate GraspSAM's flexibility in handling different types of prompts (such as points, boxes, and language), highlighting its robustness and effectiveness in real-world robotic applications. Robot demonstrations, additional results, and code can be found at https://gistailab.github.io/GraspSAM/.
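
As a rough illustration of the architecture the abstract describes (a frozen SAM-style image encoder augmented with lightweight adapters, learnable grasp tokens, and a small decoder head that emits both a segmentation mask and grasp maps), here is a minimal, self-contained PyTorch sketch. It is not the authors' implementation: every name below (GraspSAMSketch, Adapter, num_grasp_tokens, the stand-in convolutional encoder) is a hypothetical placeholder, and the real model would use SAM's pretrained ViT encoder and prompt encoder in place of the stubs.

    # Conceptual sketch only; module names and wiring are illustrative
    # assumptions, not the paper's actual API.
    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Lightweight bottleneck adapter added to a frozen backbone."""
        def __init__(self, dim: int, bottleneck: int = 64):
            super().__init__()
            self.down = nn.Linear(dim, bottleneck)
            self.up = nn.Linear(bottleneck, dim)
            self.act = nn.GELU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Residual update keeps the frozen features intact.
            return x + self.up(self.act(self.down(x)))

    class GraspSAMSketch(nn.Module):
        def __init__(self, dim: int = 256, num_grasp_tokens: int = 4):
            super().__init__()
            # Stand-in for SAM's frozen ViT image encoder.
            self.image_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)
            for p in self.image_encoder.parameters():
                p.requires_grad = False
            self.adapter = Adapter(dim)
            # Learnable tokens that query the decoder for grasp cues.
            self.grasp_tokens = nn.Parameter(torch.randn(num_grasp_tokens, dim))
            self.decoder = nn.TransformerDecoderLayer(
                d_model=dim, nhead=8, batch_first=True)
            # Heads: one mask-logit map plus quality/angle/width grasp maps.
            self.mask_head = nn.Conv2d(dim, 1, kernel_size=1)
            self.grasp_head = nn.Conv2d(dim, 3, kernel_size=1)

        def forward(self, image, prompt_embedding):
            feats = self.image_encoder(image)           # (B, C, H, W)
            b, c, h, w = feats.shape
            seq = feats.flatten(2).transpose(1, 2)      # (B, HW, C)
            seq = self.adapter(seq)
            tokens = torch.cat(
                [self.grasp_tokens.expand(b, -1, -1), prompt_embedding], dim=1)
            attended = self.decoder(tokens, seq)        # tokens attend to image
            # Modulate spatial features with the first grasp token.
            mod = seq * attended[:, :1, :]
            grid = mod.transpose(1, 2).reshape(b, c, h, w)
            return self.mask_head(grid), self.grasp_head(grid)

    model = GraspSAMSketch()
    img = torch.randn(1, 3, 224, 224)
    prompt = torch.randn(1, 1, 256)  # e.g. an embedded point prompt
    mask_logits, grasp_maps = model(img, prompt)
    print(mask_logits.shape, grasp_maps.shape)  # (1,1,14,14) (1,3,14,14)

The sketch only shows where adapters and learnable tokens could slot into a frozen prompt-driven segmenter; the paper's decoder modifications, training losses, and grasp parameterization should be taken from the project page linked above.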
Publisher
Institute of Electrical and Electronics Engineers Inc.
Conference Place
Atlanta, GA, USA (Georgia World Congress Center)
URI
https://scholar.gist.ac.kr/handle/local/32272
Access and License
  • Access type: Open
File List
  • No related files are available.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.