JoTyMo: Joint Type and Movement Detection from RGB Images of Before and After Interacting with Articulated Objects

Ehsan Forootan, Hamed Ghasemi, Mehdi Tale Masouleh, Ahmad Kalhor, Behzad Moshiri

2023 11th RSI International Conference on Robotics and Mechatronics (ICRoM), 2023

Abstract
This paper presents a CNN-based network architecture for classifying and detecting joint types in articulated objects, specifically Push-P joints, P-joints, R-joints, and objects lacking joints. Movement modeling of the detected objects is performed by processing pre- and post-interaction RGB images from the SAPIEN PartNet-Mobility dataset. To this end, the proposed architecture leverages consecutive CNN encoders based on the VGG architecture to classify joints from the pre- and post-interaction images. Additionally, to detect the effect point and movement vector, a separate convolutional encoder is applied for each joint type. Extensive evaluation demonstrates the approach, achieving 96% accuracy in joint classification and 94% accuracy in regression on the considered dataset. Elaborate details regarding network architectures, training procedures, and testing methodologies are available in the source code repository.
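The described pipeline — paired VGG-style encoders over pre- and post-interaction images, a four-way joint-type classifier, and per-joint-type regression heads for the effect point and movement vector — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the shared-weight encoder, the `JoTyMoSketch` class name, and the 2-D effect point plus 3-D movement vector output dimensions are all assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """VGG-style unit: 3x3 conv, batch norm, ReLU, 2x2 max-pool."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.block(x)

class JoTyMoSketch(nn.Module):
    # Push-P joint, P-joint, R-joint, no joint
    NUM_JOINT_TYPES = 4

    def __init__(self):
        super().__init__()
        # One VGG-like encoder, shared between the pre- and
        # post-interaction images (a design assumption).
        self.encoder = nn.Sequential(
            ConvBlock(3, 32), ConvBlock(32, 64), ConvBlock(64, 128),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Joint-type classifier over the concatenated 256-dim features.
        self.classifier = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, self.NUM_JOINT_TYPES),
        )
        # One regression head per joint type, predicting a 2-D effect
        # point and a 3-D movement vector (dimensions assumed).
        self.regressors = nn.ModuleList(
            nn.Linear(256, 5) for _ in range(self.NUM_JOINT_TYPES)
        )

    def forward(self, pre_img, post_img):
        feats = torch.cat(
            [self.encoder(pre_img), self.encoder(post_img)], dim=1
        )
        logits = self.classifier(feats)
        # Route each sample to the regressor of its predicted joint type.
        joint_type = logits.argmax(dim=1)
        motion = torch.stack(
            [self.regressors[int(t)](f) for t, f in zip(joint_type, feats)]
        )
        return logits, motion

model = JoTyMoSketch().eval()
pre = torch.randn(2, 3, 64, 64)
post = torch.randn(2, 3, 64, 64)
with torch.no_grad():
    logits, motion = model(pre, post)
```

Training such a model would pair a categorical cross-entropy loss on `logits` with an MSE or Huber loss on `motion`, as the keyword list suggests.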
Keywords
RGB Images, Joint Movement, Joint Type, Articulated Objects, Movement Vector, Mean Square Error, Deep Learning, Support Vector Machine, Random Forest, Convolutional Layers, Test Phase, Mean Absolute Error, Transfer Learning, Simulation Environment, Learning Classifiers, Batch Normalization Layer, Deep Reinforcement Learning, Variety Of Objects, Huber Loss, Transfer Learning Method, CAD Model, Joint State, Mean Square Error Loss, Categorical Cross-entropy, Batch Normalization, Max-pooling, Loss Function, Dense Layer