Robotic assistive feeding holds significant promise for improving the quality of life for individuals with eating disabilities.
However, acquiring diverse food items under varying conditions and generalizing to unseen foods present unique challenges.
Existing methods that rely on surface-level geometric information (e.g., bounding box and pose) derived from visual cues (e.g., color, shape, and texture) often lack adaptability and robustness,
especially when foods share similar physical properties but differ in visual appearance.
We employ imitation learning (IL) to learn a policy for food acquisition.
Existing methods typically learn such policies with IL or reinforcement learning (RL) on top of off-the-shelf image encoders such as ResNet-50;
however, these representations are not robust and struggle to generalize across diverse acquisition scenarios.
To address these limitations, we propose a novel approach, IMRL (Integrated Multi-Dimensional Representation Learning), which integrates visual, physical, temporal, and geometric representations to enhance the robustness and generalizability of IL for food acquisition.
Our approach captures food types and physical properties (e.g., solid, semi-solid, granular, liquid, and mixture), models temporal dynamics of acquisition actions, and introduces geometric information to determine optimal scooping points and assess bowl fullness.
IMRL enables IL to adaptively adjust scooping strategies based on context, improving the robot's capability to handle diverse food acquisition scenarios.
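To make the idea of integrating multiple representations concrete, the following minimal sketch (not the released implementation) shows one plausible way the visual, physical, temporal, and geometric features could be fused into a single input for an IL policy; all module names, embedding sizes, and the action dimension are illustrative assumptions.

import torch
import torch.nn as nn

class IMRLPolicySketch(nn.Module):
    # Hypothetical fusion head: concatenates the four representation vectors
    # and maps them to a scooping action (dimensions are assumptions).
    def __init__(self, visual_dim=512, physical_dim=32,
                 temporal_dim=64, geometric_dim=16, action_dim=7):
        super().__init__()
        fused_dim = visual_dim + physical_dim + temporal_dim + geometric_dim
        self.head = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, visual, physical, temporal, geometric):
        # Concatenate the multi-dimensional representations, then predict an action.
        z = torch.cat([visual, physical, temporal, geometric], dim=-1)
        return self.head(z)

# Example usage with dummy feature tensors for a batch of 4 observations.
policy = IMRLPolicySketch()
action = policy(torch.randn(4, 512), torch.randn(4, 32),
                torch.randn(4, 64), torch.randn(4, 16))
print(action.shape)  # torch.Size([4, 7])

The key design choice illustrated here is that the policy conditions on context features (food physical properties, acquisition history, and bowl geometry) in addition to raw visual features, which is what allows the scooping strategy to adapt per scenario.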
Experiments on a real robot demonstrate our approach's robustness and adaptability across various foods and bowl configurations, including zero-shot generalization to unseen settings.
Our approach achieves an improvement of up to 35% in success rate over the best-performing baseline.