The main thrust of our approach will be the enhancement of computer vision techniques with additional modalities: range sensor images, haptic information, and command-level speech and gesture recognition. Data-driven multimodal analysis of human behavior will be conducted to extract behavioral patterns. These findings will feed into a multimodal human-robot communication system, covering both verbal and nonverbal communication, and will be synthesized, conceptually and at the system level, into mobility assistance models that take safety-critical requirements into account. All of these modules will be incorporated into a behavior-based, context-aware robot control framework.
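To make the intended structure more concrete, the following is a minimal sketch, in Python, of how fused multimodal percepts and extracted behavioral patterns might plug into a behavior-based, context-aware control loop with priority arbitration. All class names, fields, thresholds, and commands are hypothetical illustrations of the architecture described above, not the project's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Percepts:
    """Fused multimodal observations for one control cycle (fields are illustrative)."""
    vision: Dict = field(default_factory=dict)              # e.g. detected user pose
    range_scan: List[float] = field(default_factory=list)   # range sensor readings (m)
    haptic: Dict = field(default_factory=dict)               # forces on handles/grips
    command: str = ""                                         # recognized speech/gesture command


@dataclass
class Behavior:
    """One behavior: an activation test plus a motion-command generator."""
    name: str
    applies: Callable[[Percepts, str], bool]   # (percepts, context) -> active?
    act: Callable[[Percepts], Dict]            # percepts -> velocity command
    priority: int = 0


class BehaviorBasedController:
    """Context-aware arbitration: the highest-priority applicable behavior wins."""

    def __init__(self, behaviors: List[Behavior]):
        self.behaviors = sorted(behaviors, key=lambda b: -b.priority)

    def step(self, percepts: Percepts, context: str) -> Dict:
        for behavior in self.behaviors:
            if behavior.applies(percepts, context):
                return behavior.act(percepts)
        return {"v": 0.0, "w": 0.0}  # safe default: stop


# Example behaviors with hypothetical thresholds and commands.
stop_on_obstacle = Behavior(
    name="emergency_stop",
    applies=lambda p, ctx: any(d < 0.3 for d in p.range_scan),  # safety-critical rule
    act=lambda p: {"v": 0.0, "w": 0.0},
    priority=100,
)
follow_user_command = Behavior(
    name="follow_command",
    applies=lambda p, ctx: ctx == "assist" and p.command in ("forward", "turn_left"),
    act=lambda p: {"v": 0.3, "w": 0.4} if p.command == "turn_left" else {"v": 0.3, "w": 0.0},
    priority=10,
)

controller = BehaviorBasedController([stop_on_obstacle, follow_user_command])
cmd = controller.step(Percepts(range_scan=[1.2, 0.9], command="forward"), context="assist")
print(cmd)  # -> {'v': 0.3, 'w': 0.0}
```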