Develop a module to detect activity within 5 m in front of the robot, for example the number and locations of people in proximity.
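A minimal sketch of the proximity check this module would perform, assuming person detections are already available as (x, y) positions in the robot frame (x forward, y left, in metres); the 90° field of view is a hypothetical parameter, not specified in the plan:

```python
import math
from typing import List, Tuple

def people_in_front(positions: List[Tuple[float, float]],
                    max_range_m: float = 5.0,
                    fov_deg: float = 90.0) -> int:
    """Count detected people within max_range_m in front of the robot.

    positions: (x, y) in the robot frame, x pointing forward.
    fov_deg: hypothetical horizontal field of view centred on +x.
    """
    half_fov = math.radians(fov_deg) / 2.0
    count = 0
    for x, y in positions:
        dist = math.hypot(x, y)
        # Keep only detections that are ahead of the robot, in range,
        # and inside the angular field of view.
        if x > 0 and dist <= max_range_m and abs(math.atan2(y, x)) <= half_fov:
            count += 1
    return count
```

For example, `people_in_front([(2.0, 0.5), (6.0, 0.0), (1.0, -3.0)])` counts only the first person: the second is beyond 5 m and the third is outside the field of view.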
Develop a robust automatic speech recognition module that works even in noisy environments, assuming a distance of 1m between the robot and its conversation partner.
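One common front end for noise-robust ASR is spectral subtraction, which removes an estimate of stationary background noise before recognition. The sketch below is an illustrative assumption about how such preprocessing could look, not the module's actual design:

```python
import numpy as np

def spectral_subtract(frames: np.ndarray, noise_frames: np.ndarray) -> np.ndarray:
    """Suppress stationary noise in framed audio by magnitude
    spectral subtraction.

    frames: (n_frames, frame_len) windowed audio frames.
    noise_frames: frames known to contain only background noise.
    Returns denoised frames of the same shape.
    """
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Average magnitude spectrum of the noise-only segment.
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)
    # Subtract the noise estimate; floor at 10% of the noise level to
    # avoid the "musical noise" caused by negative magnitudes.
    clean_mag = np.maximum(mag - noise_mag, 0.1 * noise_mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase),
                        n=frames.shape[1], axis=1)
```

The denoised frames would then be passed to the recognizer in place of the raw audio.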
Develop a speech synthesis module that produces high-quality speech which is easy to listen to, even for children, elderly people, and bystanders in the environment.
Develop a module to understand a person's status and intentions from facial and eye information, for example by measuring the opening and closing state of the mouth.
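As one illustration of measuring the mouth's opening and closing state, the sketch below computes a simple openness ratio from lip landmarks. The landmark layout and the 0.4 threshold are assumptions for illustration; in practice the landmarks would come from a face tracker and the threshold would be calibrated per camera setup:

```python
from typing import Sequence, Tuple

def mouth_open_ratio(landmarks: Sequence[Tuple[float, float]]) -> float:
    """Estimate mouth openness from four lip landmarks, given in order
    (left corner, right corner, inner top lip, inner bottom lip) in
    image coordinates. Returns vertical gap divided by mouth width.
    """
    (lx, ly), (rx, ry), (tx, ty), (bx, by) = landmarks
    width = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    height = ((bx - tx) ** 2 + (by - ty) ** 2) ** 0.5
    return height / width if width > 0 else 0.0

def is_mouth_open(landmarks, threshold: float = 0.4) -> bool:
    # threshold is a hypothetical tuning value, calibrated per setup.
    return mouth_open_ratio(landmarks) > threshold
```

Tracking this boolean over time gives the opening/closing state, which can in turn signal whether the person is speaking.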
Develop a speech recognition module to recognize the intended emotion category conveyed by the attitude and speaking style of dialogue partners (focusing on non-lexical speech).
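Since the emotion category here is carried by speaking style rather than words, such a module would typically start from prosodic features. The sketch below extracts a few standard cues (energy, zero-crossing rate, a coarse pitch estimate) from one audio frame; the downstream emotion classifier is not shown, and all names are illustrative:

```python
import numpy as np

def prosodic_features(frame: np.ndarray, sr: int = 16000) -> dict:
    """Extract simple prosodic cues from one mono audio frame.

    frame: 1-D array of audio samples.
    Returns frame energy, zero-crossing rate, and a coarse F0 estimate
    taken from the autocorrelation peak in the 80-400 Hz range.
    """
    energy = float(np.mean(frame ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
    # Coarse pitch: the strongest autocorrelation lag in 80-400 Hz.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sr // 400, sr // 80
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = sr / lag
    return {"energy": energy, "zcr": zcr, "f0_hz": float(f0)}
```

Feature vectors like this, computed over a sliding window, would be the input to a classifier over emotion categories.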
Develop three functional modules for generating robot gestures according to the situation: "Dynamically Generated", "Automatic Generation", and "Behavioral Synthesis".
Develop a module for interactive content management that switches the flow of conversation according to the partner's responses and the situation, and selects the appropriate attributes and conditions of dialogue and interaction for control.
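Switching the conversation flow on responses and situation can be sketched as a small rule-driven state machine; the states, rules, and context keys below are hypothetical, not part of the plan:

```python
from typing import Callable, List, Tuple

class DialogueFlow:
    """Minimal state machine for switching conversation flow.

    Each rule maps (current state, predicate over the context) to a
    next state; the context holds the partner's response and the
    observed situation.
    """
    def __init__(self, start: str):
        self.state = start
        self.rules: List[Tuple[str, Callable[[dict], bool], str]] = []

    def add_rule(self, state: str, predicate: Callable[[dict], bool],
                 nxt: str) -> None:
        self.rules.append((state, predicate, nxt))

    def step(self, context: dict) -> str:
        # Apply the first matching rule for the current state.
        for state, predicate, nxt in self.rules:
            if state == self.state and predicate(context):
                self.state = nxt
                break
        return self.state
```

For example, a flow could move from a "greeting" state to "small_talk" when the partner responds, and to "attract_attention" when they do not.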
Develop modules that use facial recognition technology to identify individuals and tailor dialogue to their interests and needs.
Develop a dialogue history management module.
Develop a new interactive module that allows information accumulated in the dialogue history to be reflected in new dialogues.
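One simple way history can be reflected in a new dialogue is a per-person topic store that shapes the opening utterance. This is a minimal sketch with hypothetical names and canned phrasings, not the module's actual interface:

```python
from collections import defaultdict
from typing import Dict, List

class DialogueHistory:
    """Per-person store of past conversation topics, used to tailor
    the opening of a new dialogue."""

    def __init__(self) -> None:
        self._topics: Dict[str, List[str]] = defaultdict(list)

    def record(self, person_id: str, topic: str) -> None:
        """Accumulate a topic discussed with this person."""
        self._topics[person_id].append(topic)

    def opening_line(self, person_id: str) -> str:
        """Reflect the most recent shared topic, if any, in the greeting."""
        topics = self._topics.get(person_id)
        if not topics:
            return "Nice to meet you. What would you like to talk about?"
        return f"Last time we talked about {topics[-1]}. Shall we continue?"
```

Combined with the facial-recognition identification above, the same `person_id` would link a recognized face to its accumulated history.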