

Upcoming GVLab seminars


2024-4-11 16:00~17:00 Hybrid @UT Eng. Bldg 2 room 31A, join online

Nao in the Wild: Sensitive groups and other challenges

Paulina Zguda, Jagiellonian University, Poland

Abstract: What is the aim of HRI: to create robots that belong exclusively in the laboratory, or robots meant to assist people in the environments they frequent? In the field of social robotics, there is growing interest in the in-the-wild approach, which involves enabling people to interact with robots in familiar spaces and through familiar activities. In this presentation, I will describe several studies conducted with my team from the Jagiellonian University involving two particularly sensitive groups of participants: children and older adults. Our approach aims to identify and address the challenges facing social robotics beyond the boundaries of the laboratory. Additionally, I will describe several socially significant factors that people may pay attention to during interactions with robots.

Biography: Paulina Zguda is a cognitive scientist and currently a PhD candidate in philosophy at the Doctoral School in Humanities at Jagiellonian University in Krakow, Poland. Her main research interest is how people react to social robots, and in particular, how their aesthetics and behaviour determine people’s perceptions of these artificial agents.


2024-4-25 14:00~15:00 Hybrid @UT Eng. Bldg 2 room TBD, join online

Dynamics Gradient Calculation and Fast Dynamical Simulation of Flexible Rods

Taiki Ishigaki, The University of Tokyo, Japan

Abstract: Flexible rods are used to achieve dynamic motion in sports, for example in sports prostheses and golf club shafts. In the field of soft robotics, research is being conducted to realize dynamic motion using the elastic components of flexible structures, and simulation techniques for them are also important. This talk will introduce modeling and dynamics-calculation methods for flexible rods, together with a dynamics-gradient calculation method that extends the presenter's algorithm for rigid-link systems, and will present fast forward-dynamics simulations. Models integrating flexible structures with rigid-link systems, and their applications, will also be presented.

Biography: Dr. Taiki Ishigaki received his Ph.D. from the Department of Mechano-Informatics, The University of Tokyo, in 2024. His research focuses on the dynamics calculation, simulation, and control of robots comprising rigid and soft structures, especially humanoid robots; he is also interested in applying these methods to human motion analysis.

Past seminars


2024-3-7 16:00~17:00 Hybrid @UT Eng. Bldg 2 room 31B, join online

Modifying Social and Spatial Boundaries through Human-Robot Interactions

Dr. Anne-Lise Jouen, University of Burgundy, France

Abstract: Interactions between humans and robots can be utilized across various contexts such as healthcare, education, and rehabilitation. This presentation will explore two distinct perspectives on Human-Robot Interaction (HRI). First, we will introduce our work on social interactions with robots (human-human-robot interactions), derived from a participatory science project conducted in a nursing home (EHPAD), aimed at enhancing interactions among the elderly through the use of a robotic narrative assistant. This preliminary study revealed a gradual increase in interactions among the elderly, along with a significant interest in robotic technologies. The second part of this presentation will focus on a very different facet of HRI with a more neuroscientific perspective: robotic embodiment (i.e., being placed in the position of a robot, seeing through its eyes). I will delve into the impact of robotic embodiment on our spatial representations. Through several studies involving robotic telepresence and VR embodiment, we have demonstrated the substantial impact of these technologies on our perceptions of our bodies and space. These diverse studies will be discussed within the broader context of evaluating the impact of emerging technologies using reliable metrics, including neuroscientific and psychometric measurements.

Biography: Dr. Anne-Lise Jouen obtained a PhD in Neuroscience from the University of Lyon, specializing in neuroimaging. Although initially focused on language and speech, she harbors a deep passion for new technologies, particularly robotics, having completed her doctorate at a robotics laboratory (Dr. Peter Dominey's Robot Cognition Lab). Throughout her various postdoctoral positions (at the Universities of Paris, Tokyo, and Geneva), she has endeavored to develop an approach combining neuroscientific and psychometric measures to objectively assess the impact of new technologies in the fields of education and rehabilitation. Her recent work at the University of Burgundy has led her to explore the complex notion of the Self and the impact of robotic and VR technologies on our spatial and bodily representations.


2024-2-24 14:00~15:00 Hybrid @UT Eng. Bldg 2 room 231, join online

Exploring Child-robot Interaction: Insights, Roles, and Challenges

Prof. Shruti Chandra, Specially Appointed Assistant Professor, Tokyo Institute of Technology, Japan.

Research Fellow, University of Waterloo, Canada.

Incoming Assistant Professor, University of Northern British Columbia, Canada.


Biography: Dr. Shruti Chandra holds a Joint PhD degree in Electrical and Computer Engineering from École Polytechnique Fédérale de Lausanne, Switzerland and Instituto Superior Técnico, Portugal.  Her work focuses on using interactive technologies such as social robots and screen-based interfaces to support people’s well-being, emphasising real-world applications. Her research is deeply rooted in three essential domains: human-centred design, autonomous and interactive systems, and the dynamics of social interactions.


2024-1-24 10:00~11:30 Hybrid @UT Eng. Bldg 2 room 232, join online

Stable adaptation and learning

Jean-Jacques Slotine, Massachusetts Institute of Technology

Abstract: While we may soon have AI-based artists or scientists, we are nowhere near autonomous robot plumbers. The human brain still largely outperforms robotic algorithms in most tasks, using computational elements 7 orders of magnitude slower than their artificial counterparts. Similarly, current large scale machine learning algorithms require millions of examples and close proximity to power plants, compared to the brain's few examples and 20W consumption. We study how modern nonlinear systems tools, such as contraction analysis, virtual dynamical systems, and adaptive nonlinear control can yield quantifiable insights about collective computation and learning in large physical systems and dynamical networks. For instance, we show how stable implicit sparse regularization can be exploited online in adaptive prediction or control to select relevant dynamic models out of plausible physically-based candidates, and how most elementary results on gradient descent and optimization based on convexity can be replaced by much more general results based on Riemannian contraction.
Time permitting, we will discuss a new approach to dense associative memories and transformers directly inspired by astrocyte biology. This may be the first contribution to AI of neuroscience results from the last 50 years.

Biography: Jean-Jacques Slotine is Professor of Mechanical Engineering and Information Sciences, Professor of Brain and Cognitive Sciences, and Director of the Nonlinear Systems Laboratory. He received his Ph.D. from the Massachusetts Institute of Technology in 1983, at age 23. After working at Bell Labs in the computer research department, he joined the faculty at MIT in 1984. Professor Slotine teaches and conducts research in the areas of dynamical systems, robotics, control theory, computational neuroscience, and systems biology. One of the most cited researchers in systems science, he was a member of the French National Science Council from 1997 to 2002, a member of Singapore’s A*STAR SigN Advisory Board from 2007 to 2010, a Distinguished Faculty at Google AI from 2019 to 2023, and has been a member of the Scientific Advisory Board of the Italian Institute of Technology since 2010.


2023-12-21 13:00~14:00 Hybrid @UT Eng. Bldg 2 room 31A, join online

A Study of Health, Medical, and Assistive Devices for the Elderly and the Disabled

Toshiaki Tanaka, Hokkaido University of Science / Research Center for Advanced Science and Technology and Institute of Gerontology, The University of Tokyo

Abstract: For more than 30 years, I have conducted research in rehabilitation science for the elderly, people with disabilities, and patients, in connection with ergonomics and assistive technology for daily-life support. Throughout this period, I have consistently studied, in clinical settings, methods and technologies for supporting age-related motor, sensory, and cognitive impairments from the perspective of assistive engineering. Specific projects include sensory-feedback balance training for the elderly, in which a medical device for standing-balance assessment and training was patented and commercialized through industry-government-academia collaboration and is now in clinical use. Another project developed a daily-life assessment and training system for patients with unilateral spatial neglect caused by higher brain dysfunction: because patients with visuospatial cognitive impairment have difficulty walking and operating wheelchairs, we used 3D visual displays in virtual reality (VR) to analyze the symptoms of spatial neglect and developed rehabilitation methods to remove this barrier. These studies have since developed into a tele-rehabilitation system (supported by the MIC SCOPE program) and a wheelchair-operation alert system for dementia patients. In addition, as outcomes of industry-government-academia collaboration supported by NEDO and others, many assistive devices have been commercialized with regional impact, such as a one-touch cane for icy roads, snow-removal tools designed to prevent back pain, and labor-saving tools for farm work. In this lecture, I will discuss fall-prevention rehabilitation training devices, rehabilitation using virtual reality technology, and other medical and welfare devices being developed toward commercialization through industry-government-academia collaboration.

Biography: Toshiaki Tanaka, Physical Therapist, Ph.D. (Engineering), Certified Professional Ergonomist
1984: Graduated from the Department of Physical Therapy, College of Medical Technology, Hokkaido University;
   worked as a physical therapist at Sapporo Medical University Hospital
1992: Completed the master's program in Physical Therapy (pathokinesiology) at New York University,
   Master of Arts (Physical Therapy)
1999: Associate Professor, Department of Physical Therapy, School of Health Sciences, Sapporo Medical University
2003: Visiting Researcher, Massachusetts Institute of Technology
2006: Professor, School of Design, Sapporo City University
2008: Project Professor, Research Center for Advanced Science and Technology, The University of Tokyo
2015: Professor, Department of Physical Therapy, Faculty of Health Sciences, Hokkaido University of Science; Project Professor, Institute of Gerontology, The University of Tokyo
2017: Senior Fellow, Fondation France-Japon de l'EHESS (École des hautes études en sciences sociales)
2020: Senior Program Advisor, Institute of Gerontology, The University of Tokyo (to present)


2023-10-19 15:00~16:00 Hybrid @UT Eng. Bldg 2, join online

Human augmentation at Kyocera's Future Design Research Laboratory

Dr. Jacqueline Urakami, Kyocera's Future Design Research Laboratory, Japan

Abstract: At Kyocera's Future Design Research Laboratory situated in Yokohama, we are advancing R&D for human augmentation technologies, with the aim of achieving a seamless integration between "humans" and "technology." Human augmentation is an interdisciplinary field that encompasses various methods, technologies, and applications designed to harmoniously merge the distinctive capabilities of both individuals and machines. In this presentation, we will provide an introduction to three innovative systems that have been developed within our Laboratory: 1) A walk sensing and coaching system, designed to enhance proper walking posture and technique. 2) A physical avatar, dedicated to mitigating the challenges of isolation during remote work. This avatar facilitates smooth communication akin to in-office interactions, thereby fostering effective communication among employees and teams dispersed across different locations. 3) Perception and cognition augmentation, culminating in a device capable of capturing and reproducing audio conversations missed during initial encounters. Furthermore, the presentation will delve into the intricate challenges associated with social signal processing for human augmentation, particularly when applied to real-world scenarios.

Biography: In 2022, Jacqueline Urakami became a researcher at Kyocera's Future Design Research Laboratory. Her primary research area is Human Augmentation, with a specific focus on its impact on human performance. Jacqueline Urakami received a master's degree in Psychology from Dresden University (Germany) in 1998 and completed a Ph.D. in Psychology at Chemnitz University of Technology (Germany) in 2002. Following her doctoral studies, she spent two years as a Humboldt Fellow at Keio University's Shonan Fujisawa Campus, where she conducted research in interface design using eye-tracking technology. Throughout her career, Dr. Urakami has amassed valuable experience in Japan, including teaching roles at Keio University and a position as a Professor at the Tokyo Institute of Technology. Her research has been predominantly concentrated in the domains of Human-Computer Interaction and Human-Robot Interaction.



2023-8-22 11:00~12:00 Hybrid @UT Eng. Bldg 2 room 233, join online

Challenges in Social Robotics: Long-term Interaction, Personalization and Trust

Prof. Adriana Tapus, ENSTA Paris, Institut Polytechnique de Paris, France

Abstract: Social robots are more and more part of our daily lives, and the design of their behaviors greatly affects the way people interact with them. To ensure optimal engagement, long-term adaptation and personalized robot's behavior to the user's specific needs and profile should be envisaged. One important social construct to consider is humor. Humor has been shown to reduce communication barriers and enhance interpersonal relationships. Developing a robot with the ability to express various forms of humor can enhance its naturalness and effectiveness in human-robot interactions. This presentation will explore innovative perception and interaction capabilities and address the challenges raised by inter-individual differences and intra-individual variability over time.

Biography: Adriana Tapus is a Full Professor in the Autonomous Systems and Robotics Lab of the Computer Science and System Engineering Department (U2IS) at ENSTA Paris, Institut Polytechnique de Paris, France. Since 2019, she has been the Director of the Doctoral School of the Institut Polytechnique de Paris (IP Paris). Prof. Tapus serves as a member of the Women in Science and Engineering Committee at IP Paris. In 2011, she obtained the French Habilitation (HDR) for her thesis entitled "Towards Personalized Human-Robot Interaction". She received her PhD in Computer Science from the Swiss Federal Institute of Technology Lausanne (EPFL), Switzerland, in 2005. She worked as an Associate Researcher at the University of Southern California (USC), where she was among the pioneers of socially assistive robotics, also participating in work on machine learning, human sensing, and human-robot interaction. Her main interests are long-term learning (in particular, in interaction with humans), human modeling, and online robot behavior adaptation to external environmental factors. She has worked on applications ranging from socially assistive robotics for helping people with physical and cognitive impairments (e.g., children with autism, the elderly, people suffering from sleep disorders, people in rehabilitation after a stroke) to autonomous vehicles. Prof. Tapus is a Senior Editor of the International Journal of Robotics Research (IJRR) and an Associate Editor for the International Journal of Social Robotics (IJSR), ACM Transactions on Human-Robot Interaction (THRI), and Frontiers in Robotics and AI. She is a member of the program and steering committees of several major robotics conferences (e.g., General Chair of HRI 2019, Program Chair of HRI 2018, General Chair of ECMR 2017). Prof. Tapus has been a keynote speaker at several workshops and conferences and has more than 200 research publications.
In 2016, she was named one of the "25 women in robotics you need to know about". Prof. Tapus received the Romanian Academy Award for her contributions to assistive robotics in 2010, and in 2022 she was awarded the Knight of the Academic Palms by the French Prime Minister. She is a member of IEEE, AAAI, and ACM. She is also the coordinator of and a participant in multiple EU and French national research grants (e.g., EU ENRICHME, EU RAICAM, Bots4Education). Further details about her research and activities can be found at https://perso.ensta-paris.fr/~tapus/eng/index.html


2023-7-25 11:00~12:00 Hybrid @UT or join online

Intersectionality and AI: Gender, Age, and Everything in Between

Prof. Katie Seaborn, Tokyo Institute of Technology, Japan

Abstract: People are diverse. People create machines. People may or may not embed human diversity within those machines. Even when they do, they may not realize it or foretell the implications. In this talk, I will introduce the notion of intersectionality and fundamental "sections" and "intersections," especially gender and age, and how these relate to research and practice in robotics and AI. I will cover case studies from my own work on humanoid robots, translation in natural language processing, and computer voice.

Biography: Prof. Katie Seaborn is an Associate Professor in the Department of Industrial Engineering and Economics at Tokyo Institute of Technology. Since 2020, she has led the Aspirational Computing Lab at Tokyo Tech. She currently holds the positions of Visiting Researcher at the RIKEN Center for Advanced Intelligence Project (AIP) and Honorary Researcher at the UCL Interaction Centre (UCLIC). She previously worked as a Postdoctoral Researcher at RIKEN AIP, Co-operative Research Fellow at the University of Tokyo (2019-20), JSPS Postdoctoral Fellow at the University of Tokyo (2018-19), and Research Fellow at UCLIC (2017-18). She received her Ph.D. in Human Factors from the Department of Mechanical & Industrial Engineering (MIE) and graduated from the Collaborative Program in Knowledge Media Design (KMDI) at the University of Toronto (2016). Her research interests include interaction design with voice-based agents, critical computing perspectives on robots and gender, inclusive design with older adults, and technologies for psychological well-being. Her seminal work on gamification has been cited over 2700 times.


2023-6-27 14:00~15:00 Hybrid @UT Eng. Bldg 2 room 31A or join online

Modularity in Aerial Robotics and its Applications

Prof. Moju Zhao, The University of Tokyo, Japan

Abstract: During the last decade, research on aerial robots has become significantly more active. Among platform developments, several modular designs have been proposed to offer reconfigurable capabilities for advanced maneuvering in midair. In this talk, we will present the development of our original modular aerial robots, covering the methodology of modular design, modelling and control, and motion planning. Furthermore, unique aerial applications, such as snake-like maneuvering and manipulation in midair, will also be introduced.

Biography: Prof. Moju Zhao is currently an Assistant Professor at The University of Tokyo. He received his doctoral degree from the Department of Mechano-Informatics, The University of Tokyo, in 2018. His research interests are mechanical design, modelling and control, motion planning, and vision-based recognition in aerial robotics. His main achievement is a series of articulated aerial robots, which have received several conference and journal awards, including the Best Paper Award at IEEE ICRA 2018.


2023-5-16 11:00~12:00 Hybrid @UT or online

At the frontiers of biomechanics and robotics

Bruno Watier, LAAS, France

Abstract: This presentation will give an overview of current projects at LAAS-CNRS at the interface of robotics and biomechanics. We will see achievements in the fields of gait simulation, exoskeleton design, and human-robot interaction.

Biography: Bruno Watier is a full professor. He joined JRL as a CNRS delegate in 2023. Since 1999, he has been an Associate Professor at Université de Toulouse 3 (UT3), working in the Gepetto team of the LAAS-CNRS laboratory, France. He received his Ph.D. in mechanics from ENSAM in 1997 and his Habilitation from UT3 in 2015. His research focuses on human movement analysis, motor control, and human-robot interaction. He has organized several conferences and workshops related to biomechanics. Bruno Watier currently leads four French national ANR projects. He is president of the Société de Biomécanique and leads a major degree program in sport performance.


2023-4-18 11:00~12:00 Hybrid @UT Eng. Bldg 2 room 223 or online

A Medical Robot for Physical-HRI based Nasopharyngeal Swab Sampling

Prof. Tianwei Zhang, The Chinese University of Hong Kong, Shenzhen, China

Abstract: Over the past three years, nasopharyngeal (NP) swab sampling and reverse-transcription polymerase chain reaction (RT-PCR) testing have proven to be very effective and reliable for the early detection of COVID-19 infected individuals. However, this highly reliable RT-PCR testing is difficult to perform, requiring dedicated swabs and trained medical personnel. In this work, we present a robot that performs nasopharyngeal swab sampling for RT-PCR testing. We designed a new hardware system and developed hybrid 3D-vision and force-sensing perception algorithms that allow the robot to accurately locate the nostril, insert the swab, and carefully advance it along the inferior nasal tract to the designated sampling position in the posterior nasal cavity. Experimental results from more than 8,000 volunteer trials indicate that the robot's PCR sampling is as accurate as that of human operators. Questionnaires show that our robotic NP swab sampling is also more comfortable than manual sampling.

Biography: Prof. Tianwei Zhang is currently an Associate Research Scientist at the Shenzhen Institute of Artificial Intelligence and Robotics for Society and a Research Assistant Professor at The Chinese University of Hong Kong, Shenzhen. He received his doctoral degree from the Department of Mechano-Informatics, The University of Tokyo, in 2019. His research interests are visual manipulation and dynamic SLAM. He has published several articles in ICRA, IROS, and RA-L.


2023-2-16 16:00~17:00 Hybrid @UT or online

Adaptive Dialogue Strategy for Teachable Social Robots

Rachel Love, Monash University, Australia

Abstract: Social robots have great potential when used in an educational setting, with the ability to deliver personalised, one-on-one interactions for students. Robots that take on the role of the student, while the student takes on the role of teacher, have the ability to improve learning outcomes through increased engagement in, and responsibility for, the teaching task. Social robots may also benefit from adapting their behaviours to meet the needs and preferences of individual users. The research presented in this talk uses a reinforcement learning approach to adapt the dialogue choices that the teachable robot makes at different points in the teaching conversation. This teaching interaction uses the Curiosity Notebook, a flexible online teaching platform that helps students learn about a simple classification task. The aim of this research is to improve the learning outcomes and engagement of the student through an adaptive, personalised approach. This talk will discuss the motivations and findings of this research.

Biography: Rachel is a third-year PhD candidate in the Robotics lab within the Department of Electrical and Computer Systems Engineering at Monash University. She obtained her Bachelor's in Biomedical Engineering with Honours from the University of Auckland. This was followed by several years of industry experience at the Auckland-based start-up Soul Machines as a Conversation Engineer, developing dialogue and conversational, emotional, and behavioural content for their artificially and emotionally intelligent Digital Humans. Her particular research interests lie in conversational AI, dialogue modelling, and human-machine interaction.


2023-1-31 11:00~12:00 Hybrid @UT or online

Interacting with Socially Interactive Agents

Prof. Catherine Pelachaud, Sorbonne University, France

Abstract: Our research focuses on modeling Socially Interactive Agents (SIAs), i.e., agents capable of interacting socially with human partners, communicating verbally and non-verbally, showing emotions, and also adapting their behaviors to favor the engagement of their partners during the interaction. As a partner in an interaction, a SIA should be able to adapt its multimodal behaviors and conversational strategies to optimize the engagement of its human interlocutors. We have also been working on enabling an agent to respond to social touch from a human and to touch the human to convey different intentions and emotions. We have developed models to equip these agents with these communicative and social abilities. In this talk, I will present the work we have been conducting.

Biography: Catherine Pelachaud (CNRS-ISIR) is Director of Research in the ISIR laboratory, Sorbonne University. Her research interests include socially interactive agents, nonverbal communication (face, gaze, gesture, and touch), and adaptive mechanisms in interaction. With her research team, she has been developing an interactive virtual agent platform, Greta, that can display emotional and communicative behaviors. She has participated in the organization of international conferences such as IVA, ACII, and AAMAS. She is or has been an associate editor of several journals, including IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems, and the International Journal of Human-Computer Studies. She is co-editor of the ACM handbook on socially interactive agents (2021-22).


2022-12-21 17:00~18:00 Hybrid @UT or online

Sharing scientific sports training expertise by video motion capture, a time-series database and open source visualization tools

Dr. Cesar Hernandez Reyes, the University of Tokyo, Japan

Abstract: Sports and exercise are valuable habits. Having a training goal can improve a person's mental and physical health. Such training goals can be achieved more effectively with information technologies. For example, a person can set quantitative goals by visualizing their muscle forces and joint motions. However, obtaining this data is complex because motion capture (Mocap), force plates, EMG sensors, etc., are costly and require technical expertise to use. Furthermore, analyzing biomechanical data requires the knowledge of sports coaches, clinicians, and other specialists. This talk will introduce ongoing research that aims to build a system by which scientific training can be made accessible to the general public. This system combines markerless video Mocap with a time-series database platform optimized for managing and visualizing human musculoskeletal data. Finally, this talk will discuss ideas for identifying how an expert athlete chooses their motions, compared with a novice, by using Inverse Reinforcement Learning.

Biography: Cesar Hernandez Reyes has been a Postdoctoral Researcher at the Human Motion Data Science lab, Graduate School of Engineering, the University of Tokyo, since October 2021. He is currently working on the democratization of scientific sports training by developing technologies to bridge the expertise gap between engineers, sports scientists, and general users. He obtained his doctoral and master's degrees in Systems and Control Engineering from the Tokyo Institute of Technology in 2021 and 2018, respectively, and his bachelor's degree in Mechatronics Engineering from the Universidad de Monterrey in Mexico. His previous research experience includes the computational modeling of the olfactory search behavior of the silk moth and the fusion of probabilistic and bio-inspired olfactory search algorithms for application in mobile robots.


2022-11-2 11:00~12:00 Hybrid @TUAT GVLab or online

Modeling, design and control of soft robotics systems: a tour of Defrost team's research

Dr. Quentin Peyron, INRIA, France

Abstract: Soft robotic systems consist of flexible structures that are deformed with large displacements to produce motion and perform various tasks. Due to their intrinsic compliance, they are particularly suited for applications involving narrow and cluttered environments and physical interactions with humans. However, this compliance also induces unique challenges and open research questions: their kinemato-static and dynamic modeling in real time, the exploration of their large design space, and their control. The Defrost team of INRIA and CRIStAL in Lille has developed expertise in these three aspects. I will present our recent work on efficient finite element and Cosserat beam models for soft robots interacting with their surroundings, and the associated software developments through the simulation platform and consortium SOFA. I will then show some results on soft robot design, using evolutionary optimization algorithms and anisotropic meta-materials, as well as some of the prototypes developed by the team. Finally, I will present our progress on soft robot control using non-linear control theories and AI, and will finish with perspectives for the years to come.

Biography: Dr. Quentin Peyron is a research scientist at the INRIA institute, France, working with the Defrost team. After obtaining an engineering degree in mechatronics from INSA Strasbourg, and a master in robotics from the University of Strasbourg, he obtained a PhD in robotics from the University of Bourgogne Franche Comté. During his PhD, he worked in co-supervision with the ICube (Strasbourg) and FEMTO-ST (Besançon) laboratories on the modeling, analysis, and design of slender and thin continuum robots for minimally invasive surgery. He also collaborated with the MSRL team of ETH Zürich on magnetic continuum robots. He was a Postdoctoral Fellow at the Continuum Robotics Lab of the University of Toronto from 2020 to 2021, where he worked on tendon-driven continuum robots. During his Post-Doc, he received the Post-Doctoral Fellowship Award from the University of Toronto. He joined INRIA in January 2022. Dr. Peyron is a reviewer for IEEE RA-L and T-RO journals, as well as for Mechanisms and Machine Theory and Autonomous Robots. His research interests are the modeling, design, and control of continuum and soft robots, and the development of eco-designed soft robotics for industrial applications.


2022-10-20 15:00~16:00 Hybrid @UT Eng. 2 room 232 or online

A Career in Corporate Robotics Research

Dr. Katsu Yamane, Path Robotics, USA

Abstract: This talk is a personal reflection upon my career of the past 14 years working in robotics at the corporate research labs of Disney, Honda, Bosch, and now Path Robotics. I will first present my thoughts on how the size, industry, and culture of a company determine how its R&D department operates and how, in turn, your experience as a researcher is affected. I will then illustrate different roles of corporate research by showing examples of how projects are initiated, managed, and transferred (or shelved). Finally, I will introduce Path Robotics' technology and why I believe we can become a (rare) successful robotics company. Hopefully this talk will help students and young researchers choose their future careers in robotics, whether in academia, industry, or entrepreneurship.

Biography: Dr. Katsu Yamane is currently a Principal Research Scientist at Path Robotics Inc., a late-stage startup specializing in autonomous welding technology. He has held research scientist positions at Disney Research, Pittsburgh, Honda Research Institute USA, and Bosch Research North America. Prior to moving to industry, he was a postdoctoral researcher at Carnegie Mellon University and a faculty member at the University of Tokyo where he also received his PhD in mechanical engineering in 2002. Dr. Yamane's research experience spans from manipulation planning and control to human motion analysis and biomechanics. He has also been active in the academic community as an editor and organizer of various journals and conferences as well as an author of over 100 peer-reviewed technical publications.

Luis Sentis

2022-9-27 11:00~12:00 Hybrid @UT Eng. 2 room 233 or online

Embodiment and Explainable Behaviors for Teaming Up with Human-Centered Robots

Prof. Luis Sentis, U. Texas at Austin, USA

Abstract: In this talk I will first delve into control architectures and embodiment for legged manipulators such as NASA's Valkyrie humanoid robot and other custom-built bipeds and humanoid robots. Emphasis will be placed on trajectory generation, dynamic walking and trajectory tracking using whole-body prioritized multi-level task control and model predictive control. I will then explain key ideas on training neural networks to predict whether a commanded goal will physically succeed given the interaction environment. Predicting whether a robot will encounter collisions, singularities, or balance problems at runtime is key to providing feedback to human operators as a means of explaining the robot's physical behavior before task execution. We will show how this property can be used in combination with a cognitive architecture to enable natural spoken interactions between human users and robots. In the next part of the talk, I will discuss teaming up with robots by employing imitation learning techniques and the use of thin-film epidermal electrodes for brain activity sensing. Finally, I will discuss our current efforts in human-autonomy teaming in tasks such as indoor and outdoor object search with mixed teams of robots and humans.

Biography: Luis Sentis is a professor at the University of Texas at Austin and an executive member of Good Systems, where he heads the Human-Centered Robotics Laboratory, focusing on embodiment, motion planning, and control of human-centered robots such as legged robots, humanoids and exoskeleton systems. More recently, he leads and collaborates on new projects on multi-robot search in indoor and outdoor environments, perceptual legged navigation in dynamic and crowded environments, and the use of wearable brain sensors for human-factors studies. In 2015 he co-founded Apptronik, a company building next-generation humanoid avatars.

Nawelle Zaidi

2022-7-27 11:00~12:30JST Hybrid (contact us to join in person) Join online

N. Zaidi (Strate, France)

Impacts of SF imaginaries on robots design: low diversity of productions and how to better design through imaginaries?

Abstract: The increasing development of digital technologies such as social robotics in recent decades has already shown some of its social limits. Many studies have focused on the negative impacts of the technocentric nature of technological artefact design, whose meaning and usefulness are not obvious to end users, and whose introduction into real-life ecosystems can disrupt pre-existing organizations. Participatory approaches have been developed to take greater account of the complexity of the real world and the diversity of stakeholders by involving them more deeply in the design process. Although these approaches make it possible to create products better adapted to environmental and human constraints, they nonetheless seem to carry a bias specific to new technologies: all stakeholders (roboticists, designers, or end users) hold strong imaginaries related to technological objects that mainly come from science fiction. In a context of social, environmental and health crisis where dystopian fictions about technology have become the norm, the question of the influence of imaginaries on the design of technologies arises. How do these imaginaries impact the design process? Why do they seem particularly important in HRI and social robotics? Which design methods can be developed to take this bias into account in the design process, and even to design better through those imaginaries? This talk aims to share some reflections on this topic and to open a conversation on how to raise awareness of unconscious imaginaries in our research projects.

Biography: Nawelle Zaidi is a PhD student in Design at the Projekt Lab of the University of Nîmes, France, and at the Robotics By Design Lab of Strate School of Design, France. Her research focuses on the design of social robots for older people and caregivers in medical nursing facilities. She is conducting practice-based research within a French group of geriatric care institutions. She previously received a Master's degree in generalist engineering (specializing in image and signal processing) from Ecole Centrale Marseille in 2016 and a Master's degree in Interaction Design from Strate School of Design in 2018, and worked for a couple of years as a UX designer in industry.

Dominique Lestel

2022-6-23 15:00~16:30JST Hybrid (contact us to join in person)

Prof. D. Lestel (Ecole Normale Superieure, France)

Evolutionary Challenge of Animal Robots

Abstract: Robots and AI are not just machines that make it possible to do new things but ontological artifacts that profoundly change what it means to be alive. Robots and AI are part of a special category of non-biological living agents to which we can also attach other artifacts such as dolls, puppets or fetishes. Many robots look like animals but they are non-biological animal “transpecies” that do not belong to any species and that profoundly transform the ecology of life on Earth. This talk, accessible to non-philosophers, will discuss some of the problems posed by these disruptive machines that engage us in a major ecological revolution that has nothing to do with global warming.

Biography: Dominique Lestel teaches contemporary philosophy at the Ecole Normale Supérieure in Paris and is a tenured researcher of the Husserl Archives. He was a member of the French-Japanese Laboratory of Informatics at the University of Tokyo in 2013-2014, a Visiting Professor in the Department of Mechanical Systems Engineering (in the GVLab) of Tokyo University of Agriculture and Technology with a JSPS Fellowship (2017-2018), and a Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University (2018-2019). His latest book, Machines Insurrectionnelles (Fayard, 2021), develops a post-biological theory of living agents.

2022-4-26 16:00~17:30JST Online

Prof. B. Indurkhya (Jagiellonian University, Poland)

Faking emotions and a therapeutic role for robots and chatbots: Ethics of using AI in psychotherapy

In recent years, there has been a proliferation of social robots and chatbots designed so that users form an emotional attachment to them. Such robots and chatbots can also be used to provide psychotherapy. In this talk, we will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid-1960s. This program did not understand anything but relied on keyword matches and a few simple heuristics to keep the conversation flowing and give the user an illusion of understanding. At the time, Weizenbaum was taken aback by the intensity of the emotional attachment users felt towards this program, prompting him to highlight this negative aspect of technology in his thought-provoking book "Computer Power and Human Reason".
In recent years, there has been a revival of Eliza-like systems and interfaces in social robots and chatbots. We will look at some such systems and argue that they can have a positive and therapeutic effect on the user, and that in some situations at least this kind of robot-human interaction transcends human-human interaction. However, developing and deploying such systems raise a number of ethical issues, some of which we will discuss in this talk.

Biography: Bipin Indurkhya is a professor of Cognitive Science at the Jagiellonian University, Krakow, Poland. His main research interests are social robotics, usability engineering, affective computing and creativity. He received his Master's degree in Electronics Engineering from the Philips International Institute, Eindhoven (The Netherlands) in 1981, and his PhD in Computer Science from the University of Massachusetts Amherst in 1985. He has taught at various universities in the US, Japan, India, Germany and Poland, and has led national and international research projects in collaboration with companies such as Xerox and Samsung.

2022-3-14 11:00~12:30JST Online

Prof. V. Hernandez (Surfclean, Japan)

Adaptive virtual reality video game and machine learning

Abstract: In this talk, I will present an overview of an industrial project I am leading as well as my ongoing research on human activity recognition at GVLAB. This project aims at developing adaptive video games in virtual reality. Virtual reality technology is an attractive complementary tool for rehabilitation and can make an important contribution to home rehabilitation. To maximize its effectiveness, the system is designed to provide an adaptive virtual environment based on the success or failure at achieving a specific movement goal. To this end, various machine learning algorithms are being investigated.

Biography: Vincent Hernandez obtained his PhD in Human Movement Sciences in 2016 from the University of Toulon, France. In 2017, he was a postdoctoral researcher at Tokyo University of Agriculture and Technology (TUAT), Tokyo, Japan. Between 2018 and 2019, he was a postdoctoral researcher at the University of Waterloo, Ontario, Canada. He is currently an adjunct associate professor at TUAT as well as a project manager at SurfClean Inc, Sagamihara, Japan. His research interests are mostly focused on human activity recognition and adaptive virtual reality video game development.

2022-1-27 10:30~12:00JST Online

Dr. M. JANG (ETRI, Korea)

Introduction to Research Efforts on Robot AI for Elderly-Care

Abstract: In this talk, I introduce research efforts and results from our project at ETRI on developing robot AI technologies for elderly care, especially focusing on: 1) domain-specific AI targeting the domains of home environments and elderly people, and 2) robot intelligence for automated communicative gesture generation. I hope to share our vision for developing robots that really help people in the real world.

Biography: Dr. Minsu JANG has been working at ETRI (Electronics and Telecommunications Research Institute) since 1999, and he received his PhD from KAIST in 2015. His research interests include social robots, human-robot interaction, robot SW integration, and artificial intelligence in general.

2021-12-23 14:30~16:00JST Online

Dr. D. Vincze (Chuo University, Japan)

Etho-Robotics, plus Reinforcement Learning for behaviour models

Abstract: One of the challenges in social robotics is creating a robot that humans will accept as a long-term companion. To sustain interest over the long term, a possible solution is to construct behaviour models for social robots based on animal behaviour. Studying animal behaviour is the main goal of ethology; by using its results, behaviour models for robotics can be constructed. The novel field of Etho-robotics aims to create behaviour models based on ethological studies. The dog-human attachment behaviour has been transformed into a computational model using a fuzzy automaton control system incorporating sparse fuzzy rule-bases. Work is underway to connect this behaviour model to a real environment where humans can interact with a physical robot.
Various significant advances have been presented in the field of "AI" over the past decades. Nowadays, however, more and more researchers and users have concerns about the transparency of these AI solutions: the knowledge representations used by mainstream methods are hardly explainable or interpretable. Fuzzy Rule Interpolation-based (FRI) Reinforcement Learning (RL) is a machine learning method that uses sparse fuzzy rule-bases for knowledge representation. In contrast to classical knowledge representations in RL, sparse fuzzy rule-bases allow human experts to easily "read out" the knowledge that operates a system, as fuzzy rules are inherently self-describing. Still, constructing rule-bases small enough to be understood remains challenging.
Combining these two fields, Etho-robotics and FRI-based RL, can provide us with better behaviour models for social robots and a better understanding of human-animal interactions.
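
The self-describing nature of sparse fuzzy rule-bases can be illustrated with a toy sketch (an invented example, not taken from the talk; all rule names and numbers are made up). Real FRI methods such as KH interpolation operate on fuzzy sets, but the core idea is the same: even when an observation matches no rule, a conclusion is interpolated from the two neighbouring rules, and the rule-base itself stays human-readable:

```python
# Toy sketch of fuzzy rule interpolation (FRI) over a sparse rule-base.
# Each rule maps a crisp antecedent point to a crisp consequent; real FRI
# interpolates between fuzzy sets, but the principle is identical.

# Sparse, human-readable rule-base: "if distance to owner is X, approach speed is Y"
rules = [
    (0.0, 0.0),   # if owner is very close -> stop
    (5.0, 1.0),   # if owner is far        -> full approach speed
]

def infer(observation):
    """Interpolate a conclusion between the two neighbouring rules."""
    rules_sorted = sorted(rules)
    # clamp observations outside the covered antecedent range
    if observation <= rules_sorted[0][0]:
        return rules_sorted[0][1]
    if observation >= rules_sorted[-1][0]:
        return rules_sorted[-1][1]
    for (a_lo, c_lo), (a_hi, c_hi) in zip(rules_sorted, rules_sorted[1:]):
        if a_lo <= observation <= a_hi:
            t = (observation - a_lo) / (a_hi - a_lo)
            return c_lo + t * (c_hi - c_lo)

print(infer(2.5))  # observation between the two rules -> 0.5
```

Because the rule-base has only two entries, an expert can read off the robot's whole policy at a glance, which is exactly the transparency argument made above.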

Biography: David Vincze is currently a JSPS postdoctoral research fellow in the Human-System Laboratory (Niitsuma Lab.) at the Department of Precision Mechanics at Chuo University, Tokyo, Japan, on leave from the Department of Information Sciences at the University of Miskolc, Hungary, where he is an associate professor. He graduated in information engineering from the University of Miskolc and earned his PhD in Computer Science in 2014, focusing on machine learning and human-robot interaction. His research in machine learning examines fuzzy rule-based learning systems and algorithms for extracting knowledge in a form that can be directly interpreted by humans. His research in HRI includes designing ethologically inspired behaviour models implemented as fuzzy control systems. He also contributes to the open source community, implementing new ideas for Linux- and UNIX-based systems in data centers.

2021-12-9 17:00~17:30JST Online

Prof. L. Damiano (IULM University (Milan, Italy))

'Understanding by building'. Genealogy, epistemology and relevance of the synthetic approach to the modeling of life and cognition

Abstract: "Understanding by building" is the promise of the "synthetic method", which supports the emerging sciences of the artificial in contributing to the scientific study of life and cognition, lato sensu, based on the construction and experimental exploration of "software", "hardware" and "wetware" models of living and cognitive processes. This talk proposes a reconstruction of the genealogy and the epistemology of reference of the synthetic method, with two main goals: defining the novelties that this methodological approach proposes with regard to the traditional way of 'doing science', and addressing the controversial issue of its relevance for the scientific understanding of natural living and cognitive phenomena.

Biography: Luisa Damiano (PhD) is Associate Professor of Logic and Philosophy of Science at the IULM University (Milan, Italy), and the coordinator of the Research Group on the Epistemology of the Sciences of the Artificial (RG-ESA). Her main research areas are: Epistemology of Complex Systems; Epistemology of the Cognitive Sciences; Epistemology of the Sciences of the Artificial. Since 2007, she has been working on these topics with scientific teams (Origins of Life Group, University of Rome Three, Rome, Italy, SynthCells EU Project; Adaptive Systems Research Group, Developmental Robotics Division, University of Hertfordshire, Hatfield, United Kingdom, Felix Growing EU Project and Aliz-é EU Project; Graduate School of Core Ethics and Frontier Sciences, Ritsumeikan University, Kyoto, Japan, Empathy and Frontier Sciences JSPS Project and Artificial Empathy JSPS Project; currently: University of Salento, Lecce, Italy, and JAMSTEC, Yokosuka, Japan, SB-AI Project; Graduate School of Core Ethics and Frontier Sciences, Ritsumeikan University, Kyoto, Japan, Artificial Empathy Project). Among her publications there are many articles, the books Unità in dialogo (Bruno Mondadori, 2009) and Living with robots (with Paul Dumouchel, Harvard University Press, 2017, originally published in French by Seuil, 2016, in Korean by HEEDAM, 2019, and in Italian by Raffaello Cortina, 2019; currently in publication in Chinese by Peking University Press) and several co-edited journal special issues (e.g., Artificial Empathy, International Journal of Social Robotics, with Paul Dumouchel and Hagen Lehmann, 2014; What can Synthetic Biology offer to Artificial Intelligence (and vice versa), BioSystems, with Yutetsu Kuruma and Pasquale Stano, 2016; Synthetic Biology and Artificial Intelligence: Towards Cross-fertilization, Complex Systems, with Yutetsu Kuruma and Pasquale Stano, 2018).

Kashimura

2021-11-4 16:00~17:30JST Online & Blg 6 room 502

鹿志村 洋次 (former Head of the Incubation Center, Research & Technology Development Division, Fuji Xerox; currently Director of the Amada AI Innovation Laboratory and Executive Officer, Head of the Amada DX Strategy Promotion Division)

Challenges and Business Opportunities in Japanese Manufacturing

Abstract: In the 20th century, Japan led the world's manufacturing, with the semiconductor and automotive industries at the forefront. In electronics in particular, Japan was called the "electronics nation", earning the world's envy while supporting and driving Japan's economic growth. In the 21st century, however, as IT/ICT became the driver of the world economy, the productivity of Japanese industry fell sharply and Japanese companies retreated from their leading role in the global economy. The era of the so-called GAFA+M / BATH+K had arrived. Yet the driving force of IT/ICT proved powerless during the COVID-19 pandemic, which exposed the fragility of local manufacturing of hardware such as masks, vaccines, and medical devices. Against this background, this talk identifies the challenges facing Japanese manufacturing and proposes the right way to pursue digital transformation (DX). It also shares a scenario in which collaboration with Tokyo University of Agriculture and Technology contributes to the Amada Group's mid- to long-term business opportunities.

Biography: 1980-85: majored in information engineering at university and graduate school, researching superconducting transistors with the goal of "beating Moore's law".
1985-2004: first half of a career at Fuji Xerox (as a researcher). R&D on contact image sensors and two-dimensional image sensors; research on flat-panel displays and analog neural networks while stationed at PARC; after returning to Japan, joint research with RIKEN on human-type information processing (brain-type chips, face recognition, AI: proactive information-presentation support, etc.). 2004-2018: second half of the Fuji Xerox career (research management). As Head of Research Promotion, redefined and transformed the research function; as Head of the Customer Value Design Center, commercialized laboratory technologies and ran proofs of concept within the research division; as Head of the Incubation Center, Director of FXPAL, and supervisor of the Innovation Office in Singapore, promoted global research business development (UX design, electronic pens, handwriting recognition, IoT, AI, robotics, etc.).
2020.01-: Director of the Amada AI Innovation Laboratory, pursuing the future of manufacturing with AI at its core.
2021.10.01: concurrently Executive Officer and Head of the Amada DX Strategy Promotion Division, strategically promoting DX for customers and the Amada Group and creating new businesses.

2021-7-20 17:00~18:30JST Online

Prof. Mohan ELARA (Singapore University of Technology and Design, Singapore)

The Rise of Self-reconfigurable Maintenance Robots

Abstract: Self-reconfigurable robots are intelligent machines capable of autonomously changing their kinematic morphologies to overcome complexities in the traversing environment or the task being handled. Their promise of a high degree of versatility, robustness, and modularity is set to open up a wide range of new applications for robots. However, developing these robots is highly challenging. While some progress has been achieved, there are still many open issues. In this talk, I will share our ongoing efforts at the Singapore University of Technology and Design towards the design, development and deployment of self-reconfigurable robots using a well-defined set of deployment use cases in the maintenance domain. I will also provide an insight into our concerted efforts to be a catalyst inspiring an ecosystem of Singapore-based robotic startups in non-conflicting and niche domains.

Biography: Dr. Mohan is currently an Assistant Professor with the Engineering Product Development Pillar at Singapore University of Technology and Design. He received his Ph.D. and M.Sc degrees from the Nanyang Technological University. His research interests are in robotics with an emphasis on self-reconfigurable platforms as well as research problems related to robot ergonomics and autonomous systems. He has published more than 150 papers in leading journals, books, and conferences. Dr. Mohan is currently serving as an Associate Editor of the IEEE Robotics & Automation Letters and IEEE Nanotechnology Magazine. He is the recipient of the SG Mark Design, ASEE Best of Design in Engineering Award, Tan Kah Kee Young Inventors’ Award and A’ Design award. He is the co-founder of Lionsbot and Oceania Robotics, robotics companies that develop a wide range of autonomous robots for specialized niche industries. He is also a visiting faculty member of the International Design Institute at Zhejiang University, China. Dr. Mohan has served in various positions of organizing and technical committees of several international competitions and conferences.

Serena Ivaldi and iCub

2021-5-27 17:00~18:30JST Online

Dr. S. Ivaldi (INRIA, France)

Collaborative robotics technologies: lessons learned using wearable sensors for ergonomics, exoskeletons and tele-operation

Abstract: From exoskeletons to cobots, collaborative robots are rapidly finding their use in the industrial domain because of their ability to physically assist humans in stressful and repetitive tasks. Combined with wearable sensors and artificial intelligence, they can help optimize workstations and improve the ergonomic conditions of workers. Even humanoid robots are finding their place: as avatars, replacing human operators in dangerous or remote environments. It is not intuitive, but this form of collaboration also requires an intelligent coupling between the human and the robot. In this talk I will present our current projects developing collaborative technologies grounded in human motion tracking. The first consists of activity recognition and intention prediction, which are necessary for ergonomics evaluation and the optimization of human movements. They are also used for humanoid teleoperation: in particular, the human movement is not only taken into account in the robot's control but also anticipated. Lastly, I will review our current work on exoskeletons, both in terms of evaluation and of lessons learned from deploying this solution in a hospital.

Biography: Serena Ivaldi is a tenured research scientist at Inria, leading the humanoid and human-robot interaction activities of Team Larsen at Inria Nancy, France. She obtained her Ph.D. in Humanoid Technologies in 2011 at the Italian Institute of Technology. Prior to joining Inria, she was a post-doctoral researcher at UPMC in Paris, France, then at the University of Darmstadt, Germany. She has been co-coordinator and PI of several collaborative projects, such as the EU projects CoDyCo (FP7) and AnDy (H2020), concerning the development of advanced anticipatory control and interaction skills for humanoid robots and exoskeletons. She has been developing whole-body tele-operation for the iCub humanoid robot and recently pioneered the use of exoskeletons in the ICU to physically assist physicians. She is Editor-in-Chief of the International Journal of Social Robotics and has been serving as Associate Editor for IEEE Robotics and Automation Letters. She was Program Chair of IEEE/RAS Humanoids 2019 and Tutorial Chair of CoRL 2021. She is co-leader of the Humanoid Robotics Group of GDR Robotique (the French robotics society). She was recently awarded the Suzanne Zivi Prize for excellence in research.



2021-4-26 18:00~19:00JST Online

Dr. F. Ho (AIST, Japan)

Multi-Agent Path Finding for Real World Applications  

Abstract: In the near future, autonomous vehicles such as drones and unmanned ground vehicles are expected to be increasingly used for a variety of applications. Hence, there is a necessity for real-world deployment of novel multi-agent systems. In this context, several challenges need to be addressed, the main objectives being ensuring safety and providing efficiency. In particular, Multi-Agent Path Finding (MAPF) has become an emerging field in the multi-agent systems and AI community, with the development of several approaches to address problems related to autonomous agents. However, several open issues and limitations remain to be addressed to effectively develop MAPF solutions for real-world applications. In this talk, I will introduce recent research on drone use cases in the context of Unmanned Aircraft System Traffic Management (UTM), along with challenges, results, and perspectives in the deployment of novel multi-agent systems.
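
As a minimal illustration of the MAPF problem setting (an invented grid example, not taken from the talk), prioritized planning plans agents one at a time, with each later agent treating the earlier agents' space-time paths as moving obstacles and avoiding both vertex conflicts (same cell, same time step) and swap conflicts:

```python
from collections import deque

# Minimal prioritized-planning MAPF sketch on a 4-connected grid.
# A path is a list of cells, one per time step; agents that finish early
# are assumed to stay at their last cell.

MOVES = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # wait + 4 directions

def plan(start, goal, size, reserved, horizon=20):
    """Time-expanded BFS avoiding space-time cells reserved by earlier agents."""
    def occupied(cell, t):
        return any(p[min(t, len(p) - 1)] == cell for p in reserved)

    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        cell, t, path = queue.popleft()
        # the goal must stay free afterwards, since the agent parks there
        if cell == goal and not any(occupied(goal, s) for s in range(t, horizon)):
            return path
        if t >= horizon:
            continue
        for dx, dy in MOVES:
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            # vertex conflict, or swap conflict with an earlier agent
            if occupied(nxt, t + 1) or any(
                p[min(t + 1, len(p) - 1)] == cell and p[min(t, len(p) - 1)] == nxt
                for p in reserved
            ):
                continue
            if (nxt, t + 1) not in seen:
                seen.add((nxt, t + 1))
                queue.append((nxt, t + 1, path + [nxt]))
    return None

# Two agents crossing a 3x3 grid: the second waits one step to let the first pass.
p1 = plan((0, 1), (2, 1), 3, [])
p2 = plan((1, 0), (1, 2), 3, [p1])
print(len(p1) - 1, len(p2) - 1)  # steps taken by each agent -> 2 3
```

Prioritized planning is only one of the simplest MAPF approaches and is not complete; optimal solvers such as Conflict-Based Search handle the cases where a fixed priority ordering fails.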

Biography: Florence Ho obtained her master's degree in Applied Mathematics & Computer Science from INP Toulouse, France in 2014 and a master's degree in Operations Research from Pantheon Sorbonne University, France in 2015. She then received her PhD in Informatics from Sokendai, Japan in 2020. She is currently a researcher at NEC-AIST (National Institute of Advanced Industrial Science and Technology) in Japan. Her research interests include optimization, multi-agent systems, autonomous vehicles, and air traffic management.



Yue

2021-1-27 15:00~16:30JST Online

Prof. Y. Hu (Tokyo University of Agriculture and Technology, Japan)

Active physical Human-Robot Interaction: a step towards “closer” robots

Abstract: Robots in science fiction have incredible capabilities and are often seen blending rather well into human society, roaming around complex populated environments. But as roboticists have well learned, real robots are still far from this possibility: robots are often still kept at safety distances, caged, or in isolated environments, far from human reach. Two research directions have been trying to break down the barrier between humans and robots: physically (pHRI) and socially (sHRI). However, these two directions have evolved with few intersections, with pHRI focused mainly on developing controllers that guarantee efficiency and physical safety, and sHRI centered on the perception and mental state of the human. For robots to really coexist and collaborate with humans, it is necessary to take into account both physical and social interactions, achieving an active physical human-robot interaction (active pHRI): a type of physical interaction in which the robot should be able to achieve tasks optimally, efficiently, and safely, while at the same time taking into account the perception of human users. In this talk, I will illustrate the experiments we performed as a first step towards achieving active pHRI by trying to understand humans, with some insights on the first results and perspectives on future developments.

Biography: Yue Hu obtained her master's degree in Advanced Robotics from the University of Genova, Italy, and Ecole Centrale de Nantes, France, in 2013. She then carried out her PhD in robotics at the Optimization in Robotics and Biomechanics Group (ORB), Heidelberg University, Germany, receiving her degree in 2017. She was a postdoc first at ORB, Heidelberg University, then at the Dynamic Interaction Control group (DIC), Fondazione Istituto Italiano di Tecnologia (IIT), in Italy. Between 2018 and 2020 she was a JSPS (Japan Society for the Promotion of Science) fellow at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan, as a member of the CNRS-AIST JRL (Joint Robotics Laboratory), IRL. She is currently an assistant professor at the Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology. Her research interests focus mainly on physical human-robot interaction, optimal control, human motion analysis, and humanoid robots.

 

2020-12-17 9:00~10:30 Online

Dr. N. Kuppuswamy (Toyota Research Institute, USA)

Manipulation's not that 'hard': Soft-bubble grippers for robust and perceptive manipulation

Abstract: Manipulation in cluttered environments like homes requires stable grasps, precise placement, and sensitivity to, and robustness against, unexpected contact. In this talk I will introduce some recent progress at the Toyota Research Institute (TRI) in tackling these hard challenges through the Soft-Bubble grippers. The gripper system combines a highly compliant gripping surface with dense-geometry visuotactile sensing and is capable of multiple kinds of tactile perception. I will first discuss various mechanical design advances that enable the realization of a soft tactile sensor at the scale of domestic objects, and a fabrication technique for depositing custom patterns on the internal surface of the sensor that enable tracking of shear-induced displacement of the manipulated object. I will then outline some of the advanced perception capabilities enabled by the system's ability to capture multimodal tactile information: tactile classification, in-hand pose estimation, and shear force detection. Lastly, I will describe experiments demonstrating the gripper's utility in several kinds of tasks in a cluttered home setting and discuss ongoing and future work.

Biography: Naveen Kuppuswamy is a Senior Research Scientist and Tactile Perception and Control Lead at the Toyota Research Institute in Cambridge, MA, USA. He received a Bachelor of Engineering from Anna University, Chennai, India, an MS in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, and a PhD in Artificial Intelligence from the University of Zurich, Switzerland. Since graduation, he has also spent time as a Postdoctoral Fellow at the Italian Institute of Technology, Genova, Italy, and as a Visiting Scientist with the Robotics and Perception Group at the University of Zurich. His current research interests lie at the intersection of robust manipulation, soft robotics, tactile sensing and contact mechanics.

Dominique Deuff

2020-9-28 17:00~18:30 in GVLab & Online

Dr. D. Deuff (Orange Labs, France)

Agile and SCRUM for a robot design project

Abstract: Having appeared around 2000, agile methods are iterative approaches to project management. They emerged in response to the black-box phenomenon of sequential V-model or waterfall development. Iterative and incremental production cycles have existed since the 1930s and 1940s, but it was only after the Agile Manifesto, signed in 2001 by 17 software development experts, that agile methods really took off in companies. The talk will introduce the philosophy underlying agile methods. Scrum, the most popular and widely used method in companies, will then be detailed in particular, introducing the practices that characterise these approaches.

Biography: Dominique Deuff is a researcher with Orange Labs in France. She received an engineering degree in digital imaging from the University of Rennes 1, France, in 1997, and a PhD in computer science from the same university in 2003. For two and a half years she worked at the National Institute of Informatics in Japan as a post-doctoral researcher. She returned to France in June 2006 to join Orange Labs as a developer. In 2008, she graduated with a Master's degree in ergonomics and applied her new skills to various projects at Orange. Since 2018, she has been a PhD student at the University of Nantes, in ergonomics and design, working on social robotics at home.

Urakami

2020-6-29 17:00~18:30 in GVLab & Online

Dr. J. Urakami (Tokyo Institute of Technology, Japan)

Social Robotics: Should intelligent systems behave like people?

Abstract: People automatically and unconsciously react to robots in social terms, even if they believe that this is not useful. Nevertheless, it is still unclear whether the similarity of a robot or intelligent system to a human is really an advantage or disadvantage for human-robot interaction. The talk will present some recent research on how the expression of emotions and empathy of intelligent systems affects user attitudes and the evaluation of such systems. I will also discuss the aspect of how to give robots a certain personality and how this could trigger or lead to the development of bias.

Biography: Jacqueline Urakami is currently an Assistant Professor at the Department of Industrial Engineering and Economics at Tokyo Institute of Technology. After receiving a Ph.D. from Chemnitz University (Germany), she conducted research in eye tracking and display design at the Faculty of Environmental Information at Keio University (Japan) as a Humboldt Fellow. In her research, she looks at social aspects of people's interaction with intelligent agents and robots, applying psychological concepts and models such as cross-cultural communication, empathy, and emotions to study these interactions.

Nunez

2020-3-17 14:00~15:30 in GVLab

E. Nunez (Tsukuba University, Japan)

Design of robotic devices to mediate human-human interaction

Abstract: My research interest is to design interfaces that mediate and support human-human interaction. I believe that human interaction problems can be generalized to a “lack of information.” Many different designs can be proposed to bridge the gap and facilitate social interaction. During my talk, I would like to introduce our solution, which mostly consists of robotic devices that support touch-based interaction and connectivity among users. I will briefly describe the design and evaluation method, which involves the use of sensor data to describe users’ behavior and experience.

Biography: Eleuda Nunez is a postdoc researcher in the Artificial Intelligence Laboratory, University of Tsukuba. Currently, she is part of the CREST project: Social Signaling. Her favorite activities involve designing and developing devices and then studying how they affect people's social interaction, for better or worse.


Beatriz Aoki

2020-2-17 14:00~15:30 in GVLab

B. Aoki (Pontifical Catholic University of São Paulo, Brazil)

Affective robotics and the human-machine stereotype in contemporary Japan

Abstract: The relationship between humans and other human-made entities (such as statues, dolls, puppets and, more recently, robots) has long been a theme of interest and discussion, from science fiction to academic research. In this talk, we will discuss human-non-human affective bonds, considering historical, cultural and religious views, as well as possible ethical implications and the power relations that emerge from these interactions. While focusing more specifically on the Japanese context of social robots, it is important to consider the context in which these relations are constituted and the developing market for robots made for companionship or love. With such a scenario in mind, how can we define what it means to be a person? And how can the interaction with social robots affect the way we relate to each other?

Biography: Beatriz Aoki is a PhD student in Communication and Semiotic Studies at the Pontifical Catholic University of São Paulo, Brazil, and currently an exchange research student in the Philosophy program at the University of Tsukuba, through a scholarship provided by the Brazilian government (CAPES-PSDE). Her current research interests include human-robot interaction and the resulting affective relationships and emotional bonds, especially concerning Japanese society, as well as the political implications of this discussion.

Dana

2019-12-2 15:00~16:30 in GVLab

Prof. D. Kulic (Monash University, Australia)

The role(s) of robots in education

Abstract: Recently, many researchers have explored the use of robots in educational settings, to enhance education, provide personalisation, and increase student engagement. In this talk, I will describe our recent work investigating the use of social robots in educational settings. What should be the role of the robot? How should it integrate with existing educational materials and other technologies? And how does the robot influence students' experience? I will describe our experiments and findings with different robot implementations and user groups, highlighting both promising initial results and current limitations and open problems.

Biography: Prof. Dana Kulić conducts research in robotics and human-robot interaction (HRI), and develops autonomous systems that can operate in concert with humans, using natural and intuitive interaction strategies while learning from user feedback to improve and individualize operation over long-term use. She received the combined B.A.Sc. and M.Eng. degree in electro-mechanical engineering, and the Ph.D. degree in mechanical engineering, from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2006 to 2009, Dr. Kulić was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan. In 2009, Dr. Kulić established the Adaptive Systems Laboratory at the University of Waterloo, Canada, conducting research in human-robot interaction, human motion analysis for rehabilitation, and humanoid robotics. Since 2019, Dr. Kulić has been a professor and director of Monash Robotics at Monash University, Australia.

Dan Lofaro

2019-11-26 15:00~16:30 in GVLab

Prof. D. M. Lofaro (George Mason University, USA)

Robots in the Real-World

Abstract: This seminar focuses on the overarching topic of robots in the real world. Special emphasis will be given to the DARPA Robotics Challenge as well as humanoid, legged, and swarm robotics. Examples of real-world co-robot and robot-only tasks will be given and explained. Specifically, these examples will include our work in co-robotics with adult-size and child-size humanoid robots such as the Hubo, DARWiN-OP, Nao, Meka, and the MDS robots. Additionally, examples of our swarm robotics research with the Lighter-than-air Autonomous Agents (LTA3) will be given, along with a brief overview of how robotics and AI are shaping our world, society, and politics.

Biography: Prof. Daniel M. Lofaro, Ph.D., is currently an Assistant Professor in the Department of Electrical and Computer Engineering at George Mason University. He is an affiliate faculty member at the U.S. Naval Research Laboratory (NRL) in the Navy Center for Applied Research in Artificial Intelligence (NCARAI) within the Laboratory for Autonomous Systems Research (LASR). Lofaro is also the director of Lofaro Labs Robotics, which is part of the international laboratory group called the DASL Autonomous Systems Lab Group (DASL Group). An NSF-EAPSI and ONR-SFRP Fellow, he received his doctorate from the ECE Department at Drexel University in 2013 under the guidance of Dr. Paul Oh. He was the Research Lead of the DARPA Robotics Challenge team DRC-Hubo from 2012 to 2014. His research focus is the overarching field of real-world robotics. Within this, his research interests include swarm robotics, emergent behaviors, real-world human-robot interaction, and humanoid robotics.

Tilman

2019-10-7 15:00~16:30 in GVLab

T. Hartwig (Institute for Physics of Intelligence, University of Tokyo, Tokyo, Japan)

Machine Learning for Classification of Astronomical Data 

Abstract: I will present decision trees as a very efficient machine learning method for classifying astronomical data. A labelled training sample is split according to the available features by requiring that each split minimise the information entropy of the assigned classes. This elegant mathematical formulation allows us to construct decision trees with supervised learning, which can then be applied to classify new observations. Finally, I will present recent results of my own research: by classifying the chemical abundance patterns of metal-poor stars in the Milky Way, we can derive the multiplicity of the first generation of stars in the Universe. Furthermore, this approach provides the feature importance needed to identify the chemical elements that are crucial for classifying metal-poor stars, which can be used to optimise future spectroscopic surveys of Milky Way stars.
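
The entropy-minimising split described in the abstract can be sketched in a few lines of Python (an illustrative toy example, not the speaker's code; the feature values and class names are invented):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(feature, labels):
    """Find the threshold on one numeric feature that minimises the
    weighted entropy of the two resulting subsets (i.e. maximises
    information gain)."""
    n = len(labels)
    best_t, best_h = None, float("inf")
    for t in sorted(set(feature)):
        left = [y for x, y in zip(feature, labels) if x <= t]
        right = [y for x, y in zip(feature, labels) if x > t]
        if not left or not right:
            continue  # skip degenerate splits
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / n
        if h < best_h:
            best_t, best_h = t, h
    return best_t, best_h

# Toy data: one abundance-like feature separating two stellar classes
feature = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]
labels = ["single", "single", "single", "multiple", "multiple", "multiple"]
t, h = best_split(feature, labels)
# t == 0.3 separates the classes perfectly, so the weighted entropy h is 0.0
```

A full decision tree applies this split search recursively to each resulting subset until the leaves are pure or a stopping criterion is met.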

Biography: Dr. Tilman Hartwig is an assistant professor at the Institute for Physics of Intelligence. He received his Master's degree from the University of Heidelberg in 2014. After an internship at the University of Texas at Austin, he started his PhD at the Institut d'Astrophysique de Paris. He obtained his PhD two years ago and continued as a JSPS Postdoctoral Fellow at the University of Tokyo, studying the formation of the first stars in the Universe. At the end of last year, he was promoted to Assistant Professor at the Institute for Physics of Intelligence within the University of Tokyo. This new, interdisciplinary institute investigates the benefits of machine learning and artificial intelligence for different fields of science. Besides his research, Tilman Hartwig is also engaged in several public outreach activities in West Africa.

 

2019-7-8 10:00~11:30 in GVLab

I. Farkhatdinov (Queen Mary University of London, UK)

Interactive Robotics for Human Assistance, Telerobotics and Virtual Reality    

Abstract: In this talk I will provide an overview of my research on human-robot physical interaction with emphasis on assistive technologies. In particular, I will share recent results on assisting human gait and balance with walking robotic exoskeletons, assistive wheelchair applications and interactive robotic interfaces for navigation in virtual reality. The talk will also include an overview of several student projects I run at Queen Mary University of London.

Biography: Dr Ildar Farkhatdinov is an Assistant Professor at the School of Electrical Engineering and Computer Science at Queen Mary University of London (QMUL) and an Honorary Lecturer at the Department of Bioengineering of Imperial College London, United Kingdom. Before joining QMUL he was a research associate at Imperial College London, where he was involved in human-robot interaction research in the European projects BALANCE, SYMBITRON and COGIMON. He received his PhD from University Pierre and Marie Curie (Paris Sorbonne) in 2013 (Paris, France), his MSc in Mechanical Engineering from KoreaTech in 2008 (Cheonan, South Korea), and his BSc in Automation and Control from Moscow State University of Technology STANKIN in 2006 (Russia). His primary research interests are in the field of human-robot/computer interaction with applications to assistive technologies, telerobotics and virtual reality.

Liming Chen

2019-6-3 15:30~16:30 in GVLab

L. Chen (Ecole Centrale de Lyon, France)

Giving eyes and intelligence to grasping robots

Abstract: The skill of grasping objects is a major human dexterity. However, despite years of research, grasping objects by robots, i.e., robotic grasping, is still problematic, as current robots are unable to automatically understand the scene, locate the objects, and determine the grasp parameters (e.g., the opening size of the gripper, the force to be applied). In this talk, I will give an overview of our recent research, e.g., object instance segmentation for scene understanding and grasp position prediction, mainly based on deep learning with simulated data, to endow grasping robots with human-like vision capabilities.

Biography: Liming Chen is a Professor in the Department of Mathematics and Computer Science, Ecole Centrale de Lyon, University of Lyon, France. He received his BSc in Mathematics and Computer Science from the University of Nantes in 1984, and his MSc and PhD in Computer Science from the University Pierre and Marie Curie (Paris 6) in 1986 and 1989, respectively. He was an associate professor at the Université de Technologie de Compiègne before joining Ecole Centrale de Lyon as a Professor in 1998. He served as Chief Scientific Officer of the Paris-based company Avivias from 2001 to 2003, and as scientific multimedia expert at France Telecom R&D China in 2005. He was head of the Department of Mathematics and Computer Science from 2007 through 2016. His current research interests include computer vision, machine learning, image and video analysis and categorization, face analysis and recognition, and affective computing. Liming has over 250 publications and has successfully supervised 40 PhD students. He has been a grant holder for a number of research grants from the EU FP programme, French research funding bodies and local government departments. He has guest-edited three journal special issues, is an associate editor of the EURASIP Journal on Image and Video Processing, and is a senior IEEE member.

Randa Mallat

2019-4-22 16:00~17:00 in GVLab

R. Mallat (Univ. Paris-Est Creteil, France)

Toward an Affordable Multi-Modal Motion Capture System Framework for Human Kinematics and Kinetics Assessment 

Abstract: The accurate quantification of human motor acts is a major concern in many applications, including rehabilitation, robotics, sport and industrial ergonomics. For a human motion capture system to be suitable for “outside-laboratory” applications, such as clinical and home applications, it should be simple to use, transportable, affordable and small enough not to impact the patient's motion, while remaining accurate. In this presentation, we propose a very affordable and user-friendly motion capture system that combines measurements from low-cost Inertial Measurement Units (IMU) and a set of camera-tracked Augmented Reality (AR) markers in a multi-modal Extended Kalman Filter based on the biomechanical model of the investigated segments. The system has been experimentally tested to perform the dynamic identification of a human-exoskeleton lower-limb model as well as to reproduce human upper-limb motions during different daily rehabilitation tasks. An assessment against a gold-standard stereophotogrammetric system showed the ability of the proposed low-cost system to estimate body joint angles more accurately than state-of-the-art alternatives.
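
The sensor-fusion idea behind such a system can be illustrated with a minimal scalar Kalman filter: a drifting rate sensor drives the prediction step (IMU-like) and an absolute but noisy measurement drives the update step (AR-marker-like). This is only a sketch, not the multi-modal EKF of the talk; all constants and measurements below are made-up placeholders:

```python
def kalman_step(x, P, u, z, dt=0.1, Q=0.01, R=0.5):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict: integrate the rate measurement u (drifts over time)
    x_pred = x + u * dt
    P_pred = P + Q
    # Update: correct with the absolute measurement z (noisy but drift-free)
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                        # initial state estimate and variance
for u, z in [(1.0, 0.12), (1.0, 0.21), (1.0, 0.33)]:
    x, P = kalman_step(x, P, u, z)
# The variance P shrinks as the two sensor streams are fused
```

The EKF used in the talk generalizes this scalar recursion to a nonlinear biomechanical state model with vector-valued IMU and marker measurements.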

Biography: Randa Mallat is currently a Ph.D. student in the Laboratory of Image, Signal and Intelligent Systems, University of Paris-Est, France. During 2019, she is a research student at the Tokyo University of Agriculture and Technology in Tokyo, Japan. Her current research interests include motion analysis, geometrical calibration and dynamic identification of a human-robot system using affordable sensors. She received a Bachelor's degree in computer and communication engineering in 2017 from the Lebanese University, Faculty of Engineering. In parallel, in 2017, she obtained her first-ranked Master's degree in information technology, intelligence and control of cyber-physical systems from the Lebanese University in collaboration with the University of Paris-Est, France. She received the first prize in the Order of Engineers competition among the final-year projects of universities in North Lebanon.

Anup Nandy

2019-3-6 16:00~17:30 in GVLab

Prof. A. Nandy (NIT Rourkela, India)

A Multi-modal Gait Analysis System for Personal and Daily Living Care

Abstract: General tools for gait analysis are 3-D motion capture systems, force plates, pressure mats, etc. 3-D motion capture (Mocap) systems can track body joint movements with high accuracy. Their main drawback is that they are highly expensive and difficult to set up. Pressure mats and force plates are inexpensive, but they can only measure foot-placement-based parameters and are unable to track other body parts' movements or joint angles. This makes such devices accessible only in clinical environments, not for personal care. The objective of this talk is to discuss how to build an affordable, efficient gait analysis system that can be used even on a daily basis. A regular check on gait patterns may help patients during rehabilitation or help identify the cause of any abnormality. Recently, among vision sensors, the Microsoft Kinect has emerged as a possible cost-effective gait analysis tool, with some limitations. Wearable sensors such as IMUs (Inertial Measurement Units) and EMG (electromyography) can also be used to measure different spatio-temporal features of a gait pattern. The multi-modal gait analysis system has two major components: vision sensors such as Kinects, and wearable sensors such as IMU, EMG and EEG (electroencephalography) sensors. Multiple Kinects are more effective for capturing joint position data from different angles; these data are then fused to measure the actual joint angles, i.e. the hip, knee and ankle angles. High-density EEG can provide a window into human brain function during real-world activities with different gait patterns. Understanding human psychology and estimating mental state via gait analysis is a new area of research in cognitive science. The current research evaluates the validity of EEG signals measured during walking and correlates the results with state-of-the-art studies to find a relationship between gait and cognitive mechanisms.
Hence, probable uses of this system in medical applications are as follows: a Parkinson's seizure detection tool, a stroke detection tool, a cerebral-cortical disease prediction tool, and neurorehabilitation using machine learning and artificial intelligence.
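
The joint-angle computation mentioned above can be illustrated in a few lines: the angle at a joint follows from the dot product of the two segment vectors formed by three tracked 3-D positions. This is a generic sketch, not the speaker's pipeline; the coordinates below are invented:

```python
import math

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b formed by 3-D points a-b-c,
    e.g. hip-knee-ankle positions from a Kinect skeleton."""
    v1 = [ai - bi for ai, bi in zip(a, b)]   # segment b -> a
    v2 = [ci - bi for ci, bi in zip(c, b)]   # segment b -> c
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# A straight leg: hip directly above knee directly above ankle
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.0)
angle = joint_angle(hip, knee, ankle)  # 180.0 degrees
```

Fusing joint positions from multiple Kinects before applying such a computation reduces occlusion errors, since each camera sees the joints from a different angle.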

Biography: Dr. Anup Nandy is working as an Assistant Professor in the Computer Science and Engineering Department at NIT Rourkela. He earned his Ph.D. from IIIT Allahabad in 2016. His research interests include machine learning, image processing, human cognition, robotics and human gait analysis. He received the Early Career Research Award from SERB, Govt. of India, in 2017 for conducting research on “Human Cognitive State Estimation through Multi-modal Gait Analysis”. He also received research funding for an Indo-Japanese joint research project funded by DST, Govt. of India. He recently received an NVIDIA GPU Grant Award in 2018 for his research on human gait analysis for abnormality detection.

   
Wael Suleiman

2019-1-28 10:30~12:00 in L1113

Prof. W. Suleiman (Sherbrooke University, Canada)

Optimization and Imitation Problems for Humanoid Robots

Abstract: The ability of humanoid robots to execute complex tasks is increasing rapidly. The latest trends in humanoid research are to increase their autonomy as well as to improve their stability and the smoothness of their motions. A key to this end is optimizing a set of appropriate objective functions that are often task-related. In this talk, I will first give an overview of the general problem of optimizing a nonlinear function subject to nonlinear constraints. I will then describe how to apply that formulation to three case studies: (I) minimizing the exerted torques while improving the stability of a humanoid robot during the execution of a motion; (II) imitating human captured motions with a humanoid robot; (III) the time parameterization problem for a humanoid robot path. Experimental validations of the proposed approaches on the humanoid robot HRP-2 will be shown and analyzed.
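
The kind of constrained nonlinear optimization the talk builds on can be sketched with a simple quadratic-penalty method. This is a toy illustration with a made-up objective and constraint, not the humanoid formulation itself: a quadratic cost stands in for a torque-like objective and a circular bound stands in for a joint-limit-like constraint:

```python
# Toy problem (illustrative only):
#   minimise   f(q) = (q1 - 2)^2 + (q2 - 1)^2    (torque-like cost)
#   subject to g(q) = q1^2 + q2^2 - 1 <= 0       (joint-limit-like constraint)

def penalized(q, mu):
    f = (q[0] - 2.0) ** 2 + (q[1] - 1.0) ** 2
    g = q[0] ** 2 + q[1] ** 2 - 1.0
    return f + mu * max(0.0, g) ** 2   # penalise constraint violation

def grad(fun, q, eps=1e-6):
    """Central-difference numerical gradient."""
    out = []
    for i in range(len(q)):
        qp, qm = list(q), list(q)
        qp[i] += eps
        qm[i] -= eps
        out.append((fun(qp) - fun(qm)) / (2.0 * eps))
    return out

q = [0.0, 0.0]
for mu in [1.0, 10.0, 100.0, 1000.0]:   # gradually tighten the penalty
    lr = 0.1 / mu                       # smaller steps as the penalty stiffens
    for _ in range(2000):
        d = grad(lambda v: penalized(v, mu), q)
        q = [qi - lr * di for qi, di in zip(q, d)]
# q approaches the constrained optimum (2, 1)/sqrt(5) ~ (0.894, 0.447)
```

Production solvers replace this penalty loop with dedicated nonlinear programming methods (e.g. SQP or interior-point), but the structure — a task-related objective minimized under physical constraints — is the same.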

Biography: Wael Suleiman received the Master's and Ph.D. degrees in automatic control from Paul Sabatier University, Toulouse, France, in 2004 and 2008, respectively. He is currently an Associate Professor in the Electrical and Computer Engineering Department, Faculty of Engineering, University of Sherbrooke, Sherbrooke, Canada. His research interests include humanoid and collaborative industrial robot control, motion planning and optimization.
Dr. Suleiman was a Postdoctoral Fellow of the Japan Society for the Promotion of Science (JSPS) at JRL, AIST, Tsukuba, from 2008 to 2010.

Giard

2019-1-17 14:30~16:00 in GVLab

Dr. A. Giard (Paris Nanterre University, France)

From Empty Dolls to Useless Robot: the Logic underlying Japanese Anthropomorphic Technologies

Abstract: Most robots are developed with the aim of helping people by taking over the Dull, Dirty and Dangerous tasks, known as the “3 Ds of robotization”. Soon it will be the 4 Ds, if we add the Dear tasks, with the development of social robots designed to create social bonds and display emotional reactions. Such social robots are usually designed to look like humans, to speak, to learn, to move their eyes, to play chess, etc., in order to optimize their so-called “efficiency” as social tools, which means that in the end robots should be able to do what humans do, and of course, do it better…
In Japan, however, many social robots are not made to realistically mimic humans. On the contrary: many relational artifacts – downloadable girlfriends, holographic spouses, love dolls as well as robots – are made to look like useless toys, childish objects or stupid gadgets. This makes them particularly precious in human society, especially nowadays, in the ideological context of the empowered woman and the performant man.
I would like to discuss these Japanese technological choices from my own field of research, the market of love dolls, which are often designed to look dumb. Why do they have empty faces and pets’ names? Is it because Japanese customers are “lacking some sort of emotional connection in their life”, as some media state? Or because dolls’ users project their male sexual insecurity onto a pathetic female body? Exploring why people engage in affective relationships with non-humans that are openly made to seem handicapped, unreliable or vulnerable should shed light on our own framework of thinking.

Biography: Agnes Giard is an anthropologist working on the industry of human affective surrogates (low-, middle- and high-tech) in the context of Japanese national depopulation. Her research tackles the consumption of emotional commodities – such as digital lovers or VR spouses – and the stigma attached to o-hitori-sama, single people who are held responsible for the falling birthrate. Her latest book, Un Désir d’Humain, received a distinction from the ICAS Book Prize: it was selected as one of the « 5 best books published in France in the field of Asian Studies » in 2017. The book deals with “artificial life” systems, i.e. love dolls, framed within a symbolic system of failure, lack and loss. She is an associate researcher at Sophiapol, a laboratory dedicated to the socio-anthropology of emotion and social exclusion at Paris Nanterre University.

Dana

2018-12-4 10:30~12:00 in L1113

Prof. D. Kulic (University of Waterloo, Canada)

Inverse Optimal Control for Modeling and Understanding Human Movement

Abstract:  The human body is capable of a wide range of agile, dexterous and complex movement that is also energy efficient.  It is hypothesized that the central nervous system generates movement by optimizing certain criteria, which may be task and context dependent.  In this talk, I will review the techniques of inverse optimal control, a methodology for estimating the controller objective function given observations of the system trajectory.  I will next describe our recent work on applying inverse optimal control to elucidate the control objectives of human movement.  I will describe a computational framework for extracting the control objectives that is capable of handling incomplete observations and time-varying control objectives.  The approach will be illustrated on a variety of human motion datasets.

Biography: Dana KULIC (クリチ ダナ) received the combined B. A. Sc. and M. Eng. degree in electro-mechanical engineering, and the Ph. D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2002 to 2006, Dr. Kulic worked with Dr. Elizabeth Croft as a Ph. D. student and a post-doctoral researcher at the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during the interaction. From 2006 to 2009, Dr. Kulic was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. Dr. Kulic is currently an Associate Professor at the Electrical and Computer Engineering Department at the University of Waterloo, Canada. Her research interests include robot learning, humanoid robots, human-robot interaction and mechatronics.

   
Sung Park

2018-11-26 15:00~16:00 in GVLab

Dr. S. Park (Tokyo Institute of Technology, Japan)

Affective interaction in AI devices and services

Abstract: Newer voice-interaction systems such as Amazon Echo and Google Home have been introduced, providing a voice experience from a dedicated device, which demands a different kind of UX design. Voice interaction is inherently social because of its human-like characteristics and elicits social responses from the user. Consequently, design methods and processes should be refined to achieve a cohesive and effective voice experience. In this talk, I will outline the fundamentals of voice UX design and identify the opportunities presented by the social and affective characteristics of the voice modality. Examples will be drawn from the design of SKT Nugu and the social robot Vyo.

Biography: Sung Park values rigorous psychological science that can provide foundational guidance to design. He has a lifelong quest to design useful and usable intelligent systems and thereby contribute to their mass adoption. During his tenure at Samsung Electronics, Sung led the interaction and ergonomic design of Samsung's first-generation medical devices, including the UGEO ultrasound and XGEO X-ray systems; the design of the latter won the IDEA Gold award and contributed to their successful market penetration. He then led the UX Engineering Group, responsible for renovating Samsung's design methods and processes. Most recently, at SK Telecom, Sung led the UX design of the social robot Vyo and the voice recognition speaker Nugu, whose voice interaction contributed to the success of the product. He is now a research fellow at Tokyo Institute of Technology with a research interest in empathic interaction with AI devices and services.

MLB

2018-10-25 15:00~16:00 seminar in GVLab

Prof. M.-L. Bourguet (Queen Mary University London, UK)

Can robots be good teachers?

Abstract: Replicating a good teacher’s skills is a challenge, but when it comes to holding students’ attention, social robots (real and virtual) can have an advantage, as they can be made to look playful and non-judgmental. They are not expected to display the complex but not always useful behaviour of a human instructor, so their gestures and postures can be optimised to sustain students’ attention and learning. Even more importantly, they can be programmed to react to students’ behaviour in real time and thus improve engagement and adapt to the students’ needs and demands. But numerous questions remain unanswered. What relationships exist between a lecturer’s gestures and their pedagogical intentions? What social signals should embodied pedagogical agents display? What type of avatar behaviour best fosters students’ satisfaction and understanding of a lecture? Can robots replace teachers?

Biography: Marie-Luce Bourguet is Senior Lecturer in Computer Science at Queen Mary University of London (QMUL), UK and Senior Fellow of the Higher Education Academy. She leads the Joint Programme on Multimedia between QMUL and the Beijing University of Posts and Telecommunications (BUPT) in China. She received her MSc and PhD in Signal Processing from the Grenoble Institute of Technology. She was a researcher at the Secom Intelligent Systems Laboratory in Tokyo between 1993 and 1995 and a research fellow from the European Commission and the Japan Science and Technology Agency at NHK Science and Technology Laboratory between 1996 and 1998, also in Tokyo. Her research interests include multimodal signal processing, human-computer and human-robot interaction, interface aesthetics and pedagogical agents.

Hiroyasu Miwa

Marketta Niemela

2018-7-4 15:00~17:00 double seminar in Ellipse 3F Hall

Dr. H. Miwa (National Institute of Advanced Industrial Science and Technology, Japan)

Measurement of Nursing-care Service Processes

Abstract: The nursing-care service, which supports elderly people with disabilities in daily life, is one of the most important services in an aging society. However, it is difficult for caregivers to record their daily operations or service processes during work because of their busy schedules. Therefore, we have been developing technologies that make it easy to record their daily operations and what they notice about elderly people.
We have collaborated with nursing-care facilities and measured their service processes and quality using time and motion study. The results are expected to help design and improve service processes, new equipment and education programs.

Biography: Hiroyasu Miwa, PhD in Engineering, was a research associate at Waseda University (2003-2005) and has been a senior research scientist at the National Institute of Advanced Industrial Science and Technology (AIST) since 2005. His current research interests are the measurement and modeling of service processes and human functions. In his research on service process modeling, he studied and modeled elderly-care service systems in Japan and Finland, as part of the METESE project, through the measurement of service processes. In his research on human function modeling, he has analyzed human swallowing motion with EMG and sound signals.

Dr. M. Niemelä (VTT Technical Research Centre of Finland)

Field studies with social robots: Experiences from shopping mall and elderly care

Abstract: Social robots are currently being introduced in many arenas of society, for instance in elderly care and in public places such as shopping malls. People often do not know what to expect from such robots, and robots can also raise negative thoughts and even fears. Field trials give insight into the expectations and feelings of potential users of social robots and, in particular, into the challenges that arise when social robots are to be integrated into daily services for people.

Biography: Dr. Marketta Niemelä is a Senior Scientist at VTT Technical Research Centre of Finland. Her research focuses on the human-driven design of robot-based services, social and service robots in society, and the challenges of integrating robotics into service systems. Her research group works in three large-scale national/international collaborative projects: Robots and the Future of Welfare Services (ROSE), Meaningful Technologies for Seniors (METESE) and Multi-Modal Mall Entertainment Robot (MuMMER). She has contributed a number of conference and journal papers in the field of social and service robots, user-centred design and the ethics of future ICTs. She was educated in psychology (Ps.M.) and completed her doctoral dissertation (2003) on human-computer interaction in information systems and computer science at the University of Jyväskylä, Finland.

   
Lorenzo Jamone

2018-5-7 14:30~15:30 in Ellipse 3F Hall
Prof. L. Jamone (Queen Mary University of London, UK)
Biological inspiration and data driven learning for a future generation of intelligent robots

This seminar is part of the Japanese council of IFToMM Robotics Seminars

Abstract: The robots of today are mainly employed in heavy manufacturing industries (e.g. automotive): these are big robotic manipulators which perform simple and repetitive tasks in very structured environments, with high speed and accuracy, in areas of the factory where humans have no access for safety reasons. The robots of the future will be different. They will perform more complex tasks in more complex, unstructured environments, even in collaboration with humans. They will be more intelligent machines. How will this be achieved? Explicit insights from biology, advanced machine learning and AI techniques, and well-established control and engineering principles have to be combined and properly integrated. In the talk I will report the main outcomes of the research I have conducted during the past 12 years: humanoid robots that are able to learn representations of their own bodies and of the external environment through interactive exploration, and that can eventually display robust problem-solving capabilities in unstructured settings.

Biography: Lorenzo Jamone is a Lecturer in Robotics at Queen Mary University of London (UK). He received his MS in Computer Engineering from the University of Genoa in 2006 (with honors), and his PhD in Humanoid Technologies from the University of Genoa and the Italian Institute of Technology (IIT) in 2010. He was an Associate Researcher at the Takanishi Laboratory at Waseda University from 2010 to 2012, and an Associate Researcher at VisLab (Instituto Superior Tecnico, Lisbon, Portugal) from 2012 to 2016. His research interests include cognitive humanoid robots, motor learning and control, and force and tactile sensing.

Kristiina Jokinen

2018-4-9 14:30~15:30 in room 9-452
Prof. K. Jokinen (AIRC, AIST-Waterfront, Tokyo, Japan)
Conversing with Social Robots - issues and challenges in natural human-robot dialogues

Social robots acting and interacting in the physical world and providing information to human users also need language communication capabilities so that they can interact with users in a natural manner. In this talk I will focus on issues related to dialogue modelling that enables language interaction between users and social robots. In particular, I will discuss dialogue design that takes into account the multimodal nature of interaction (speech, gaze, gestures) as well as knowledge of the world and human activities, so as to enable reasoning and interaction between human users and robot agents. Examples are drawn from our work to create a framework for automated social agents that can assist human caretakers in their collaborative activities in service industries such as nursing, caregiving, and education.

Kristiina Jokinen is Senior Researcher at the AI Research Center (AIRC) at AIST Tokyo Waterfront. Before joining AIRC, she was Professor and Project Manager at the University of Helsinki and at the University of Tartu. She received her PhD from UMIST, Manchester, and was awarded a JSPS Fellowship for research at NAIST, Japan. She was Invited Researcher at ATR Research Labs in Kyoto, and Visiting Professor at Doshisha University in Kyoto in 2009-2010. She was Nokia Foundation Fellow at Stanford in 2006, and she is a Life Member of Clare Hall at the University of Cambridge. Her research focuses on spoken dialogue systems, corpus analysis, and cooperative and multimodal human-robot communication. She has published widely on these topics, including three books. Together with G. Wilcock she developed the WikiTalk open-domain dialogue application for social robots, which won Special Recognition for Best Robot Design (Software Category) at the International Conference on Social Robotics 2017. She has had a leading role in multiple national and international cooperation projects. She served as General Chair for SIGDial 2017 and ICMI 2013, Area Chair for Interspeech 2017 and COLING 2014, organised the northernmost dialogue conference, IWSDS 2016, in Lapland, and edited the Springer book "Dialogues with Social Robots" (LNEE 427).

Florentin 2018-3-9 10:30~11:30 in room 9-452
Prof. F. Wörgötter (Georg-August-Universität Göttingen, Germany)
Helping a Robot to Understand Human Actions and Objects

Humans are able to perform a wide variety of complex actions manipulating a very large number of objects. We can make predictions about the outcome of our actions and about how to use different objects. Hence, we have excellent action and object understanding. Artificial agents, on the other hand, still miserably fail in this respect. It is particularly puzzling how inexperienced, young humans can acquire such knowledge, bootstrapped by exploration and extended by supervision. In this study we have therefore addressed the question of how to structure the realm of actions and objects into dynamic representations which allow for the learning of different action and object concepts. The central idea behind this is to adopt a grammatical view of actions as well as objects and to represent them using different syntactical elements. Performing a variety of manipulation actions on a table top (e.g. the actions of "making a breakfast"), we show that this indeed leads to a kind of implicit (un-reflected) understanding of action and object concepts, allowing an agent to generalize actions and redefine object uses according to need.

Florentin Wörgötter studied biology and mathematics at the University of Düsseldorf, Germany. He received a Ph.D. degree, studying the visual cortex, from the University of Essen, Germany, in 1988. From 1988 to 1990, he did research in computational neuroscience at the California Institute of Technology, Pasadena. He became a Researcher at the University of Bochum, Germany, in 1990, where he investigated experimental and computational neuroscience of the visual system. From 2000 to 2005, he was a Professor for computational neuroscience in the Psychology Department, University of Stirling, U.K., where his interests turned strongly towards "Learning and Adaptive Artificial Systems". Since July 2005, he has been the Head of the Computational Neuroscience Department at the Bernstein Center for Computational Neuroscience, Inst. Physics 3, University of Göttingen, Germany. His current research interests include information processing in closed-loop perception-action systems (animals, robots), sensory processing (vision), motor control, and learning/plasticity, which are tested in different robotic implementations. This work has recently turned more and more towards issues of artificial cognition, addressing the problems of human action and object understanding and how to transfer these to machines.

Laumond 2018-1-23 10:30~12:00 in room L1113 GIR seminar
Prof. Jean-Paul Laumond (LAAS, France)
From the rolling car to the rolling man

This seminar presents the mathematical notions underlying the differential equations that model the motion of a wheel. I will focus on the concept of nonholonomy. We will see how the concept is effective for mobile robot motion planning (how do you park a car?), how it helps us better understand human locomotion, and how it suggests new mechanical designs for humanoid robots.
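The canonical textbook example behind these ideas (a generic illustration of a rolling wheel, not necessarily the exact formulation used in the talk) is the unicycle model:

```latex
\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,
\qquad \text{subject to} \qquad \dot{x}\sin\theta - \dot{y}\cos\theta = 0 .
```

The last relation says the wheel cannot slip sideways. Because it constrains velocities but cannot be integrated into a constraint on the configuration (x, y, θ) alone, the system is nonholonomic, which is exactly why parking a car requires maneuvering.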

Jean-Paul LAUMOND is a French roboticist. He is Directeur de Recherche at LAAS-CNRS (team Gepetto) in Toulouse, France. He received the M.S. degree in Mathematics, the Ph.D. in Robotics and the Habilitation from the University Paul Sabatier at Toulouse in 1976, 1984 and 1989 respectively. From 1976 to 1983 he was a teacher in Mathematics. He joined CNRS in 1985. In Fall 1990 he was an invited senior scientist at Stanford University. He was a member of the French Comité National de la Recherche Scientifique from 1991 to 1995, and a co-director of the French-Japanese lab JRL from 2005 to 2008. He was coordinator of two European Esprit projects, PROMotion (Planning RObot Motion, 1992-1995) and MOLOG (Motion for Logistics, 1999-2002), both dedicated to robot motion planning and control. In 2001 and 2002 he created and managed Kineo CAM, a spin-off company from LAAS-CNRS devoted to developing and marketing motion planning technology. Kineo CAM was awarded the French Research Ministry prize for innovation and enterprise in 2000 and the third IEEE-IFR prize for Innovation and Entrepreneurship in Robotics and Automation in 2005. Siemens acquired Kineo CAM in 2012. In 2006, he launched the research team Gepetto, dedicated to Human Motion studies along three perspectives: artificial motion for humanoid robots, virtual motion for digital actors and mannequins, and natural motion of human beings. He teaches Robotics at the Ecole Normale Supérieure in Paris. He was the 2011-2012 recipient of the Chaire Innovation technologique Liliane Bettencourt at the Collège de France in Paris, and the 2016 recipient of the IEEE Inaba Technical Award for Innovation Leading to Production. He is a member of the French Academy of Technologies and of the French Academy of Science.

Ken Ohta 2018-1-19 15:00~16:30 in room 9-452
Dr. K. Ohta (NTT Communication Science Laboratories, Japan)
Movement Pattern Formation in Sports

Many sports are competitions over records such as speed, and their movement patterns are strongly constrained by the dynamics of the body and of the environment it interacts with. Movement is of course also constrained physiologically and neuroscientifically, but when it comes to movement pattern formation, humans appear to be poor at moving inefficiently: we cannot escape the rule of mechanics. In this talk, I will take up examples of movement pattern formation in the hammer throw, golf, and horse gaits, and conclude by introducing the idea of cybernetic training.

Senior Researcher at OptiTrack Japan, Inc., and Guest Researcher in the Sports Brain Science Project at NTT Communication Science Laboratories. Specialties: sports biomechanics and sports engineering. He graduated from the Department of Geophysics, Faculty of Science, Hokkaido University, and received his Ph.D. in Sport Sciences from the University of Tsukuba. He then held postdoctoral positions at the RIKEN Bio-Mimetic Control Research Center and at the Max Planck Institute for Human Cognitive and Brain Sciences, and worked as a contract researcher at the Japan Institute of Sports Sciences, as a Project Associate Professor at Keio University SFC, and as a researcher at the Institute of Systems, Information Technologies and Nanotechnologies (ISIT) in Kyushu, before taking up his current position in April 2017.

OcnarescuCossin 2017-11-20 15:00~16:30 in room 9-452
Dr. I. OCNARESCU and Dr. I. COSSIN (Strate design, France)
Design and social robots

Abstract: This talk presents the research activities of Strate Research, the research and innovation department of Strate School of Design (France). We will focus on social robotics, and more precisely we will show how design changed the research process in a robotics consortium, the Romeo project. Romeo is a humanoid robot that aims to be a companion for disabled and older people in hospitals and nursing homes. From field explorations, designers imagined alternative robot solutions, built prototypes and invented new ways to address users' needs and expectations: intention scenarios. We present how these design interventions bring embodiment into the early phases of SAR (socially assistive robotics) development, support the implementation of a comparative study on three robot representations, and open an in-situ discussion on human-robot interaction.

Biography: Ioana Ocnarescu (Ph.D. in Design Science) is a Design Researcher at Strate School of Design (Paris, France) in the Research and Innovation Department, where she coordinates the Robotics by Design Lab. Her main research fields are Social Robotics and SAR (Socially Assistive Robotics), Experience Design, Design Thinking, and Design & Innovation. Her PhD, entitled "Aesthetic Experience and Innovation Culture", shows how design creates not only an innovation culture but also memorable experiences for the researchers of a Research & Development department (at Bell Labs France). At Strate she teaches two master's-level classes: Experience Design, and Design & Robotics. She is a design research leader in the Romeo2 project, where she manages the interconnections between the medical team, users, designers and engineers.

She will be accompanied by Isabelle Cossin (Ph.D. in Art History on Animation Films), who is also a Researcher at Strate Research. She too works on social robotics, with a focus on robot movement and body representation.

Lotte 2017-10-26 10:30~11:30 in room 9-505
Dr. F. LOTTE (INRIA-Bordeaux, France)
Brain-Computer Interfaces technologies for the benefit of all: Neuroergonomics and Neuroeducation

Abstract: Brain-Computer Interfaces (BCIs) are systems that can translate the brain activity patterns of a user into messages or commands for an interactive application. Such brain activity is typically measured using Electroencephalography (EEG), before being processed and classified by the system. EEG-based BCIs have proven promising for a wide range of applications, from communication and control for motor-impaired users, to gaming targeted at the general public, real-time mental state monitoring and stroke rehabilitation, to name a few. One of the objectives of team Potioc (https://team.inria.fr/potioc/) is to design reliable BCI technologies that can be used outside the lab for practical applications. To do so we work on EEG signal processing and classification, on BCI user training, and on the exploration of various applications of BCI. In this talk I will notably present our work exploring BCI applications that can potentially benefit everyone, particularly in the areas of Neuroergonomics and Neuroeducation. Neuroergonomics consists of using brain signals to passively estimate relevant mental states of the user during human-computer interaction, in order to assess the ergonomic qualities of the interface. In particular, we showed that one can estimate mental workload during complex 3D manipulation and navigation tasks in order to assess or compare interaction techniques and devices. We have also been able to study stereoscopic displays by estimating visual comfort from EEG signals. Neuroeducation consists of using neurotechnologies and neuroscience knowledge in education. In this area we explored how BCIs can be used to teach people about EEG and the brain. To do so, we designed a number of devices based on augmented reality and/or tangible interfaces that enable novice users to visualize their own brain activity or mental states in real-time.
I will conclude by discussing promising future areas of research in which BCIs can be used to adapt the content of teaching material to the learners' ongoing mental state.
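To make the passive workload-estimation idea concrete, here is a minimal sketch (assumptions: a synthetic single-channel EEG trace, a plain FFT periodogram, and the theta/alpha band-power ratio as a workload proxy; the 256 Hz sampling rate and band limits are illustrative choices, not team Potioc's actual pipeline):

```python
import numpy as np

def band_power(signal, fs, band):
    """Power of `signal` within the frequency `band` (Hz), via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

def workload_index(eeg, fs):
    """Theta/alpha band-power ratio, a common proxy for mental workload."""
    theta = band_power(eeg, fs, (4.0, 8.0))   # theta band: 4-8 Hz
    alpha = band_power(eeg, fs, (8.0, 13.0))  # alpha band: 8-13 Hz
    return theta / alpha

# Synthetic 2-second trace: a strong 6 Hz (theta) plus a weak 10 Hz (alpha) component.
fs = 256.0
t = np.arange(0, 2, 1 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
print(round(workload_index(eeg, fs), 1))  # → 16.0 (4x theta amplitude → 16x power)
```

In a real system this would be computed per channel on windowed, artifact-cleaned EEG, and the features would feed a trained classifier rather than a fixed ratio.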

Biography: Fabien Lotte holds a PhD in computer science from the National Institute of Applied Sciences (INSA) Rennes, France (2009). His PhD received both the PhD Thesis award from the French Association for Pattern Recognition, and the 2nd PhD Thesis award from the French Association for Information Sciences and Technologies. In 2009 and 2010, he was a research fellow at the Institute for Infocomm Research (I2R) in Singapore, working in the Brain-Computer Interface Laboratory. Since January 2011, he has been a tenured Research Scientist at Inria Bordeaux Sud-Ouest, France, in team Potioc (http://team.inria.fr/potioc/). His research interests include Brain-Computer Interfaces (BCI), human-computer interaction and brain signal processing. In 2016, he was the recipient of a prestigious ERC Starting Grant for his research on BCI. He serves on the editorial boards of the journals Brain-Computer Interfaces and Journal of Neural Engineering.

Dana 2017-10-24 10:30~12:00 in room L1113 GIR seminar
Prof. D. KULIC (University of Waterloo, Canada)
Human motion measurement and analysis for rehabilitation

Abstract: Human motion measurement and analysis is a challenging problem, due to issues such as sensor and measurement system limitations, high dimensionality, and spatial and temporal variability. Accurate and timely motion measurement and analysis enables many applications, including imitation learning for robotics, new input and interaction mechanisms for interactive environments, and automated rehabilitation monitoring and assessment. In this talk we will describe recent work in the Adaptive Systems Laboratory at the University of Waterloo developing techniques for automated human motion measurement and analysis. We will overview techniques for motion measurement, segmentation, individualized model learning and analysis, with a focus on applications in rehabilitation, motor symptom diagnosis and sports training.

Biography: Dana KULIC (クリチ ダナ) received the combined B. A. Sc. and M. Eng. degree in electro-mechanical engineering, and the Ph. D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2002 to 2006, Dr. Kulic worked with Dr. Elizabeth Croft as a Ph. D. student and a post-doctoral researcher at the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during the interaction. From 2006 to 2009, Dr. Kulic was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. Dr. Kulic is currently an Associate Professor at the Electrical and Computer Engineering Department at the University of Waterloo, Canada. Her research interests include robot learning, humanoid robots, human-robot interaction and mechatronics.

AnnaK 2017-9-28 14:00~15:00 in room 9-452
A.K. Laboissiere (ENS, France & the Centre for Culture and Technology at Curtin University, Australia)
Lethally entangled: biodiversity repositories and toxic contexts

Abstract: My main focus in this presentation will be conservation practices based on the existence of biodiversity repositories, which can take different and often coexistent forms: gene banks, seeds banked at low temperatures, cryoconserved tissue and gametes from endangered species, herbaria and arboreta. These efforts race to bank biodiversity in a warming, toxic, or dying world, so that endangered species might be redeemed into a fantasised future of environmental remediation. I will explore both the troubled relationship to time, transmission and generativity this kind of banking project exemplifies, and how living repositories such as arboreta might point towards more dynamic and problematic human-nonhuman communities and entanglements in a world reworked by capitalist and colonialist powers and fluxes.

Biography: Anna-Katharina Laboissière obtained her BA and MA in philosophy from the Ecole Normale Supérieure and Université Paris Ouest, and is currently a PhD candidate in the philosophy department of the Ecole Normale Supérieure and at the Centre for Culture and Technology at Curtin University in Perth. She has worked as an assistant curator at the Fondation Cartier pour l'art contemporain in Paris and is an International Collaborator of the Leading Graduate Program in Primatology and Wildlife Science at Kyoto University.

Miyasaka 2017-7-21 14:00~15:00 in room 9-452
K. Miyasaka (Japan Educational Foundation & Keio Univ., Japan)
Can Robots Perform/Play Vogue Dance?  -- Anthropological Note on Meta-Communication and Robots

Abstract: Researchers in robotics and related fields, including Artificial Intelligence, have recently achieved quite remarkable results: news of robots that can imitate, and thereby preserve, a particular traditional Japanese dance is one example. Given the reality of decreasing numbers of human bearers of dance traditions, robots are expected to play a carrier role in preserving and transmitting traditional dance forms to future human generations. To average human eyes, robots may be perceived to perform such dances even with artistic nuance, going beyond mere mechanical imitation to a genuinely appealing artistic quality. However, when it comes to ritualized battle and dance, as opposed to real fighting, and their relevance to understanding and appreciating human dance, a great abyss appears between human dance and that of the robot. How can researchers cross this abyss? I will take up the topic of "vogue dance", popularized by Madonna (Madonna Louise Ciccone). This dance originated in actual battles between Houses of drag queens in New York, and was transformed into a ritualized battle dance with the keyword "shade". By introducing anthropological discussions of fighting and its ritualization and of rites of reconciliation, with reference to Gregory Bateson's theory of meta-communication, I will develop my discussion of "Can Robots Perform/Play Vogue Dance?" If one robot happens to slip when it unexpectedly steps on a peeled banana skin on the floor, another robot could be made to smile at that slipping scene. But is it possible for us to think that robots can acquire such meta-communicational abilities?

Biography: K. Miyasaka is a Principal Researcher at the Japan Educational Foundation, a Professor Emeritus of Keio University, and a Research Associate at the Global Center for Research on Logic and Sensibilities. He has been engaged in anthropological research on the relational intersections of cultures in comparative perspective, on themes such as the transcultural reformation of shamans and ethnic artists in globalizing processes [e.g. "Unusual Gestures in Japanese Folkloristic Ritual Trance and Performances." In M. Rector, I. Poggi, & M. Trigo, Eds., Gestures: Meaning and Use, 2003:293-300.], the re-articulation of traditional healing systems, fundamental issues and the future scope of visual anthropology [e.g. co-authored in Japanese, Eizo-Jinruigaku, 2014], the impact of cultural psychiatry on the anthropological reconsideration of culture theory, discourses on interculturalism and multicultural theatre, and anthropological perspectives on advanced technology and the ecology of information.

steve heim 2017-7-7 10:30~11:30 in room 9-452
S. Heim (Max Planck Institute for Intelligent Systems, Germany)
Designing Dynamics for Legged Robots: Considering Behavior, Mechanics and Control Together

Abstract: To bring robots out of the lab and into the real world, they need to deal with environments that are messy, complex and rapidly changing. While current robots still struggle greatly with energy efficiency, versatility and robustness, animals are efficient, proficiently accomplish a variety of tasks in different environments, and do all of this with grace. To achieve the same abilities in robots, we need to stop thinking about this problem in the conventional manner: instead of building software and control on top of a given mechanical design, it is important to consider the tight coupling between behavior, mechanical design and controller design. I will talk about past work highlighting the mechanical and controller design aspects of Cheetah-Cub-Tail, as well as introduce my current work on the coupling between morphology and learning controllers through reinforcement learning.

Biography: S. Heim completed his BSc and MSc at ETH Zurich, starting in Mechanical Engineering and then specializing in Robotics, Systems and Control. During this time he spent a semester at TU Delft, and finished his master's thesis at the BioRob laboratory at EPFL, working on how tails can be used in legged locomotion. Before joining the Dynamic Locomotion Group at the Max Planck Institute for Intelligent Systems, he spent a couple of years in Prof. Ishiguro's lab at Tohoku University in Japan.

Dumouchel 2017-6-23 14:00~16:30 in room 9-452
Prof. P. Dumouchel (Ritsumeikan University, Japan)
Weird objects that are not quite… Planewalkers and crossovers

Abstract: Weird objects that are not quite… real? …objects? Objects that do not quite exist? In this presentation, I want to enquire into the nature of strange objects which exist in more than one world or, perhaps better, on more than one plane of existence. Particularly clear examples of such objects are "items" found in video games: swords, magic potions or clothing that can be gained or bought within the game and that enhance a character's power or appearance. Interestingly, these also exist in the "real world", where there is a market on which such objects can be bought or sold using normal currency, and where they sometimes become occasions of violence or theft. Items are only one example of a growing class of weird objects that exist simultaneously in both the "real" and the "virtual" worlds. I want to inquire into this hybrid mode of existence and what it tells us about what is real.

Biography: P. Dumouchel is Canadian, Professor of Philosophy at the Graduate School of Core Ethics and Frontier Sciences, Ritsumeikan University, Kyoto, Japan. He is co-author, with Luisa Damiano, of Vivre avec les robots : essai sur l'empathie artificielle (Seuil, 2016), whose English translation, Living with Robots, is coming out from Harvard University Press (Fall 2017), and is the author of Emotions : essai sur le corps et le social (Paris: Les Empêcheurs de Penser en rond, 1999). He also co-edited Jean-Pierre Dupuy: L'auto-organisation de la physique au politique (Paris: Seuil, 1983). With Reiko Gotoh he is co-editor of Against Injustice: The New Economics of Amartya Sen (Cambridge, 2009) and of Social Bonds as Freedom: Revisiting the Dichotomy of the Universal and the Particular (Berghahn Books, 2015). Together with L. Damiano and H. Lehmann he recently published "Artificial Empathy: an Interdisciplinary Investigation" in International Journal of Social Robotics (2015) 7-1:3-5, and "Towards Human-Robots Affective Co-evolution" in International Journal of Social Robotics (2015) 7-1:7-18; "Should Empathic Social Robots have Interiority?" in Social Robotics (S. Sam Ge, O. Khatib, J-J Cabibihan, R. Simmons & M-A. Williams, Eds.) (Springer-Verlag: Berlin-Heidelberg, 2012), pp. 268-277; and, with L. Damiano, "Epigenetic embodiment" in (Cañamero L., Oudeyer P.-Y. & R. Balkenius, eds.) Epigenetic Robotics, Lund University Cognitive Studies (2009) 146: 41-48. Recently he published in French "La vie des robots et la nôtre" in Multitudes (2015) 58: 42-54.

Azuma 2017-5-24 15:00~16:30 in room 9-452
S. Azuma (Unagi travel, Japan)
Connecting the world through stuffed animals

Abstract: Unagi Travel is a travel agency for stuffed animals. The customers, stuffed animals, come from all regions to Unagi Travel's Tokyo office for a vacation. Travel photos and videos are shared live on Unagi Travel's Facebook and Twitter so that the owners can enjoy the travels together. Each customer has a different reason for being on a trip, and many are repeat customers. I will introduce the agency's charms and the impact the stuffed animals have on the humans around them.

Biography: S. Azuma received a BA from Sophia Univ., Japan and an MSc from Waseda Univ., Japan. She pursued her career at Sanwa Bank (currently: The Bank of Tokyo-Mitsubishi UFJ) and Deutsche Bank. In 2010, she launched Unagi Travel, a travel agency for stuffed animals. Starting in Tokyo, she expanded her business to other regions. She also collaborates with other businesses and local governments to offer a variety of tour services, such as internships and sports activities. Her vision is to provide adventures to everyone around the globe (kids, students, moms, dads, entrepreneurs, teachers, artists, the physically challenged, etc.) and inspire them to take positive actions.

Bianchini 2017-3-16 14:00~15:00 in room 9-452
Prof. S. Bianchini (Ecole nationale superieure des arts decoratifs, France)
Animated Objects: Robotics and the Arts

Abstract: Alongside the continued success and growth of industrial and service robotics, leisure and personal robotics (robot assistants, family robots) with social, cultural, and sometimes artistic dimensions are now undergoing increasingly intense development. Unlike the industrial and service approaches, this type of robotics does not necessarily have utilitarian aims. On the contrary, the more it targets the personal sphere or the world of culture and art, the more its aesthetic or emotional dimensions take precedence over functionality. How can we focus on the human-robot relationship at the level of aesthetic experience, based primarily on emotions and empathy? Breaking with any figurative approach, be it anthropomorphic (and therefore humanoid), zoomorphic, or more broadly biomorphic, how can we use movement as our material to establish a human-robot relationship, or more broadly a human-robotic-object relationship, that is fundamentally emotional? How can we attribute a personality to these objects if they do not possess any distinctive traits from the world of living beings? To stimulate a "human-object" relationship based primarily on emotions and empathy rather than on utilitarian ends, and to qualify and enrich the sensitive human-robotic-artifact relationship, it seems important and even necessary to allow art and robotics to discuss and work together.

Biography: Samuel Bianchini is an artist and associate professor at the École Nationale Supérieure des Arts Décoratifs—Paris (EnsAD) / PSL Research University Paris. Supporting the principle of an “operational aesthetic,” he works on the relationship between the most forward-looking technological apparatus, new forms of aesthetic experiences, and sociopolitical organizations, often in collaboration with scientists and research laboratories. His works are regularly shown in Europe and across the world: Nuit Blanche Toronto 2016, Waterfall Gallery (New York), Medialab Prado (Madrid), Palais de Tokyo (Paris), Art Basel…

bipin 2017-1-6 14:30~15:30 in room 9-452
Prof. B. Indurkhya (Jagiellonian University, Poland)
Eliza effect and a therapeutic role for robots

This seminar is part of the Japanese council of IFToMM Robotics Seminars

Abstract: In the mid-1960s, Joseph Weizenbaum wrote a program called Eliza that simulated a therapist carrying out a conversation with the user. The program did not understand anything, but relied on keyword matches and a few simple heuristics to keep the conversation flowing. However, Weizenbaum was taken aback by the intensity of the emotional attachment users felt towards the program, prompting him to highlight this negative aspect of technology in his thought-provoking book "Computer Power and Human Reason". In recent years, however, there has been a revival of Eliza-like systems and interfaces in cognitive robotics. We will look at some such systems and argue that they often have a positive and therapeutic effect on the user, and that in some situations at least, this kind of robot-human interaction transcends human-human interaction.

Biography: Bipin Indurkhya is a professor of Cognitive Science at the Jagiellonian University, Cracow, Poland. His main research interests are social robotics, usability engineering, affective computing and creativity. He received his Master’s degree in Electronics Engineering from the Philips International Institute, Eindhoven (The Netherlands) in 1981, and PhD in Computer Science from University of Massachusetts at Amherst in 1985. He has taught at various universities in the US, Japan, India, Germany and Poland; and has led national and international research projects with collaborations from companies like Xerox and Samsung.

Sarah 2016-11-18 14:30~15:30 in room 9-452
Dr. S. Cosentino (Waseda University, Japan)
Advanced human robot social interaction: goals, challenges and approaches

Abstract: Human-robot interaction (HRI) is not confined to entertainment applications. In fact, in any situation in which a robot works in contact with humans, interaction and mutual understanding play a crucial role. We therefore argue that the robot must be capable of learning how to interact with humans in a natural way, and be able to understand humans' emotional states as well as basic non-verbal communication signals. This, for example, will allow naïve users to interact with the robot immediately, under any circumstances, without extensive training. Above all, it will give the robot the means to calibrate its response while interacting with humans during a specific task. HRI challenges can thus be seen as a combination of active perception and dynamic cognitive processing. How can we solve the challenges that this field presents?

Biography: Dr. Cosentino is a hands-on engineer and enthusiastic academic researcher. She began working as a freelance collaborator at an electronics company during high school, continuing until she earned her M.Sc. in Electronic Engineering. Straight after graduation she moved to Japan for a prestigious one-year industrial internship, which the company extended into a full employee contract for another two years. After three years in Japan working on challenging electronics R&D projects, she decided it was time for a change, applied for and won a scholarship, and enrolled in a Ph.D. course at Waseda University. During her studies she collaborated with several other researchers across the globe, spending months at leading universities in the U.S. and Europe. Her main interests are human physiology, human sensing, human communication, affective computing and human-machine interaction. She has hands-on experience in electronic design and assembly, and wide research experience in developing sensor systems for applications mostly related to human sensing and human-robot interaction, authoring several publications on this work.

Dana

2016-10-14 14:00~15:00 in room 9-452
Prof. D. Kulic (University of Waterloo, Canada)
Accurate Measurement and Modeling of Human Motion for Rehabilitation and Sports Training

In this talk, we will describe our work developing systems for on-line measurement and analysis of human movement that can be used to provide feedback to patients and clinicians/coaches during the performance of rehabilitation and sports training exercises. The system consists of wearable inertial measurement unit (IMU) sensors attached to the patient’s limbs. The IMU data is processed to estimate joint positions. We will describe an approach to improve the accuracy of pose estimation via on-line learning of the dynamic model of the movement, customized to each patient. Next, the pose data is segmented into exercise segments, identifying the start and end of each motion repetition automatically. The pose and segmentation data is visualized in a user interface, allowing the patient to simultaneously view their own movement overlaid with an animation of the ideal movement. We will present results of user studies analyzing the system capabilities for gait measurement of stroke patients undergoing gait rehabilitation, and demonstrating the significant benefits of feedback with patients undergoing rehabilitation following hip and knee replacement surgery. We will also highlight recent results applying deep learning techniques to understand and classify large-scale sports training data.
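The automatic repetition-segmentation step can be illustrated with a toy sketch (assumptions: a clean one-dimensional joint-angle signal and a simple threshold-crossing rule; the actual system performs far more robust segmentation on noisy, multi-joint IMU data):

```python
import numpy as np

def segment_repetitions(angle, threshold):
    """Return (start, end) index pairs (end exclusive) for each repetition,
    detected as the spans where the joint angle rises above `threshold`
    and then falls back below it."""
    above = angle > threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))  # rising/falling transitions
    if above[0]:
        edges = edges[1:]              # drop a leading falling edge
    n = len(edges) // 2 * 2            # ignore an unpaired trailing rising edge
    return [(edges[i] + 1, edges[i + 1] + 1) for i in range(0, n, 2)]

# Synthetic knee-angle trace: three flexion-extension repetitions over 3 seconds.
t = np.linspace(0, 3, 300)
angle = 45 * np.maximum(0, np.sin(2 * np.pi * t))  # three positive humps, peak 45 deg
reps = segment_repetitions(angle, threshold=10.0)
print(len(reps))  # → 3
```

Real exercise data would first be low-pass filtered, and segmentation would typically rely on learned motion templates rather than a fixed angle threshold.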

Dana KULIC (クリチ ダナ) received the combined B. A. Sc. and M. Eng. degree in electro-mechanical engineering, and the Ph. D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2002 to 2006, Dr. Kulic worked with Dr. Elizabeth Croft as a Ph. D. student and a post-doctoral researcher at the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during the interaction. From 2006 to 2009, Dr. Kulic was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. Dr. Kulic is currently an Associate Professor at the Electrical and Computer Engineering Department at the University of Waterloo, Canada. Her research interests include robot learning, humanoid robots, human-robot interaction and mechatronics.


L.Rincon

2016-7-1 10:30~12:00 in room 9-452
Dr. L. Rincon
Modeling, identification and control for CNC machine tool as robotic system for high speed and precision control with MEMS for micro/nano applications

CNC (Computer Numerical Control) machine tools are complex mechatronic systems used in high-precision macro- and micro-manufacturing. This robotic system is characterized by the behavior of its kinematics and dynamics in relation to the electromechanical and control systems. The high performance of these machines is determined by the precision, efficiency, and speed of each subsystem. Disturbances such as vibrations, temperature changes, and internal friction, which occur during the machining process, must be reduced to achieve operational efficiency. For this purpose, the control architecture is designed using advanced control strategies based on nonlinear modeling. Controller designs for these machines in macro applications have provided a foundation with structures that can potentially be applied in many contexts. In micro/nano applications and bio-environments, however, the requirements at these scales pose additional challenges for control strategies and structures. How can we address them? This talk will present a proposal for dynamic modeling, identification, and control to increase the performance of a CNC machine tool, applying predictive and robust control strategies for macro tasks, together with the changes and vision for CNC machine tools in micro-milling processes. Finally, control systems for bionanotechnological platforms with MEMS (Micro Electro Mechanical Systems), for producing micro/nano particles and for micromanipulation, will be shown, applying microfluidic devices, a micro-robotic station, and optimized microgrippers.

Liz Katherine Rincon A. is a researcher in control, robotics, and automation systems. She worked as a postdoctoral researcher on biotechnology and nanotechnology projects at the Institute for Technological Research of São Paulo, Brazil, at the Center of BioNanoManufacturing (2013-2015). These projects developed automatic control systems to produce nano/micro particles for medical applications, and the fabrication and control of microactuators for micromanipulation. In 2014-2015, she participated as a researcher in the BRAGECRIM project, the Brazilian-German Collaborative Research Initiative on Manufacturing Technology, in association with the Technische Universität Berlin, Germany, on micro-milling process optimization. In 2013, she obtained her Ph.D. in Mechanical Engineering, carried out at the University of Campinas (UNICAMP), Brazil, and L'Ecole Supérieure d'électricité (SUPELEC), France, researching the dynamic behaviour of CNC machine tools with emphasis on control architecture and optimal control strategies. In 2008, she received her M.S. in Electronic and Computer Engineering from the University of the Andes, Colombia, with research on control education (theory, practice, and learning evaluation), and she received her bachelor's degree in Electronic Engineering in 2003. She has experience in teaching and in engineering projects on automation and control for industry. Her research areas are advanced control systems, robotics, mechatronics, artificial intelligence, modeling/simulation, control of Micro Electro Mechanical Systems (MEMS), identification and parameter estimation, and automation systems.


Naoko Abe

2016-4-27 11:00~12:00 in room 9-452
Dr. N. Abe (EHESS, Paris, France)
Kinetography Laban and its application to Robotics

Abstract: Kinetography Laban is one of the movement notation systems used worldwide. A movement notation is a system for recording and analyzing human movements using specific symbols, devised in the dance field. Since the 15th century, approximately 90 notation systems have been developed in Europe and North America. The primary purposes of movement notation are information storage, the teaching of technical dance exercises, and the creation and reconstruction of choreography. In this seminar, a historical overview of movement notation systems will be presented. The usefulness of these notation systems is not limited to dance; they are also used in various research fields concerned with movement. The seminar presents work carried out in the robotics team at LAAS-CNRS over a year and a half, which aimed to design motion generation programming for the humanoid robot ROMEO using Kinetography Laban. The seminar provides an analysis of the usefulness and the limits of Kinetography Laban for motion programming.

Biography: Naoko Abe received her Bachelor's and Master's degrees in Sociology from Paris Descartes University. She obtained a PhD in Sociology from EHESS (School for Advanced Studies in Social Sciences) in Paris in 2012. Her PhD research was carried out in collaboration with RATP (the Parisian Public Transportation Authority) from 2008 to 2012. She obtained an advanced teaching and notation certificate in Kinetography Laban in 2011 from CNSMDP (Paris Conservatory). She was a Postdoctoral Fellow at LAAS-CNRS (Laboratory for Analysis and Architecture of Systems, French National Centre for Scientific Research) in Toulouse, France, from April 2014 to September 2015. Since October 2015, she has been a visiting scholar at the EHESS France-Japan Foundation.


2016-4-15 11:00-12:00 9-505
Prof. D. Lestel (Department of Philosophy, Ecole normale supérieure, Paris, France)
Is It Alive? The Challenge of Autonomous Artifacts
This seminar is sponsored by the "International research group of medical robotics for the development of personal medical machines".

Abstract: The impressive development of autonomous artifacts since the end of the twentieth century raises numerous basic problems - technical problems, of course, but also psychological, sociological, legal, and philosophical ones. Among them, a recurring question is whether some of these artifacts could be seen as being alive. In this talk, I shall try to show not only that one could say these artifacts are alive, but that they also force us to conceive in a different way the very notion of what it means to be alive. I shall therefore propose an original approach not to "life" but to the "living agent" - a situated, relational, and constructivist approach to what it means to be a living agent, based upon a reinterpretation of the Turing Test.

Biography: Dominique Lestel is an Associate Professor in the Department of Philosophy of the Ecole normale supérieure of Paris (ENS), where he teaches contemporary philosophy and works mainly on the philosophy of human/non-human shared life. He was a research engineer in the Bull Artificial Intelligence Lab (1984-1986) and received a Ph.D. from the EHESS in 1986. He introduced cognitive sciences at ENS in 1994, with logician Giuseppe Longo and physicist Jean-Pierre Nadal, and was a founding member of the Department of Cognitive Sciences of ENS until 2012. He has held research positions at the University of California, MIT, Boston University, Université de Montréal, and Macquarie University, and has been a Visiting Professor at the School of the Art Institute of Chicago, Tokyo University of Foreign Studies, and Keio University. In 2013-2014, he was a visiting scientist at the University of Tokyo, in the Japanese-French Laboratory of Informatics, with a grant from the French National Center for Scientific Research (CNRS) to work on the philosophy of existential robotics. He has published many books, including "Eat This Book: A Carnivore's Manifesto" (Columbia University Press, 2016). In 2014, the Oxford journal "Angelaki: Journal of the Theoretical Humanities" published a special issue on his work.

2016-3-15 10:30-12:00 9-505
Prof. T. Asfour (Karlsruhe Institute of Technology, Germany)
On Grasping and Balancing in Humanoid Robotics

This seminar is sponsored by the Global Innovation Research Initiative.

Abstract: Exploiting interaction with the environment is a powerful way to enhance robots' capabilities and robustness while executing tasks in the real world. The first part of the talk will give an overview of our current work in the areas of humanoid grasping and learning from human observation. The second part will discuss the dualities between grasping and balancing in humanoid robotics. Exploiting the fact that the structure of hands holding objects is very similar to whole-body balancing with multiple contacts, we show how principles applied to grasping can be transferred to balancing in humanoid robotics. We present a taxonomy of whole-body poses - whole-body grasps - and its validation based on human motion capture data. Further, we show how co-joint object-action representations used for object grasping can be extended to associate whole-body actions with affordances of objects and environmental elements in the scene. We demonstrate how such affordance hypotheses are generated through visual exploration and verified using haptic feedback.

Biography: Tamim Asfour is a full Professor at the Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), where he holds the chair of Humanoid Robotics Systems and heads the High Performance Humanoid Technologies Lab (H2T). His current research interest is high-performance 24/7 humanoid robotics. Specifically, his research focuses on engineering humanoid robot systems able to predict, act, and learn from human demonstration and sensorimotor experience. He is the developer of the ARMAR humanoid robot family and leader of the Humanoid Research Group at KIT (2003-present). He is Founding Editor-in-Chief of the IEEE-RAS Humanoids Conference Editorial Board, was co-chair of the IEEE RAS Technical Committee on Humanoid Robots (2010-2014), is an Editor of the Robotics and Automation Letters, and was an Associate Editor of the Transactions on Robotics (2010-2014). He is president of the Executive Board of the German Association of Robotics (Deutsche Gesellschaft für Robotik, DGR) and was a member of the Board of Directors of euRobotics (2013-2015).

2015-11-26 11:00-12:30
Dr. A.-L. Jouen (University of Tokyo)
From fundamental neuroscience to social robotics: how can neuroscience help to "cross the Uncanny Valley"?

Abstract: My initial work was dedicated to studying the functionality and anatomy of the cerebral network involved in semantic understanding. The theory of embodied cognition postulates that semantic processing corresponds to a cerebral reactivation of previous sensorimotor experiences, and in the literature a wide network of areas is described as being involved in the meaning process. But there is still a lack of consensus on the amodal nature of this network, and its connectivity remains poorly known. Through different experimental protocols involving mainly neuroimaging techniques (fMRI, DTI, EEG), we were able to reveal the neurophysiological basis of this common semantic network, defining a fronto-temporo-parietal "meaning" network involved in the understanding of both rich sentences and scenes. DTI tractography revealed a specific architecture of white matter fibers supporting this network. The network substantially overlaps with the "default mode" network implicated as part of a core network of semantic representation, along with brain systems related to the formation of mental models and reasoning. These data argue that the embodiment of understanding extends well beyond the sensorimotor system to the highest levels of cognition. Indeed, this fronto-temporo-parietal network is involved not only in semantic processing but also in the social cognition functions (imitation, empathy, theory of mind, interaction) which make us uniquely human. This fronto-temporo-parietal system would have been shaped by evolution to give us the capability to fully understand and interact with others. These results helped me define my direction as I became a postdoctoral researcher: how can we use this neuroscientific knowledge to create more interactive robots, able to fully socially interact with us by understanding our language, actions, and feelings?
Indeed, as robots become more and more present in our societies, they will have to interact naturally with us in everyday activities. However, there is a lack of understanding of how robots, and our interactions with them, are perceived and treated by the human brain. This can of course be due to the remaining technical limitations of robots, which mean that they are treated more like machines, but it is also due to our limited understanding of what happens in the human brain when interacting not only with robots but also with other human beings. Since then, I have adopted a research approach combining experiments on social interactions both between humans and between humans and robots. During my first post-doctorate at the Institute for Intelligent Systems and Robotics (ISIR, Paris), I studied the brain markers involved in social interactions (imitation, joint attention, and synchrony) using EEG methods and with various populations (adults, typically developing children, and autistic children). In Japan, I joined Professor Hiraki's laboratory, which specializes in understanding child development and mother-child interactions. Using psychological and neuroscientific methods, my experiments in this lab will focus on the evaluation of robots as pedagogical tools in the context of language learning, and also on empathy towards robots. In this presentation, illustrated by my professional background, I will show how neuroscientific knowledge is fundamental to better understanding our interactions with robots: this knowledge represents one step further in turning robots into better social partners for humans and, somehow, "crossing the uncanny valley".

Biography: Dr. Anne-Lise Jouen is a post-doctoral researcher at the University of Tokyo, Graduate School of Arts and Sciences. After studying psychology, she specialized in cognitive science and neuropsychology, obtaining two M.Sc. degrees in 2009. Dr. Jouen received her Ph.D. in Neuroscience from the University of Lyon in 2013. Her thesis work, carried out under the direction of Peter F. Dominey and Jocelyne Ventre-Dominey, was entitled "Beyond words and images, neurophysiological basis of a common semantic system for sentences and visual scenes understanding". Between 2013 and 2015, she worked on social signal processing and neurophysiological markers of interactions between humans and robots at the Institute for Intelligent Systems and Robotics (ISIR, Paris), in the team of Professor Mohamed Chetouani. Funded by a JSPS fellowship for two years, she joined the laboratory of Professor Kazuo Hiraki in 2015 to work on the neuroscientific evaluation of robots as pedagogical tools. Although her professional training is mainly in the neuroscience field, she has always been passionate about robotics. She has chosen to work in teams combining robotics and neuroscience, and she has had the opportunity to participate several times in the RoboCup@home league competition. Her final aspiration is to do applied research on new technologies for the elderly or for people with cognitive deficits.

2015-10-28 11:00-12:00
Prof. B. Bardy (Euromov, Montpellier, France)
Creating new human-artificial agent interactions to enhance social competence in patients suffering from social deficits

This seminar is sponsored by the "International research group of medical robotics for the development of personal medical machines".

Abstract: Schizophrenia, autism, or social phobia are typically accompanied by social interaction deficits. The objective of the AlterEgo European project is the creation of a flexible interactive cognitive architecture, implementable in various artificial agents, allowing a continuous interaction with patients suffering from social disorders by virtue of changes in behavioral (robot-based) as well as morphological (avatar-based) properties of that agent. In this presentation, I will present the scientific foundations of the AlterEgo project and its main predictions, grounded in the Similarity concept, originating from Neurosciences, Robotics, and Social Sciences. I will present the main results of the project, which show that patients functionally adapt their social motor interaction when they interact with agents (real or artificial) morphologically and behaviorally similar to them, as a route toward more natural interactions. These results have consequences for the implementation of digital cognitive architectures in the clinical context, and for the rehabilitation of socially deficient patients.

Biography: Professor Benoît Bardy is a full professor at the University of Montpellier and a senior member of the Institut Universitaire de France. He is the founder and director of EuroMov (www.euromov.eu), the European center for research, technology, and innovation in the art and science of movement, located in the Languedoc-Roussillon region. Prof. Bardy's research is concerned with dynamical approaches to problems of coordination and control of movements. Two major aims of his work are to investigate how the numerous muscles, joints, and segments that constitute the human body are coordinated so as to promote functional actions, and to identify the role of movement-based information in the control of movement, with a specific interest in real-world actions. Prof. Bardy is the author of 200+ scientific articles and the coordinator, for the European Commission, of the BeatHealth (www.euromov.eu/beathealth) and AlterEgo (www.euromov.eu/alterego) European projects.

2015-10-13 15:00-16:00
J. Bitton (Harvard University, Graduate School of Design, Cambridge, USA)
Living in the Material World

Abstract: My research aims to correlate emerging trends in personal fabrication with the possibilities of data tracking and collection. I propose to use human-based data as parameters for machine control in a fabrication process. This is done in the context of interactive art and design experimentation, which typically offers outlets for technological innovations, making them available and accessible to the public as they become mainstream. Overall, this presentation will address the processes that are evolving with tools and material innovations, and the creative uses that absorb, subvert, and reinvent these technologies in our daily life.

Biography: Joëlle Bitton is an artist and a human-computer interaction researcher. In 2000, she co-founded an experimental art and design collective in Vienna, "Superficiel", in support of works that explore the ideas of surface, screen, and body movement as interfaces. She is currently enrolled as a Doctor of Design candidate at the Harvard Graduate School of Design. Her thesis addresses interactive processes in digital fabrication with the implication of personal data. Previously, she researched the creative uses of technology at Culture Lab, Newcastle University. At MIT Media Lab Europe, in the 'Human Connectedness' group, she explored the richness of everyday life and intimacy at a distance with the projects "RAW" and "Passages". Her work has been featured at ISEA, CHI, EXIT, Centre Pompidou, and Gallery éf, among others. http://joelle.superficiel.org/

2015-8-28 11:00-12:00
Dr. A. Paolillo (LIRMM, Montpellier, France, Researcher)
Vision-based control of humanoid robots interacting with the real world

Abstract: Humanoids are high-performance robotic platforms, suited to achieving autonomous and human-like behaviors by virtue of their dexterity and versatility. Thanks to these qualities, humanoids are intended to help humans and, in order to perform tasks of everyday life, they must be able to localize themselves, safely navigate the surrounding environment, and operate humans' tools and machinery. In this talk, we will discuss the use of vision to improve the performance of humanoids in these high-level tasks and to reach a higher level of autonomy. In particular, we will see how visual perception, fused with other sensory information, can improve the localization of humanoids. It will also be shown that, if the robot motion cannot be explicitly estimated, a humanoid can safely navigate by using environmental visual features, such as the guidelines of a corridor. The same navigation paradigm has been used to make a humanoid drive a car, a representative example of humanoids operating human-tailored devices. Experiments carried out with real humanoid robots will show the effectiveness of the approaches and the importance of visual perception in humanoid research.

Biography: Antonio Paolillo is a post-doc researcher at CNRS-UM2 LIRMM, Montpellier, France. Currently he is spending a visiting period at CNRS-AIST JRL, Tsukuba, Japan. He received his Ph.D. in System Engineering in 2015 and his Master (Laurea) degree in Electronic Engineering in 2011 both from Sapienza University of Rome, Italy. From January to July 2014 he was a visiting scholar at LIRMM. In 2010 he was a visiting student at the AASS lab, Örebro University, Sweden. His research interest focuses on the control of humanoid robots.

2015-7-10 13:00-14:00
Dr. L. Aymerich-Franch (CNRS-AIST JRL, Researcher)
Mediated embodiment: studying embodiment in avatars and humanoids to understand human behavior and the self

Abstract: The talk will address the topic of mediated embodiment in virtual reality and humanoid robots and its consequences for transforming people's attitudes and behaviors. Mediated embodiment is the technologically induced illusion of adopting an artificial body in which one perceives oneself to be located. In the current state of the art, embodiment in artificial and digital bodies is mostly studied in the form of avatars in virtual reality. In her current project, Dr. Aymerich-Franch applies theories and approaches from this field to study similar patterns in humanoid robots. In connection with this, the presenter will also describe the possibilities of using virtual reality and robots as methodological tools to explore social and psychological phenomena. She will present the most interesting findings from the different labs where she has worked, and will use her studies as example cases of how communication technologies can serve researchers as innovative methods to explore phenomena that were not possible to study before the advancement of these technologies.

Biography: Dr. Laura Aymerich-Franch is a Marie Curie Postdoctoral Fellow at the CNRS-AIST Joint Robotics Laboratory at the National Institute of Advanced Industrial Science and Technology, in Tsukuba, Japan, where she studies mediated embodiment in humanoids. Before joining JRL, she was a Fulbright postdoctoral scholar at the Virtual Human Interaction Lab, Stanford University, CA (2012-2014), where she worked with virtual reality. She has a background in Communication and Media Psychology.

2015-6-19 11:00-12:00
Dr. R. Diankov (Mujin technologies, CTO)
Mujin technologies

Abstract: Industrial manufacturing automation is in some sense the ultimate robotics problem: the challenge of creating a completely autonomous orchestra of robotics and automation systems to complete a specific task as fast, as robustly, as safely, and as cheaply as possible. Mujin is one company out of thousands in the world tackling this challenge, and in the process it plans to deploy hundreds of thousands of systems to every manufacturing plant and distribution center in the world. In this talk, I will discuss these challenges and the robotics technologies that Mujin employs to deliver robotics systems, including state-of-the-art motion planning algorithms, accurate object pose recognition using advanced computer vision techniques, and the overall robotics architecture involved.

Biography: Rosen's one and only passion is to increase worldwide quality of life by deploying as many automation systems as possible into the real world. After getting a PhD from the Robotics Institute at CMU, his dreams led to co-founding Mujin, a robotics startup employing advanced AI technologies, in the world's most hardcore and intense manufacturing environment: Japan. He is a true believer of open-source and has founded and developed the widely used robot motion planning system OpenRAVE, which comprises the core of the Mujin technology stack and is used by many robotics labs around the world.

2015.6.10 10:30-12:00
Prof. R. Dumas (IFSTTAR-University of Lyon, France)
Multi-body optimisation: from kinematic constraints to knee contact and ligament forces

Abstract: This presentation gives a quick overview on the theoretical and numerical aspects of the development of a multi-body dynamic model of the lower limb.
The model includes foot, shank, patella, thigh, and pelvis segments. Anatomical kinematic constraints are introduced in order to model the ankle, tibio-femoral and patello-femoral joints with “parallel mechanisms” (e.g., sphere-on-plane contacts, isometric ligaments). A muscular geometry is also introduced. A first constrained optimisation (i.e., inverse kinematics) is performed to compute the kinematics of the model driven by skin markers.
A second constrained optimisation (i.e., inverse dynamics - static optimisation) is then performed to compute the corresponding dynamics, including the musculo-tendon forces, contact forces, and ligament forces.
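The first of the two optimisations, inverse kinematics driven by skin markers, can be sketched numerically. The following is a deliberately minimal stand-in for the lower-limb model above (a single planar two-segment chain with hypothetical segment lengths and no anatomical joint constraints): the joint angles are found by iteratively reducing the least-squares error between the chain's endpoint and a measured marker position.

```python
import numpy as np

L1, L2 = 0.4, 0.4  # segment lengths in metres (hypothetical values)

def endpoint(q):
    """Forward kinematics: endpoint position for joint angles q = (q1, q2)."""
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def inverse_kinematics(marker, q0=(0.0, 0.0), iters=200, step=0.5):
    """Damped Gauss-Newton descent on the marker residual."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        r = marker - endpoint(q)           # marker residual to be minimised
        # numerical Jacobian of the endpoint w.r.t. the joint angles
        J = np.column_stack([(endpoint(q + h) - endpoint(q - h)) / 2e-6
                             for h in 1e-6 * np.eye(2)])
        q += step * np.linalg.pinv(J) @ r  # damped least-squares update
    return q

# fit the chain to a (hypothetical) marker measured at (0.5, 0.3)
marker = np.array([0.5, 0.3])
q = inverse_kinematics(marker)
```

In the actual model the optimisation is performed over all segments at once, subject to the anatomical "parallel mechanism" constraints of the ankle, tibio-femoral, and patello-femoral joints, which this sketch omits.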

Biography: Raphael DUMAS holds an Engineering degree and an M.Sc. in Mechanics (INSA de Lyon, 1998) and a Ph.D. in Biomechanics (ENSAM de Paris, 2002). He is currently a Senior Researcher at IFSTTAR - Université de Lyon and a member of the Laboratoire de Biomécanique et Mécanique des Chocs.
His research interest is three-dimensional multi-body modelling of the human musculoskeletal system applied to joint pathologies and postural and gait impairments. He is a member of the boards of the Francophone Society of Biomechanics and the Francophone Society for Movement Analysis in Child and Adult, and a regular reviewer for journals in the field of biomechanics. He has authored or co-authored 59 archival journal full papers, 4 book chapters, and 2 patents.

2015.6.10 10:30-12:45
Dr. T. Robert (IFSTTAR-University of Lyon, France)
Possible recovery or unavoidable fall? Numerical tools to predict the outcome of a balance perturbation for standing humans

Abstract: Falls resulting from loss of balance are a major source of injuries worldwide, especially among the elderly. These injuries place a considerable burden on public health-care budgets in many western countries, one that increases every year due to the ageing population. To address this problem, it is important to understand the phenomenon of balance recovery and to be able to identify potentially hazardous situations. An important step in this regard is being able to predict the outcome of a balance perturbation and the protective actions humans take to avoid a fall.
This talk will present the tools we recently developed to predict if a balance perturbation will inevitably result in a fall or can be recovered. These models are inspired by recent developments in the field of humanoid robots, transferred and adapted to humans.

Biography: Thomas Robert is a research associate at IFSTTAR-Université de Lyon. An engineer in mechanics, he received his MSc in mechanics in 2002 and completed his PhD in biomechanics in 2006. After a postdoctoral fellowship at the University of Pennsylvania, he joined the Biomechanics and Impact Mechanics Laboratory in Lyon. His research focuses on the analysis and simulation of human movement, with a particular emphasis on standing equilibrium in perturbed environments.

2015.4.15 10:30-12:00
Prof. Dana Kulic (University of Waterloo, Canada)
Motion Measurement and Analysis for Real-time Feedback during Rehabilitation

Abstract: In this talk, we will describe an automated rehabilitation system for on-line measurement and analysis of human movement that can be used to provide real-time feedback to patients during the performance of rehabilitation exercises.  The system consists of wearable IMU sensors attached to the patient’s limbs.  The IMU data is processed to estimate joint positions.  We will describe an approach to improve the accuracy of pose estimation via on-line learning of the dynamic model of the movement, customized to each patient.  Next, the pose data is segmented into exercise segments, using a novel approach based on two class segmentation.  The pose and segmentation data is visualized in a user interface, allowing the patient to simultaneously view their own movement overlaid with an animation of the ideal movement.  We will present results demonstrating the significant benefits of feedback based on a user study with patients undergoing rehabilitation following hip and knee replacement surgery.

Biography: Dana KULIC (クリチ ダナ) received the combined B. A. Sc. and M. Eng. degree in electro-mechanical engineering, and the Ph. D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2002 to 2006, Dr. Kulic worked with Dr. Elizabeth Croft as a Ph. D. student and a post-doctoral researcher at the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during the interaction. From 2006 to 2009, Dr. Kulic was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. Dr. Kulic is currently an Associate Professor at the Electrical and Computer Engineering Department at the University of Waterloo, Canada. Her research interests include robot learning, humanoid robots, human-robot interaction and mechatronics.

2015.3.18 15:00-16:00
Prof. Celine Mougenot (Tokyo Institute of Technology)
Designing technological products with a user-centered approach

Abstract: Designers of technological products should not only care about the technology itself but also about the needs and expectations of people who will use these products. A user-centered design approach is necessary for designing, developing and making things that really fulfill human expectations, and even dreams. In this seminar, I will give a glimpse of user-centered design approach and its research implications, including research in "affective engineering". The talk will include examples of user-centered devices designed by our lab at Tokyo Tech.

Biography: Celine Mougenot (ムージュノ セリーヌ) is an Associate Professor at Tokyo Institute of Technology. She worked at Dassault Systèmes as a Computer-Aided Design engineer and later received a Ph.D. in Design Engineering from Arts et Metiers ParisTech. She came to Japan in 2009 and joined Tokyo Tech in 2011. Her research interests include user-centered design, design creativity, and emotions in design.

2015.1.21 10:00-11:00
Dr. Yoichi Asano (JARI, Japan)
Current status and issues of welfare robots

Abstract: The robot industry, spanning fields such as medicine, care, disaster prevention and infrastructure, is positioned as an important part of Japan's growth strategy. On the other hand, the popularization of such robots still faces many issues in technology, safety, and research into user needs. In this seminar, I will introduce the current status and issues of welfare robots as an example.

Biography: Yoichi Asano (浅野陽一) is a researcher at the Japan Automobile Research Institute (JARI). He received a Ph.D. in innovative technology and science from Kanazawa University in 2007, and an M.Eng. in mechanical and system engineering from Tokyo University of Agriculture and Technology in 2001. His research interests include human factors, vehicle dynamics and the safety of robots.

2014.12.9 17:30-18:30
Prof. Massimiliano Zecca (Loughborough University, UK)
Observation, analysis and clarification of human performance in high dexterity tasks

Abstract: Our society is becoming older and older, with more than 16% of the world population expected to be over 65 years old by 2050. This percentage is dramatically higher in the more developed countries (it is already over 24% in Japan and 22% in Italy). This ageing society needs, and will increasingly need, support, both physical and mental. One possibility for providing this support comes from robots in general and humanoid robots in particular. These robots need to integrate smoothly into the human environment, and they need to communicate with people in a natural and simple way.
On the other hand, there is a growing need for quantitative measurement and assessment in medical applications, ranging from the training of young doctors to the rehabilitation of elderly people.
Both application fields, while apparently unrelated, share the same needs: the observation, analysis and clarification of human performance in high dexterity tasks.
In today's talk we will see several examples of my past and ongoing research activities, and we will highlight the challenges we are going to face in the near future.

Biography: Massimiliano Zecca is full professor of Healthcare Technology in the School of Electronic, Electrical and Systems Engineering of Loughborough University, United Kingdom. He is also a key member of the National Centre for Sports and Exercise Medicine – East Midlands, Loughborough, United Kingdom, and of the NIHR Leicester-Loughborough Diet, Lifestyle and Physical Activity Biomedical Research Unit, Loughborough, United Kingdom.
Prof. Zecca received the University degree (Laurea) in Electrical Engineering from the University of Pisa, Italy in 1998, and the Ph.D. degree in Biomedical Robotics from the Scuola Superiore Sant’Anna, Italy, in 2003.
Before joining Loughborough University he worked at Waseda University, Tokyo, Japan, from 2003 to 2013, in the Integrated Mind-body Mechanism Laboratory, Department of Modern Mechanical Engineering, School of Creative Science and Engineering, which is also part of TWIns, the Center for Advanced Biomedical Sciences of Waseda University, in collaboration with the Tokyo Women's Medical University. The main scientific interests of Prof. Zecca are the observation, analysis and clarification of human performance in high dexterity tasks, with a specific interest in human-robot interaction and medical training.

2014.11.28 Double seminar
Part 1: Antoine Seilles (CEO, NaturalPad)
MediMoov: Physio-Gaming

Abstract: Population increase and aging have generated major therapeutic challenges (e.g., the incidence of cognitive decline, movement disorders, and stroke) that are becoming a growing economic burden for health care systems worldwide. Access to re-educational resources is limited, so additional approaches such as home-based rehabilitation programs are increasingly being explored. NaturalPad, a company that specializes in physio-gaming, develops therapeutic movement-based video games for health. Combining applied research in health science with the active participation of occupational therapists, our team creates entertaining games for physiotherapeutic rehabilitation. Using motion capture, we facilitate the amplitude and reach of movements to produce lasting improvements. Other techniques we exploit involve rhythmic auditory stimulation, which augments temporal prediction and thereby facilitates movement planning and initiation. NaturalPad also develops MediMoov, a web platform for physio-gaming. This service will make our games available at home to the clinical population. Our products are currently under scientific validation, being implemented in physiotherapy practices around France, where clinical data are gathered for further publications. Physio-gaming proposes an efficient, self-motivating therapeutic method to improve mobility, independence, and quality of life.

Biography: Antoine Seilles completed his PhD in computer science under the co-direction of Jean Sallantin and Nancy Rodriguez (LIRMM) in France. The thesis revolved around Big Data and e-democracy, the use of Semantic Web technologies for the representation of online debates, and Social Network Analysis. He co-founded NaturalPad in August 2013.

Part 2: Jean-Charles Etienne Benoit (Research engineer, Euromov)
Benefits of auditory cueing on Parkinsonian gait: Effect of individual differences

Abstract: Auditory cueing is a viable method to improve gait kinematics in patients with Parkinson’s Disease (PD). In the presence of rhythmical auditory cues (e.g., a metronome or music) PD patients tend to walk faster and to increase their stride length. These beneficial effects of auditory cueing are seen during the stimulation, and can carry over to uncued gait after a period of training. However, important individual differences can be observed. Some patients benefit from the training, whereas others are non-responders. A positive response to auditory cueing may be linked to spared timing mechanisms, which allow patients to synchronize their steps to the auditory cues during training. This possibility was examined in 15 PD patients, who were submitted to extensive testing of perceptual and sensorimotor timing as well as gait kinematics before and after a four-week auditory-cuing training program. During training patients walked with familiar music having an embedded metronome, presented at a tempo around their spontaneous cadence. Perceptual and sensorimotor timing was evaluated with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). Patients’ performance was compared to that of 20 age-, gender-, and education-matched healthy controls. The training led to improvements in most of the patients’ gait behavior (i.e., with increased speed and greater stride length), visible up to 1 month after the therapy. The performance in a synchronized tapping task before the training was a good predictor of the success of auditory cueing. Patients showing high accuracy and low variability in synchronization were most responsive to the training. Moreover, patients, who were most responsive to cues before training (i.e., showing an increase in speed and stride length) were most likely to improve their gait after the training. 
These findings indicate that individual differences in sensorimotor timing, a crucial skill when walking with an auditory cue, should be taken into account when deciding on training programs such as auditory cueing in the rehabilitation of PD.

Biography: Jean-Charles Etienne Benoit completed his master's degree in Quebec (Canada) under the co-direction of Dr. Remi Quirion at McGill University (Montreal) and Dr. Philippe Sarret at Sherbrooke University (Sherbrooke). The work revolved around neuropharmacology and the use of cognitive enhancers to improve memory performance, leading to publications in the Journal of Neuroscience and Trends in Pharmacological Sciences. He was then recruited under a Marie Curie Foundation grant for doctoral research in music psychology. The thesis revolves around the use of music and rhythm as a form of training for movement disorders in Parkinson's disease. During the thesis, he worked between Warsaw (Poland), Leipzig (Germany) and Montpellier (France) under the co-direction of Dr. Simone Dalla Bella (Euromov) and Dr. Sonja Kotz (Max Planck Institute).

2014.10.29 by Ko Yamamoto (Assistant Professor, University of Tokyo)
Unified Framework for Switching Multiple Control Strategies of Humanoid Robot

Abstract: Human-like bipedal walking is a goal of humanoid robotics. It is especially important to provide a robust falling prevention capability by imitating the human ability to switch between control strategies in response to disturbances, e.g., standing balancing and stepping motion. However, the motion control of a humanoid robot is challenging because the contact forces are constrained. In this talk, I will introduce a unified framework for control strategy switching based on the maximal output admissible (MOA) set, which is a set of initial states that satisfy the constraints.
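As a rough illustration of the MOA idea (not Prof. Yamamoto's actual formulation), the set can be described by stacking a state constraint over future steps of the closed-loop dynamics, and controller switching reduces to a membership test; the dynamics, constraint and numbers below are invented for the sketch.

```python
import numpy as np

def moa_halfspaces(A, C, d, N=50):
    """Half-space description of the maximal output admissible set
    O = { x : C A^k x <= d for k = 0..N } for stable x[t+1] = A x[t]."""
    rows, rhs = [], []
    Ak = np.eye(A.shape[0])
    for _ in range(N + 1):
        rows.append(C @ Ak)   # constraint propagated k steps ahead
        rhs.append(d)
        Ak = A @ Ak
    return np.vstack(rows), np.concatenate(rhs)

def keep_balancing(x, H, h):
    """True if the state stays admissible under the balance controller's
    closed-loop dynamics; otherwise the robot should switch to stepping."""
    return bool(np.all(H @ x <= h + 1e-9))

# Toy closed-loop dynamics with a CoP-like bound |x1| <= 1
A = np.array([[0.9, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0], [-1.0, 0.0]])
d = np.array([1.0, 1.0])
H, h = moa_halfspaces(A, C, d)
```

For instance, `keep_balancing(np.array([0.5, 0.0]), H, h)` accepts a mild disturbance, while `keep_balancing(np.array([2.0, 0.0]), H, h)` rejects a state that already violates the constraint, triggering the switch.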

Biography: Ko Yamamoto (山本 江) is an assistant professor at the Department of Mechanical Engineering, University of Tokyo. He received the B.S. and M.S. degrees from the University of Tokyo in 2004 and 2006, respectively, and his Ph.D. from the University of Tokyo in 2009. He was a postdoctoral research fellow at Tokyo Institute of Technology (2009-2012), and an assistant professor at the Ecotopia Science Institute, Nagoya University (2012-2014). He moved to the University of Tokyo as an assistant professor in April 2014. From October to December 2012, he was a visiting researcher at Stanford University. His research interests include mechanical design and motion control of humanoid robots, and control of swarm behavior in pedestrian flows.

2014-9-16 Sebastien Cagnon (Head of Technical Support, Aldebaran, Tokyo, Japan)
Creating Efficient Human Robot Interface for Humanoid Robot Applications

Abstract: How do you create applications and make robots usable? When a robot has no interface with buttons to click, interacting with it can be quite confusing. And although the robot can react to what the user says, the user usually doesn't know what to say. How do you create an interface that feels as natural as possible? How do you make sure the user will say the right words? How do you manage the uncertainty of the real world and of the user's reactions?

Biography: Sebastien Cagnon (セバスティアン カニオン) is the head of Technical Support at Aldebaran Japan. He holds a Master of Mechano-Informatics from the University of Tokyo. Sebastien has created dozens of applications for Nao and Pepper over four years. His work as a robotic app creator leads him to constantly explore new ways for humans and robots to interact.

2014.7.29 11:00-12:30 Dr. Sung Park (UX Engineering Group, User Experience Center - DMC R&D Center, Samsung Electronics Co. Ltd., Korea)
Toward a Psychological Science of Robotics Design

Abstract: Developers of humanoid robots strive to build the most effective and useful robots within their design space. This requires decisions about the functions of robots, the interaction with users, and the kind of user benefit. Psychology has much to offer in making such decisions, from understanding what users need to defining the capabilities and limitations of design characteristics. This talk will highlight how research in psychology has advanced the understanding of robotics design and what is required for developers to readily benefit from this. A discussion of the implications for, and the future of, HRI research will follow.

Biography: Sung Park (ソーン パーク) received the B.A. degree in psychology from Korea University in 2001, the M.Sc. degree in human-computer interaction from the University of Michigan, Ann Arbor in 2003, and the Ph.D. degree in engineering psychology from the Georgia Institute of Technology in 2009. He was a senior UX (User Experience) designer at Samsung Electronics, with five years of design and research experience across a wide range of Samsung products and services, including Samsung's first-generation medical devices. Most recently he led the UX engineering group, responsible for renovating Samsung's design methods and processes. His research interests include the psychological science of robotics design and social interaction with non-human entities.

2014.5.21 15:00-16:30 Dr. Ganesh Gowrishankar (CNRS-AIST JRL, Intelligent Systems Research Institute, AIST, Tsukuba, Japan)
Human Centric Robotics: from neuroscience to robot control for human-robot interactions

Abstract: Whenever two humans physically interact, as when dancing the tango, their movements are determined by complex mechanical and control interplays between the motion and forces generated by each individual. Understanding these interplays is essential for the development of future robots in rehabilitation, biomedical devices and tele-operation systems, so as to ensure that the interacting human is comfortable with the robot, feels safe with it, and benefits physically and psychologically from it. However, this is not a trivial task, because human interactions change not only with individual body dynamics and control but also with cognition, age and disease. The reason we feel comfortable when interacting with another human is that the other human can understand our behavior in all these aspects and respond accordingly; my research aims to develop similar abilities in future robots. Through integrated research in robotics, biomechanics, motor psychophysics, control and social neuroscience, I aim for a comprehensive understanding of human-robot interactions and to develop human-like interaction abilities in robots. In this talk I will introduce my work and present an example of a human interaction experiment to show how mechanics, engineering, robotics and neuroscience can be combined to understand human interaction dynamics and in turn be used to develop better interaction design and behavior in robots.

Biography: Gowrishankar Ganesh (ゴーリシャンカー ガネシュ) received his Bachelor of Engineering (first-class, Hons.) degree from the Delhi College of Engineering, India, in 2002 and his Master of Engineering from the National University of Singapore, in 2005, both in Mechanical Engineering. He received his Ph.D. in Bioengineering from Imperial College London, U.K., in 2010. He worked as a Researcher in the Lab of Dr. Mitsuo Kawato at the Advanced Telecommunication Research (ATR), Kyoto, Japan, from 2004 through his PhD. Following his PhD, he worked at the National Institute of Information and Communications Technology (NICT) as a Specialist Researcher until December 2013. Since January 2014, he has been a Senior Researcher at the Centre National de la Recherche Scientifique (CNRS), and is currently located at the CNRS-AIST joint robotics lab (JRL) in Tsukuba, Japan. He is a visiting researcher at the Centre for Information and Neural Networks (CINET) in Osaka, ATR in Kyoto and the Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM) in Montpellier. His research interests include human sensorimotor control and learning, robot control, social neuroscience and human-robot interaction. His recent awards include the Best Robot Video award at IROS 2012, the Best Journal Paper award in 2011 in IEEE Transactions on Robotics, and a Best Cognitive Robotics Paper nomination at ICRA 2010.

2014.3.18 14:30-15:30 Prof. Wisama Khalil (Institut de Recherche en Communication et Cybernetique de Nantes, France)
Dynamic Modeling of Floating Systems: Application to Eel-like Robot and Rowing system
This seminar was part of the Japan-IFToMM Robotics Seminars

Abstract: This talk presents the dynamic modeling of floating systems, with applications to a three-dimensional swimming eel-like robot and a rowing-like system. To obtain the Cartesian evolution during the design or control of these systems, the dynamic models must be used. Owing to the complexity of such systems, efficient and simple tools are needed to obtain their models. For this goal we propose an efficient recursive Newton-Euler approach which is easy to implement. It can be programmed either numerically or using efficient customized symbolic techniques.
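The flavor of a recursive Newton-Euler computation can be conveyed by a much simpler toy than the floating systems of the talk: inverse dynamics of a fixed-base planar arm with a point mass at each link tip, done as an outward kinematic pass followed by an inward force/moment pass. All models and numbers here are illustrative only.

```python
import numpy as np

def cross2(u, v):
    """z-component of the 2D cross product u x v."""
    return u[0] * v[1] - u[1] * v[0]

def rne_planar(q, dq, ddq, m, l, g=9.81):
    """Joint torques of an n-link planar arm (point mass at each link tip)
    via a Newton-Euler recursion."""
    n = len(q)
    th = np.cumsum(q)    # absolute link angles
    w = np.cumsum(dq)    # absolute angular velocities
    al = np.cumsum(ddq)  # absolute angular accelerations
    # Outward pass: position offset and absolute acceleration of each tip mass
    a_joint = np.zeros(2)  # the base is fixed
    r, a = [], []
    for i in range(n):
        ri = l[i] * np.array([np.cos(th[i]), np.sin(th[i])])
        ai = a_joint + al[i] * np.array([-ri[1], ri[0]]) - w[i] ** 2 * ri
        r.append(ri); a.append(ai)
        a_joint = ai           # the next joint rides on this tip
    # Inward pass: accumulate forces and moments back toward the base
    grav = np.array([0.0, g])  # d'Alembert: add +g to accelerations
    f_next, n_next = np.zeros(2), 0.0
    tau = np.zeros(n)
    for i in reversed(range(n)):
        Fg = m[i] * (a[i] + grav)            # inertial + gravity force
        ni = cross2(r[i], Fg + f_next) + n_next
        tau[i] = ni
        f_next, n_next = Fg + f_next, ni
    return tau
```

A quick sanity check: for a single horizontal link with m = 1 kg, l = 1 m at rest, the recursion returns the static holding torque m*g*l = 9.81 N·m.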

Biography: Prof. Wisama Khalil (カリル ウイサマ) received the Ph.D. and the “Doctorat d’Etat” degrees in robotics and control engineering from the University of Montpellier, France, in 1976 and 1978, respectively. Since 1983, he has been a Professor at the Automatic Control and Robotics Department, Ecole Centrale de Nantes, France. He is the coordinator of the Erasmus Mundus master course EMARO “European Master in Advanced Robotics”. He carries out his research within the Robotics team, Institut de Recherche en Communications et Cybernétique de Nantes (IRCCyN). His current research interests include modeling, control, and identification of robots. He has more than 100 publications in journals and international conferences.

2014.2.28 Prof. Atsushi Hiyama ("Senior Cloud", The University of Tokyo, Tokyo, Japan)
Can Telepresence Robots Revolutionize the Labor Market of a Hyper-aged Society?

Abstract: Telepresence technology has the potential to enhance the quality of the teleworking environment and to expand the range of work available in the teleworking market. In the hyper-aged society, one of the major issues Japan is facing, teleworking can give senior citizens more chances to engage in working society in a way that suits their lifestyle after retirement. This talk will describe the current status of telepresence robots in the market and introduce a design method for telepresence systems from the point of view of human-robot interaction.

Biography: Prof. Atsushi Hiyama (桧山 敦) is an Assistant Professor and Lecturer in the Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, where he has been since 2006. He graduated from The University of Tokyo with a B.E. in 2001 and an M.S. in 2003, and received a Ph.D. in Engineering from The University of Tokyo in 2006. His research interests center on designing and implementing augmented reality, ubiquitous computing, and human-robot interaction systems. He introduced “ubiquitous gaming” as the first large-scale application of a ubiquitous computing system for a museum guide system at the National Museum of Nature and Science, Tokyo, in 2004. He is a member of ACM and IEEE.

2013.12.6 Dr. Vincent Bonnet (TUAT, GV Lab, Tokyo, Japan)
Toward minimum measured-input models for the understanding of human motor act

Abstract: This talk will address the problem of evaluating the physical functional limitations of patients suffering from postural pathologies, and of establishing the relationship between individual disability and biomechanics.
Starting from popular and simple experimental paradigms used in the rehabilitation field, predictive models based on biomechanical analysis, optimization processes and classical robotics tools have been developed. These models allow, at different levels of description of actuator dynamics and sensory feedback, a better understanding of the optimal behaviors and constraints acting on the postural system during rehabilitation exercises. After experimental validation on humans and humanoid robots, these models have been extended to capture specific invariants observed in hemiplegic patients.
In order to use the outputs of these models in clinical practice and daily life, new low-cost measurement methodologies for human movement have been developed with the aim of minimizing the complexity of the experimental setup. In these approaches, using a single inertial sensor or a mobile low-cost robotic platform, a global physical performance score can be provided as well as local variables related to segmental mechanics. These new and affordable measurement methodologies therefore contribute to establishing a relationship between biomechanics and disability. The validation of these approaches, carried out in the laboratory and with a humanoid robot, has shown that they possess strong potential for future rehabilitation applications. Finally, a brief overview of ongoing and future collaborations and projects will be presented.


Biography: Dr. Vincent Bonnet (バンサン ボネ) received the B.E. degree in electrical engineering in 2005, and the Ph.D. degree in automatic control and robotics in 2009, both from the University of Montpellier 2. From 2010 to 2012 he worked as a post-doc in Rome, Italy, under the direction of Professor Aurelio Cappozzo, a historical reference in biomechanics and motion analysis. In 2013 he was a teaching assistant at the University of Montpellier and belonged to the Euromov Institute, where he was in charge of research related to motion analysis. Since November 2013 he has been a JSPS visiting researcher with the GV Lab at the Tokyo University of Agriculture and Technology. His research interests include various aspects of low-cost sensors, motion analysis, humanoid robotics, system identification and control. He teaches courses in kinematics and dynamics, automatic control and biomechanics.

2013.10.28 Silvio Traversaro (Italian Institute of Technology, Genova, Italy)
Inertial Parameter Identification on the iCub robot

Abstract: The iCub is an open humanoid robot developed at the Italian Institute of Technology since 2005. In humanoid robots an accurate dynamic model is crucial for control and balancing. While most humanoid robots exploit joint torque sensing or end-effector force/torque sensing for identifying the inertial parameters, the iCub is equipped with internal force/torque sensors originally designed for force control. These internal force/torque sensors can be used to estimate the inertial parameters by separately considering the different subtrees that compose the robot's dynamical tree structure.
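The idea underlying such estimation is that rigid-body dynamics are linear in the inertial parameters, so identification reduces to linear least squares on a regressor built from measured motion. A one-link pendulum makes a minimal stand-in for the iCub subtree case; the masses, lengths and sample counts below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.81
m_true, l_true = 2.0, 0.5
pi_true = np.array([m_true * l_true**2, m_true * l_true])  # params [m l^2, m l]

# Synthetic trajectory samples (angle, angular acceleration) and torques,
# from the pendulum dynamics tau = (m l^2) q'' + (m l) g sin(q)
q   = rng.uniform(-np.pi, np.pi, 200)
ddq = rng.uniform(-5.0, 5.0, 200)
tau = pi_true[0] * ddq + pi_true[1] * g * np.sin(q)

# Regressor: each torque sample is linear in the unknown parameters
Y = np.column_stack([ddq, g * np.sin(q)])
pi_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
```

With noise-free synthetic data the least-squares estimate recovers the true parameter vector; on a real robot the same structure is used with measured (noisy) torques or force/torque readings.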

Biography: Silvio Traversaro (シルビオ トラベルサロ) was born in Chiavari (Genoa) in 1988. He received his Bachelor's degree in Computer Science Engineering in 2011 and his Master's degree in Robotics Engineering in 2013, both from the University of Genoa (Italy). He is currently working at the Italian Institute of Technology on inertial parameter identification and data fusion applied to robot dynamics, in the context of the CoDyCo European project.

2013.10.4 Dr. Vincent Berenz (RIKEN Brain Science Institute, Nagoya, Japan)
How to program small-sized commercial robots to perform dynamic behaviors

Abstract: Small humanoid robots are becoming more affordable and are now used in fields such as human-robot interaction, ethics, psychology, or education. For non-roboticists, the standard paradigm for robot visual programming is based on the selection of behavioral blocks, followed by their connection using communication links. These programs provide efficient user support during the development of complex series of movements and sequential behaviors. However, implementing dynamic control remains challenging because the data flow between components to enforce control loops, object permanence, the memories of object positions, odometry, and finite state machines has to be organized by the users. We developed a new programming paradigm, Targets-Drives-Means, which is suitable for the specification of dynamic robotic tasks. In this proposed approach, programming is based on the declarative association of reusable dynamic components. A central memory organizes the information flows automatically and issues related to dynamic control are solved by processes that remain hidden from the end users.

Biography: Vincent Berenz (べレンズ バンサン) received a Chemical Engineering degree from the Ecole Superieure de Chimie Organique et Minerale (France), a Master of Business Engineering in Bio-informatics from the Ecole de Biologie Industrielle (France) and a Ph.D. in Engineering (Intelligent Interactions Technologies) from the University of Tsukuba (Japan), in 2001, 2002 and 2012 respectively. He worked as a software engineer at CEREP (USA and France) and PharmaDesign (Japan) from 2002 to 2006 and from 2006 to 2007, respectively. Since 2013, he has been a research scientist at the RIKEN Brain Science Institute / Toyota Collaborative Center.

2013.5.17 Dr. Thomas Moulard (AIST-CNRS Joint-Robotics Laboratory, Tsukuba, Japan)
RobOptim: an Optimization Framework for Robotics

Abstract: Numerical optimization is useful in various areas of robotics. However, tackling optimization problems properly requires the use of non-trivial algorithms whose tuning is challenging. RobOptim aims at providing a unified framework for different categories of optimization problems, while relying on strong C++ typing to ensure efficient and correct computations. This talk presents the software, demonstrates its genericity and illustrates its current use with two full-scale robotics examples.
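RobOptim itself is a C++ framework; as a rough Python analogue of the kind of problem it formulates (a cost minimized under constraints, handed to an off-the-shelf NLP solver), one might write the following, with a made-up toy cost and constraint:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize (x - 1)^2 + (y - 2)^2 subject to x + y <= 2
cost = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2
# scipy expects inequality constraints in the form g(z) >= 0
cons = [{"type": "ineq", "fun": lambda z: 2.0 - (z[0] + z[1])}]

res = minimize(cost, x0=np.zeros(2), method="SLSQP", constraints=cons)
```

The unconstrained minimizer (1, 2) violates the constraint, so the solver returns its projection onto the boundary, (0.5, 1.5); RobOptim plays the same role of decoupling the problem statement from the particular solver.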

Biography: Thomas Moulard (トマ ムラー) has been a JSPS fellow at the CNRS-AIST JRL since November 2012. He graduated from the EPITA engineering school in 2008. He prepared his thesis under the supervision of Florent Lamiraux and obtained his Ph.D. in 2012 from INP Toulouse (Institut National Polytechnique). His research interests are: humanoid robotics, numerical optimization, robotics architecture and system integration.

2013.4.18 Double seminar
Part 1: Toshiaki Hatano (University of Bristol, Department of Mechanical Engineering, Advanced Control and Test Laboratory, UK)
The Adaptive Reference-State Minimal Control Synthesis Algorithm – with application to the control of a multi-stage shaking-table

Abstract: Currently, conventional linear control schemes (e.g. PID and TVC) are widely used on many shaking-tables, but their control performance can significantly deteriorate in the face of dynamic uncertainties, non-linearities and external disturbances. Generally, a feedback control strategy is employed to compensate for such disturbances and improve the closed-loop stability of the system. In practice, however, many shaking-table facilities have some controller design restrictions: primarily, performance enhancement within the feedback-loops is explicitly not allowed due to confidentiality and/or safety reasons.
Besides these issues, shaking-table systems that can generate large-amplitude acceleration (more than ~20 m/s2) earthquake waves are in high demand, since recent earthquakes recorded in Japan have reached such levels of intensity. Nevertheless, most large shaking-tables cannot reproduce this level of acceleration due to mechanical limitations in their actuation systems.
This presentation describes how a conventional linear controller can be enhanced by a combined control strategy of the inverse dynamic compensation via simulation (IDCS) technique and a novel, feedforward adaptive controller, called the reference-state minimal control synthesis (RSMCS) algorithm. Furthermore, the development of a new shaking-table configuration, called a multi-stage shaking-table which is used to generate a high-acceleration table motion, is also introduced. Comparative implementation tests show that significant performance improvements are achieved with the new control scheme, in spite of non-linearities and parameter changes in the controlled system.


Biography: Toshiaki Hatano is a final year PhD student at the University of Bristol, under the supervision of Professor David Stoten. His research interests include: Adaptive control, Demand modification technique, Data fusion (composite filter), Shaking-table control, Control synthesis of real-time dynamically substructured systems (DSS).

Part 2: Dr. Ryuta Enokida (University of Bristol, Department of Mechanical Engineering, Advanced Control and Test Laboratory, UK)
Dynamic Substructuring System for base isolated structures with rubber bearings

Abstract: Base-isolated structures with rubber bearings are becoming popular in countries with very high seismic risk. To examine the seismic performance of the rubber bearings, full-scale shaking-table tests tend to be carried out. These tests are certainly ideal as experiments, but they are not easy to realize because of cost and time. Therefore, an experimental methodology to evaluate the seismic performance of rubber bearings is strongly required. We are developing a real-time online experimental methodology for rubber bearings as an alternative to full-scale shaking-table tests.
This experimental methodology conducts a physical experiment only on the rubber bearings and accounts for the influence of the superstructure in a numerical simulation. By realizing the dynamic interaction between the physical experiment and the numerical simulation within 0.001 s, this methodology can be made equivalent to a shaking-table test of the base-isolated structure. Although this dynamic interaction was considered impossible for many years, Professor Stoten at the University of Bristol has recently proposed the Dynamic Substructuring System (DSS) to achieve this very fast interaction. We apply this method to the development of a real-time online experimental methodology for rubber bearings.
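The coupling scheme can be caricatured in a few lines: at each step a "physical" substructure (here just a stand-in function for the tested bearing) returns an interface force, which drives the numerically simulated superstructure, whose motion is fed back at the next step. All parameters and the toy bearing model are invented for illustration; the real DSS closes this loop with hardware within 0.001 s.

```python
import numpy as np

def bearing_force(x, v):
    # Stand-in for the physically tested rubber bearing:
    # linear stiffness plus viscous damping (toy model only).
    k_b, c_b = 4.0e5, 2.0e3
    return -k_b * x - c_b * v

m, dt = 1.0e4, 0.001     # superstructure mass [kg], coupling step [s]
freq = 3.0               # ground excitation frequency [Hz]
x, v = 0.0, 0.0          # interface displacement and velocity

for step in range(2000):
    t = step * dt
    f = bearing_force(x, v)                          # "measured" interface force
    a = f / m - 2.0 * np.sin(2 * np.pi * freq * t)   # numerical substructure
    v += a * dt                                      # semi-implicit Euler update
    x += v * dt                                      # fed back to the bearing
```

In the real system the `bearing_force` call is replaced by commanding an actuator and reading a load cell, which is why the sub-millisecond loop time matters.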

Biography: Ryuta Enokida (榎田竜太) received the B. A. Sc. degree in Architecture and Architectural Engineering from Kansai University in 2007, and the M. Eng. and Ph. D. degrees in Architecture and Architectural Engineering from Kyoto University in 2007 and 2012, respectively. From 2007 to 2012, Dr. Enokida worked with Professor Masayoshi Nakashima as an M. Eng. and Ph. D. student. Since then, Dr. Enokida has been working as a JSPS research fellow with Professor Takewaki at the Department of Architecture and Architectural Engineering, Kyoto University, and with Professor Stoten at the ACTLab at the University of Bristol, UK, developing a Dynamic Substructuring System for base-isolated structures with rubber bearings. His research interests include earthquake engineering, structural engineering and control engineering.

2013.3.13 Prof. Dana Kulic (University of Waterloo, Adaptive Systems Laboratory, Canada)
Human Behavior Analysis: Applications for Robotics, Automated Rehabilitation and Driver Modeling

Abstract: Human behaviour measurement and analysis are challenging problems, due to issues such as sensor and measurement system limitations, high dimensionality, and spatial and temporal variability. Automation of behaviour measurement and analysis would enable many applications, including imitation learning for robotics, automated rehabilitation monitoring and assessment, and driving and driver assistance systems. In this talk we will describe recent work in the Adaptive Systems Laboratory at the University of Waterloo developing techniques for fully automated human motion measurement and analysis. We will overview techniques for motion measurement from both stationary and ambulatory sensors. Approaches for designing the appropriate motion representation and abstraction will be discussed. Next, an approach for on-line, incremental learning of whole body motion primitives and primitive sequencing from observation of human motion will be developed. The application of the developed techniques to humanoid robot learning, sports training and rehabilitation, and driving data analysis will be described. The talk will conclude with an overview of preliminary experimental results and a discussion of future research directions.

Biography: Dana Kulic (クリチ ダナ) received the combined B. A. Sc. and M. Eng. degree in electro-mechanical engineering, and the Ph. D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2002 to 2006, Dr. Kulic worked with Dr. Elizabeth Croft as a Ph. D. student and a post-doctoral researcher at the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during the interaction. From 2006 to 2009, Dr. Kulic was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. Dr. Kulic is currently an Assistant Professor at the Electrical and Computer Engineering Department at the University of Waterloo, Canada. Her research interests include robot learning, humanoid robots, human-robot interaction and mechatronics.

2012.11.22 Emel Demircan (Stanford University, Artificial Intelligence Laboratory, Stanford, USA)
Robotics-based Reconstruction and Synthesis of Human Motion

Abstract: Understanding human motion requires accurate modeling of the kinematics, dynamics, and control of the human musculoskeletal system to provide the bases for the analysis, characterization, and reconstruction of our movements. In motion analysis, we present methodologies to characterize human postural behaviors and dynamic skills in a unified framework including the task, posture, contact with the environment and physiological capacity. We develop human performance metrics and use the information given by the musculoskeletal models mapped into the motion of the human in a task-oriented simulation framework. In motion control and synthesis, we present algorithms for redundancy resolution and real-time control of the musculoskeletal system. For muscle redundancy resolution, we use hybrid electromyography and conventionally computed muscle control methods and apply them for dynamic simulations of human movement. Robotics-based reconstruction and synthesis provide a basis for understanding natural human motion and the tools applicable for efficient robot control, human performance prediction, or synthesis of novel motion patterns in the areas of robotics research, athletics, rehabilitation, physical therapy and computer animation.

Biography: Dr. Emel Demircan is a post-doctoral scholar working with Professor Oussama Khatib in the Robotics Research Laboratory in the Computer Science department at Stanford University. Her research focuses on the application of dynamics and control theory for the simulation and analysis of biomechanical and robotic systems. Her research interests include human motion dynamics, control and simulation; sports biomechanics; robotics for rehabilitation; and motion analysis for physiotherapy exercises. Emel received both her Ph.D. and her M.S. in Mechanical Engineering from Stanford University and her B.S. in Mechanical Engineering and in Industrial Engineering from Robert College.

2012.07.24 Hideki Kadone (University of Tsukuba, Center for Cybernics Research, Tsukuba, Japan)
Synergies in human motion at the base of intelligence, expression of emotion, and motion assistive robotics

Abstract: Synergy in human motion is based on kinetic coordination of body segments and muscles. Synergy is important not only in motion control but also in the design of intelligence for robots, in terms of representation and communication. A systematic attempt to program this function in robots will first be described. Next, another case will be presented in which modification of synergies plays an important role: emotion. Emotional gait patterns are examined in humans from both the motion generation and the motion perception sides. Finally, the application of synergy to motion assistive robotics will be presented with some examples. For assistive robots to move in coordination with the human body, the coordination inherent in humans has to be modeled and implemented in the controllers of these robots.

Biography: Hideki Kadone (門根秀樹) is currently an assistant professor in the Center for Cybernics Research project at the University of Tsukuba. He received his Ph.D. in information science and technology from the University of Tokyo in 2008. Through his education and his experience as a post-doc at the College de France, he has pursued motion analysis and its application to intelligent information processing systems for robots, human movement physiology, and human movement assistive robotics.

2012.05.21 Qin Zhang (TUAT, GV Lab, Tokyo, Japan)
Evoked EMG-based torque prediction for muscle fatigue tracking and closed-loop torque control in functional electrical stimulation

Abstract: Functional electrical stimulation (FES) is a promising technique for providing active improvement to spinal cord injured (SCI) patients in terms of mobility, stability and side-effect prevention. FES-elicited muscle force must be appropriate and persistent to perform an intended movement or maintain postural balance. However, muscle state changes such as muscle fatigue degrade the performance of FES. In addition, most complete SCI patients do not have the sensory feedback to detect fatigue, and in-vivo joint torque sensors are not yet available. Conventional FES control systems are either open-loop or not robust to muscle state changes. Therefore, the development of joint torque prediction and feedback control methods is important to enhance the joint torque control of FES in terms of accuracy, robustness, and safety for the patients.

Biography: Qin Zhang (張琴) is currently a JSPS research fellow at TUAT. She received the M.S. degree from the Graduate School of Engineering at Huazhong University of Science and Technology in 2003, and the Ph.D. degree in automatic and microelectronic systems from Montpellier University II in 2011. She was a lecturer at Wuhan Institute of Technology from 2003 to 2008. Her research interests include biosignal processing and its applications, and rehabilitation technology.

2012.02.17 Kanako Miura (AIST, Humanoid Research group, Tsukuba, Japan)
HRP-4C: a humanoid robot with a human-like appearance, and the generation of its motions

Abstract: It is difficult to identify practical applications for current biped humanoid robots. One probable application is the entertainment industry, such as exhibitions and fashion shows, provided the robots can move very realistically, like humans. "HRP-4C" was developed for such use: it has the appearance and shape of a human being, and can walk and move like one.
In this talk, several studies enabling HRP-4C to move like a human (specifically, a woman) will be introduced, including walking and turning using its toes. The motion of HRP-4C is generated with reference to human motions obtained by a motion capture system. However, it is difficult to manage its balance and “human-likeness” simultaneously, so approaches to solving these problems will also be presented.

Biography: Kanako Miura (三浦郁奈子) is a researcher at the National Institute of Advanced Industrial Science and Technology (AIST). She received her Ph.D. in information science from Tohoku University in 2004 and in “Electronique, Electrotechnique, Automatique” from Universite Strasbourg 1 (France) in 2004. She was a post-doc researcher in the field of visual servoing at Tohoku University from 2004 to 2005, and worked on a bilateral control system and an augmented reality system at NTT Docomo from 2005 to 2007, before becoming a researcher in the Humanoid Research Group at AIST. Her research interests include human movement analysis and motion planning for humanoid robots.

2011.11.14 Ritta Baddoura (Universite Paul Valery - Montpellier 3, CRISES: Interdisciplinary Research Center in Human and Social Sciences, France)
Being familiar with the robot: emotional and mental expressions of familiarity in a human interacting with a robot

Abstract: In this research, we are interested in the most basic emotional and mental states experienced by a human sharing a common space, be it public or private, with a robot. In this perspective, the concept of familiarity can provide an interesting and essential approach to the various applications and uses of human-robot interaction. Even though most of us have already experienced (and are familiar with) being familiar or unfamiliar with a person, object, animal or place, familiarity is still a theoretically poor concept and lacks a clear and solid definition. This research project aims to learn more about familiarity and to gain insight into the way it is experienced and recognized by a person: is it identified as an emotional state and/or as a mental one? It is also about understanding which aspects of the robot’s non-verbal communication (gait, movement, physical appearance, sound, etc.) are perceived as familiar or unfamiliar, and how this perception influences human feelings of security and empathy and, on a more observable level, people’s readiness to interact and cooperate with the robot. The research will be conducted with both quantitative and qualitative approaches.

Biography: Ritta Baddoura is a psychologist and a writer. She received her vocational master’s degree in clinical & pathological psychology from St-Joseph University, Lebanon, in 2006 and her research master’s degree in psychoanalysis & aesthetics from Universite Montpellier 3, France, in 2009. She has worked in the fields of child and adolescent psychology, addiction & harm reduction, gender studies, art therapeutic mediation and post-traumatic stress disorder. She is currently enrolled in a Ph.D. program in psychology (3rd year) at Universite Montpellier 3. Her thesis tackles human-robot interaction and studies the human desire involved in building androids, the robot as an ancient and futuristic figure, and the emotional dimension of HRI. Baddoura’s work is rooted in a creative and multidisciplinary approach to research and finds inspiration in her literature and art practice.

2011.09.16 Tomomichi Sugihara (Osaka University, Motor Intelligence Laboratory, Japan)
Dynamics morphing toward robust and autonomous biped control

Abstract: "Dynamics morphing" is a paradigm for designing an autonomous, namely non-time-slaved, controller for various motor skills which require complicated body manipulation under continuously or discontinuously varying dynamical constraints. Bipedalism is a typical example of such motor skills, locating and locomoting the body in a world with many uncertainties in environments and events. In this talk, a unique controller which unifies biped standing, stepping and steady walking, even under disturbances, is introduced. A dynamically morphing property between regulatory and oscillatory behaviors enables general bipedalism without the help of any pre-planned motion trajectory.

Biography: Tomomichi Sugihara has been an associate professor at the Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, since 2010. He received his Ph.D. from the University of Tokyo in 2004. He was a research associate at the University of Tokyo from 2004 to 2005, and then became an assistant professor. He moved to Kyushu University as a program associate professor in 2007. His research interests include kinematics, dynamics computation, motion planning, control, hardware design and software development of anthropomorphic robots. He is the principal investigator of the Motor Intelligence Laboratory.

2011.07.19 Mitsuhiro Hayashibe (INRIA, DEMAR project Team, Montpellier, France)
Human sensory motor system and rehabilitation robotics

Abstract: In this course, we overview the basics of the human sensory-motor system. How robotics can be applied to functional rehabilitation is discussed, along with some examples of ongoing research.
This seminar is part of the master course on "multibody dynamics".

Biography: Mitsuhiro Hayashibe received the B.S. degree in mechano-aerospace engineering from the Tokyo Institute of Technology in 1999, and the M.S. and Ph.D. degrees from the Graduate School of Engineering, the University of Tokyo, in 2001 and 2005, respectively. He was an assistant professor at the Jikei University School of Medicine, Department of Medicine, Research Center for Medical Sciences, from 2001 to 2006, and a postdoctoral fellow at INRIA Sophia Antipolis and LIRMM from 2007. Since 2008, he has been a research scientist with INRIA and LIRMM, Computational Medicine and Neurosciences, DEMAR project. His research interests include modeling and identification of neuromuscular dynamics and biomechanics. He received the Best Paper Award from the Journal of the Japanese Society for Computer-aided Surgery and the CAS Young Investigator Award, Gold Prize, from Hitachi Medical Systems. He is a member of the IEEE Engineering in Medicine and Biology Society, and the French-side leader of the Japan-France Integrated Action Program AYAME supported by JSPS and INRIA (2010-2012).

2011.04.25 Yuka Ariki (AIST - Digital Human Research Center, Japan)
Acquisition of common expressions between humans and robots to solve problems in imitation learning

Abstract: For humanoid robots with many degrees of freedom, a considerable amount of time is required to prepare multiple motions in advance, since the number of combinations of joint angle trajectories is quite large. Imitation learning is considered a suitable approach to initialize parameters in this vast search space. However, direct use of the instructor's motion trajectories often fails because of the difference in physical properties between the instructor and the robot. For example, a humanoid robot can fall over or hit its own body with its own hand if it directly copies the corresponding joint trajectories of an instructor's behavior. In this talk, I introduce our approaches to dealing with two major problems involved in the imitation learning paradigm: 1) the kinematic properties of a demonstrator and an imitator are different, and 2) the dynamical characteristics of a demonstrator and an imitator are different. For the first problem, we find a shared low-dimensional latent space between the demonstrator's and imitator's postures. As a result, the imitator is able to acquire the corresponding demonstrator's movements from this shared low-dimensional latent space. For the second problem, we estimate ground reaction forces from the captured demonstrator's movements so that the imitator can generate physically consistent imitated behaviors.

Biography: Yuka Ariki (有木 由香) received her M.E. and Ph.D. degrees in information science from the Nara Institute of Science and Technology, Japan, in 2007 and 2010 respectively, under Prof. Mitsuo Kawato. From 2009 to 2010, she worked under Katsu Yamane and Jessica Hodgins as an intern at Disney Research Pittsburgh. She is currently a post-doctoral fellow at the National Institute of Advanced Industrial Science and Technology (AIST) Digital Human Research Center.

2010.10.14 Prof. Dana Kulic (University of Waterloo, Adaptive Systems Laboratory, Canada)
Human Movement Observation and Analysis for enabling Humanoid Robot Learning

Abstract: As robots move into human environments, the ability to learn and imitate by observing human behaviour will become important. The talk will focus on our recent work on designing humanoid robots capable of continuous, on-line learning through observation of human movement. Learning behaviour and motion primitives from observation is a key skill for humanoid robots, enabling humanoids to take advantage of their body structure's similarity to that of humans. First, approaches for designing the appropriate motion representation and abstraction will be discussed. Next, an approach for on-line, incremental learning of whole-body motion primitives and primitive sequencing from observation of human motion will be described. The second half of the talk will overview recent work on learning controllers for robot movement and imitation, and on accurate human movement observation from portable sensors. The talk will conclude with an overview of preliminary experimental results and a discussion of future research directions.

Biography: Dana Kulic (クリチ ダナ) received the combined B. A. Sc. and M. Eng. degree in electro-mechanical engineering, and the Ph. D. degree in mechanical engineering from the University of British Columbia, Canada, in 1998 and 2005, respectively. From 2002 to 2006, Dr. Kulic worked with Dr. Elizabeth Croft as a Ph. D. student and a post-doctoral researcher at the CARIS Lab at the University of British Columbia, developing human-robot interaction strategies to quantify and maximize safety during the interaction. From 2006 to 2009, Dr. Kulic was a JSPS Post-doctoral Fellow and a Project Assistant Professor at the Nakamura-Yamane Laboratory at the University of Tokyo, Japan, working on algorithms for incremental learning of human motion patterns for humanoid robots. Dr. Kulic is currently an Assistant Professor at the Electrical and Computer Engineering Department at the University of Waterloo, Canada. Her research interests include robot learning, humanoid robots, human-robot interaction and mechatronics.

2010.07.22 Dr. Claire Dune (Joint Robotics Laboratory (AIST/CNRS), Tsukuba)
Visual servoing for robot walk control

Abstract: In this talk, I will present a visual servoing scheme to control the walk of a humanoid robot using its on-board camera. Though most existing methods use a perception-planning-execution scheme, we propose to close the control loop so as to be robust to errors in the model (such as sliding of the feet on the ground, compliance of the ankles and neck, calibration errors, etc.). The basic idea is to control the robot's motion so that the current image features match some desired image features. First, I will present the specifics of our control of the HRP-2 walking motion, and we will see that using the information extracted from the on-board camera is challenging due to the peculiar motion induced by this walk. Then I will present our visual control, which is based on a 3D visual servoing control scheme. After a short introduction to visual servoing, I will briefly give an overview of generalised inverse kinematics for handling several tasks at the same time: for example, looking for an object while walking towards it. Finally, I will conclude with the difficulties that still need to be solved.

Biography: Claire Dune (デゥヌ クレル): In September 2005, I graduated from the Ecole Nationale Superieure de Physique de Strasbourg with a major in electrical engineering and computer vision. I also received a master's degree in optics, image processing, and robotics from the Universite Louis Pasteur, Strasbourg. I then received a grant from the CEA and the Brittany council to do a PhD on a vision-based grasping tool for the disabled with Eric Marchand. This project was a collaboration between INRIA Rennes - Bretagne Atlantique and the CEA-LIST. My PhD defense took place in April 2009 at the Universite de Rennes 1. Since July 2009, I have been a research associate at the JRL lab in Tsukuba with a one-year fellowship from the Japan Society for the Promotion of Science.

2009.07.21 Dr. Carson Reynolds (University of Tokyo, Ishikawa laboratory)
Meta-perception: bodily interfaces

Abstract: Sensors, actuators, wearable computers and wearable robots can do more than simply observe our bodies; these devices can alter and manipulate our perceptions. The Haptic Radar and other research projects are presented as examples of devices that act on phenomena related to the process of perception. Moreover, robotic and sensor systems which move reflexively are discussed.

Biography: Carson Reynolds (レノーヅ カーソン) is a project assistant professor in the Department of Creative Informatics at the University of Tokyo. He is co-founder of the Meta-Perception research group, which investigates methods for capturing and manipulating information that is normally inaccessible to humans and machines. His work has been discussed online in Wired, Make, Engadget, New Scientist, Smart Mobs, and Slashdot; in print in the Boston Globe, the Washington Times, and the German weekly Focus; and in broadcasts on National Public Radio in the US, The Discovery Channel's Daily Planet and Nippon TV's World's Best Lectures. He holds a Doctor of Philosophy and a Master of Science from the Massachusetts Institute of Technology; his research there was performed at the Media Laboratory in the Affective Computing Group.

copyright Gentiane Venture 2009