TAI AHR #02 - AI in Hardware and Robotics
Topic
Join us for our second event on AI in robotics, featuring talks on advances in robot learning, edge AI, and human-robot collaboration. The talks will cover innovations in contact-rich manipulation, the rise of edge AI, and the potential of inflatable robots for safe, human-friendly interaction. A world-leading robotics expert from the University of Edinburgh and The Alan Turing Institute will discuss the balance between autonomy and user oversight, and how ML techniques such as real-time learning, shared control, and compliant actuation enable robots to operate more autonomously while keeping humans in control.
Our Community
Tokyo AI (TAI) is a community composed of people based in Tokyo and working with, studying, or investing in AI. We are engineers, product managers, entrepreneurs, academics, and investors intending to build a strong “AI core” in Tokyo. Find more in our overview: https://bit.ly/tai_overview
Schedule
18:00 Doors open
18:30 Talks start (15-25 minutes per speaker)
20:00 Networking
21:00 Event ends
Speakers
Prof. Sethu Vijayakumar FRSE (LinkedIn, Research Lab)
Title: From Automation to Autonomy - Machine Learning for Next-generation Robotics
Abstract: The new generation of robots works much more closely with humans and other robots, and interacts significantly with the surrounding environment. As a result, the key paradigm is shifting from isolated decision-making systems to shared control, with significant autonomy devolved to the robot platform while end-users in the loop make only high-level decisions.
This talk will briefly introduce powerful machine learning technologies, such as robust multi-modal sensing, shared representations, scalable real-time learning and adaptation, and compliant actuation, that are enabling us to reap the benefits of increased autonomy while still feeling securely in control. This also raises a fundamental question: now that robots are ready to share control, what is the optimal trade-off between autonomy and control that we are comfortable with?
Domains where this debate is relevant include the deployment of robots in extreme environments, self-driving cars, asset inspection, repair and maintenance, factories of the future, and assisted living technologies such as exoskeletons and prosthetics, to name a few.
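For readers new to the idea, shared control can be pictured as a simple arbitration between a human command and an autonomous policy. The sketch below is an illustrative minimum, not Prof. Vijayakumar's method; the blending scheme, the `alpha` parameter, and all names are assumptions for exposition.

```python
import numpy as np

def shared_control(u_human: np.ndarray, u_auto: np.ndarray, alpha: float) -> np.ndarray:
    """Blend a human command with an autonomous policy command.

    alpha is the autonomy level: 0.0 is pure teleoperation,
    1.0 hands full control to the robot. Real systems typically
    adapt alpha online from confidence and context; here it is fixed.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * u_auto + (1.0 - alpha) * u_human

# Example: a 2-D velocity command, mostly autonomous but still steerable.
u = shared_control(u_human=np.array([0.0, 0.3]),
                   u_auto=np.array([0.5, 0.0]),
                   alpha=0.8)
print(u)  # [0.4  0.06]
```

In practice, the interesting research questions are exactly the ones the talk raises: how `alpha` should be chosen, and who gets the final say when the two commands disagree.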
Bio: Sethu Vijayakumar is a Professor of Robotics at the University of Edinburgh, UK, and the Founding Director of the Edinburgh Centre for Robotics. He has pioneered the use of large-scale machine learning techniques in the real-time control of several iconic robotic platforms such as the SARCOS and HONDA ASIMO humanoids, the KUKA-LWR robot arm, and the iLIMB prosthetic hand. He has held adjunct faculty positions at the University of Southern California (USC), Los Angeles, and the RIKEN Brain Science Institute, Japan. One of his landmark projects (2016) involved a collaboration with the NASA Johnson Space Center on the Valkyrie humanoid robot being prepared for unmanned robotic pre-deployment missions to Mars. He is a Fellow of the Royal Society of Edinburgh, a judge on BBC Robot Wars, and winner of the 2015 Tam Dalyell Prize for excellence in engaging the public with science. Professor Vijayakumar helps shape and drive the UK Robotics and Autonomous Systems (RAS) agenda in his recent role as the Programme Director for Robotics and Human AI Interfaces at The Alan Turing Institute, the UK’s national institute for data science and AI. Sethu also serves as Senior Independent Director (SID) on the Japan Small Caps fund with Baillie Gifford Investment and is interested in better understanding startup and scale-up dynamics among unlisted companies in Japan.
—
Tatsuya Kamijo (LinkedIn, Google Scholar)
Title: Robot Learning and Control for Contact-Rich Manipulation
Abstract: Robots are expected to perform a wide range of tasks autonomously. However, tasks that involve physical contact with the environment, known as contact-rich manipulation tasks, remain challenging for rigid robots, making it difficult to bring robots into our daily lives. In this talk, I will explore the challenges of contact-rich manipulation and introduce state-of-the-art learning-based approaches to tackle them, including our latest research on learning compliance control from a few demonstrations. We will also discuss the future of tactile sensing for human-level dexterity and robotic foundation models.
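As background for the talk, one standard building block for compliant, contact-rich manipulation is Cartesian impedance control: the robot behaves like a virtual spring-damper around a reference trajectory, and learning-from-demonstration methods often estimate the stiffness and the reference from a handful of demos. The snippet below is a generic textbook sketch under that framing, not the speaker's method; all gains and dimensions are made up.

```python
import numpy as np

def impedance_force(x, x_dot, x_ref, x_dot_ref, K, D):
    """Virtual spring-damper law: F = K (x_ref - x) + D (x_dot_ref - x_dot).

    K and D are stiffness and damping matrices; lowering the stiffness
    along the contact normal is what makes the motion compliant.
    """
    return K @ (x_ref - x) + D @ (x_dot_ref - x_dot)

# Example: a 3-D end-effector, stiff in-plane but soft along z (the contact direction).
K = np.diag([800.0, 800.0, 50.0])   # N/m, illustrative gains
D = np.diag([60.0, 60.0, 10.0])     # N*s/m
F = impedance_force(x=np.array([0.30, 0.00, 0.12]),
                    x_dot=np.zeros(3),
                    x_ref=np.array([0.30, 0.00, 0.10]),
                    x_dot_ref=np.zeros(3),
                    K=K, D=D)
print(F)  # small restoring force pressing gently toward the surface
```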
Bio: Tatsuya is a Master's student at the Matsuo-Iwasawa Lab at the University of Tokyo and a robotics research intern at OMRON SINIC X. His research focuses on robot learning and control for manipulation.
—
Mohammad Kia (LinkedIn, Google Scholar)
Title: A Glimpse of Edge AI in Industry and Daily Life
Abstract: In an era where AI often evokes thoughts of cloud computing, powerful servers, and high-performance processors, the quiet trend of Edge AI is revolutionizing industries and enhancing our daily lives by enabling small, energy-efficient devices to analyze data and make decisions in real time. From smart factories to everyday objects, Edge AI and AIoT create intelligent environments that can analyze and make decisions with minimal power consumption and latency. With wireless sensor networks whispering small pieces of information, these devices are transforming manufacturing, healthcare, and smart homes, offering another glimpse into the future.
This talk explores the power of embedded systems, low-power microcontroller units, and wireless sensor networks that enable AI to operate at the edge level, close to the sensors and the source of data. Understanding these technologies will reshape our perception of AI and unlock new possibilities at the very fringes of computing.
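To make the "close to the sensors" point concrete, here is a hypothetical on-device inference loop using TensorFlow Lite, a runtime commonly deployed on single-board and microcontroller-class hardware. The model file and tensor shapes are placeholders; a real MCU deployment would typically use the C++ TFLite Micro runtime rather than Python.

```python
import numpy as np
import tensorflow as tf

# Load a small quantized model; "sensor_model.tflite" is a placeholder name.
interpreter = tf.lite.Interpreter(model_path="sensor_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(window: np.ndarray) -> np.ndarray:
    """Run one inference on a window of sensor samples, entirely on-device:
    no round trip to the cloud, so latency and power stay low."""
    interpreter.set_tensor(inp["index"], window.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Example: classify one (1, 128, 3) window of accelerometer data
# (the shape must match whatever model you actually load).
scores = classify(np.zeros((1, 128, 3), dtype=np.float32))
```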
Bio: I am an embedded systems technical lead with over a decade of diverse experience in robotics, mechatronics, embedded systems, and control systems. I received my MS in mechatronics and love working with bleeding-edge technologies to create state-of-the-art devices. I combine software, firmware, electronics, and mechanics to deliver solutions for industrial applications. To make things happen, I either work with or lead a multidisciplinary team. I mainly develop microcontroller firmware and computer interface software, and design solution architectures for embedded and control systems.
—
Gangadhara Naga Sai Gubbala (LinkedIn)
Title: Future of Safe Human-Robot Collaboration: Design and Control of Inflatable Robots with Deep Learning
Abstract: Inflatable robots offer a promising solution for safe human-robot interaction, particularly in elderly care and environments requiring close physical collaboration. Their soft, lightweight structure and low inertia make them ideal for tasks that involve physical contact, minimizing the risk of injury. However, predicting and controlling deformations in these flexible robots is a complex challenge. We aim to address this by integrating deep learning models, enabling the robots to perform contact-based tasks suitable for domestic assistance, such as wiping surfaces and aiding in daily chores, while adapting to their surroundings. These robots are compact and easy to deploy, making them suitable for a wide range of applications, from healthcare to space exploration.
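As a rough illustration of why deep learning helps here: analytic deformation models for inflatable links are hard to derive, so a common pattern is to learn a forward dynamics model from data and plan through it. The PyTorch sketch below shows that general pattern with a naive one-step sampling planner; the dimensions, names, and the planner itself are assumptions for illustration, not the architecture used in this research.

```python
import torch
import torch.nn as nn

class DeformationModel(nn.Module):
    """Learned forward dynamics: (deformation state, pressure command) -> next state."""
    def __init__(self, state_dim: int = 12, cmd_dim: int = 4, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + cmd_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, cmd: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, cmd], dim=-1))

def choose_command(model: DeformationModel, state: torch.Tensor,
                   target: torch.Tensor, n_samples: int = 256) -> torch.Tensor:
    """One-step sampling planner: try random pressure commands and keep the
    one whose predicted next state is closest to the target pose."""
    cmds = torch.rand(n_samples, 4)       # candidate commands in [0, 1]; 4 matches cmd_dim above
    states = state.expand(n_samples, -1)  # same current state for every candidate
    with torch.no_grad():
        preds = model(states, cmds)
    errs = ((preds - target) ** 2).sum(dim=-1)
    return cmds[errs.argmin()]

# Example with untrained weights, just to show the shapes involved.
model = DeformationModel()
cmd = choose_command(model, state=torch.zeros(1, 12), target=torch.ones(1, 12))
```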
Bio: I am a Doctor of Engineering candidate at the Ogata Laboratory, Waseda University, focusing on AI and robotics, specifically the design and control of inflatable robots using deep learning. Before academia, I worked for two years as a chatbot developer at an IT company in India, where I led the development of customer-facing chatbot solutions, improving user interfaces and experiences. I envision a future where robotics plays a significant role in society, creating safe, collaborative systems that enhance daily life and interactions with technology.