"Over the next 5 years, as AI becomes more robust against error, I personally expect to see more adaptability in robot applications. From self-optimization to advanced predictive circumvention of process and tool failures, I expect AI to principally impact the performance of processes already automated with robots." - Jeremy Marvel, Research Scientist & Project Leader, National Institute of Standards and Technology

Recent years have witnessed the birth of a new era in industrial robotics in which collaborative systems, designed to work safely beside the human workforce, are integrated into historically manual processes. Such technologies represent a relatively low-risk gateway solution for transitioning facilities and operations to a state of partial automation, but retain many of the characteristics of their non-collaborative predecessors.

At the AI in Industrial Automation Summit in San Francisco next week (June 28 - 29), Jeremy Marvel will be presenting his most recent work at NIST, discussing performance metrics for AI in manufacturing HRI (human-robot interaction). We caught up with Jeremy in advance of the summit to learn a bit more about his current research and work in the field.

Give me an overview of your work at NIST

I am a research scientist at the U.S. National Institute of Standards and Technology (NIST), and am leading a team of researchers to develop the metrology (i.e., test methods, metrics, and measurement systems) to assess and assure the collaborative performance of manufacturing robot systems. These efforts are targeted toward both human-robot and robot-robot collaborations, and include functions such as 1) collaborative robot safety, 2) coordinating motions in time and space to minimize forces and pressures, 3) communicating high-level, task-relevant information, and 4) maintaining situation awareness of both the robot and the human operator.
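To give a concrete flavor of the "forces and pressures" item above, here is a minimal sketch, not a NIST test method, in which the force, contact area, and limit values are placeholder numbers invented purely for illustration. It checks whether a measured quasi-static contact force over a given contact area stays below a pressure threshold:

```python
def contact_pressure(force_n: float, area_cm2: float) -> float:
    """Quasi-static contact pressure in N/cm^2 from a measured force and contact area."""
    return force_n / area_cm2

def within_limit(force_n: float, area_cm2: float, limit_n_per_cm2: float) -> bool:
    """True if the resulting contact pressure stays at or below the given limit."""
    return contact_pressure(force_n, area_cm2) <= limit_n_per_cm2

# Hypothetical reading from a force/pressure measurement device on a gripper edge;
# the 140 N/cm^2 limit is a placeholder, not a value quoted from any standard.
print(within_limit(force_n=65.0, area_cm2=1.2, limit_n_per_cm2=140.0))
```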

What are the main challenges in your current work and why do they exist?

A large portion of our work is focused on developing traceable and repeatable test methods and metrics to evaluate technologies and aspects of robot systems that either have heretofore defied measurement, or are only just beginning to emerge as viable, marketable solutions. The purpose of this metrology is to establish performance benchmarks, drive innovation and technology advancement, and provide mechanisms by which consumers can directly gauge solutions against their unique requirements. Such efforts require the coordination and cooperation of multiple stakeholders—including researchers, manufacturers, integrators, and end-users of robot technologies—all of whom have different perspectives, needs, and end-goals. This diversity necessarily leads to some challenges when seeking to establish consensus, but also significantly strengthens the resulting metrology once such consensus has been reached.

How are you using AI for a positive impact?

Machine learning and AI are tools we frequently use for system and process modeling, parameter optimization, and sensor fusion (i.e., combining multiple, seemingly disparate sensing technologies to gain insights about the world not directly measurable with a single sensor). AI can be a powerful tool to autonomously improve the performance of complex systems, but making sense of the influences and uncertainties associated with the input data, algorithm selection, and tuning parameters can be tricky, so we design and perform tests to quantify these factors. We then thoroughly document our approaches, describing in detail the algorithms and parameters we used, and rely on extensive testing to identify and measure the associated uncertainties, highlighting the strengths, weaknesses, and expected performance of such approaches. All of this information is then freely and publicly disseminated as white papers, conference papers, and archival journal articles. In some cases, we will also release source code or executable software libraries so others can test and evaluate our algorithms in their own environments. We do this to help make the adoption of advanced, collaborative technologies—particularly by small- and medium-sized manufacturers—both easier and less of a gamble.
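As a rough illustration of the sensor-fusion idea described above (a generic sketch, not NIST code; the sensor types and noise figures are invented), the snippet below combines two noisy distance estimates with inverse-variance weighting, so the more confident sensor carries more weight:

```python
import numpy as np

def fuse_measurements(values, variances):
    """Inverse-variance weighted fusion of independent sensor readings.

    Each reading is weighted by its confidence (1/variance); the fused
    variance is smaller than that of any single sensor.
    """
    values = np.asarray(values, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_value = np.sum(weights * values) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_value, fused_variance

# Hypothetical example: a laser scanner and a depth camera both estimate the
# distance (in meters) between a robot arm and a nearby operator.
laser_reading, laser_var = 1.52, 0.01**2     # precise, but narrow field of view
camera_reading, camera_var = 1.47, 0.05**2   # noisier, but wider coverage

distance, variance = fuse_measurements(
    [laser_reading, camera_reading], [laser_var, camera_var]
)
print(f"fused distance: {distance:.3f} m (std dev {variance**0.5:.3f} m)")
```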

What challenges have you had with implementing AI for human-robot interaction, and how have you overcome these?

A significant limitation of AI is that it is nondeterministic: the performance and quality of its output cannot be accurately predicted. In manufacturing environments, this uncertainty, especially when paired with human-robot interactions, can result in undesirable situations. The only way to avoid such situations is through a thorough risk assessment paired with extensive testing to identify and characterize errant behaviors. From this, we document the conditions that resulted in the potential hazard, specify appropriate safeguards to mitigate the risks, and re-assess the risks. The results of this risk assessment process are often shared with national and international standards bodies to raise public awareness.
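A minimal sketch of this kind of repeated testing might look like the following, assuming a hypothetical, nondeterministic plan_grasp routine standing in for an AI component; the failure rate and score distribution are invented for illustration:

```python
import random
import statistics

def plan_grasp(seed):
    """Hypothetical stand-in for a nondeterministic AI planner: returns a
    grasp-quality score, occasionally failing outright."""
    rng = random.Random(seed)
    if rng.random() < 0.03:          # rare errant behavior we want to surface
        raise RuntimeError("planner failed to converge")
    return 0.8 + rng.gauss(0, 0.05)  # nominal score with run-to-run variation

scores, failures = [], 0
for trial in range(1000):
    try:
        scores.append(plan_grasp(seed=trial))
    except RuntimeError:
        failures += 1

# Summarize output spread and failure rate to characterize the uncertainty.
print(f"trials: 1000, failures: {failures}")
print(f"mean score: {statistics.mean(scores):.3f}, "
      f"std dev: {statistics.stdev(scores):.3f}, min: {min(scores):.3f}")
```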

How do you see AI changing the landscape of robotics in the next 5 years?

From a human-robot interaction perspective, I anticipate AI advances leading to more responsive safety systems capable of identifying and appropriately responding to human operators. Moreover, I expect to see a broad spectrum of applications programmed using teach-by-example. On a longer horizon, say 10-15 years, I expect to see AI driving supportive collaborations with human operators in which the robot and person must coordinate their motions and balance their capabilities to accomplish a shared task objective.

Which other industries are you most interested in seeing benefit from AI in the next 5 years?

Outside of collaborative manufacturing automation, I am personally most interested in seeing how AI can benefit education and child development, especially in fields related to STEM. Interactive, educational tools that adapt to children as they learn in school and at home would be extremely beneficial for our youth. Such tools could provide individualized lesson plans to address specific needs or developmental goals, and would be an invaluable resource for our nation’s teachers and educational institutions.

There's so much discussion around the privacy and security issues that come along with applications of AI systems. Do you think this is a concern, and what should we do to ensure systems are safe?

I view AI as a tool. As with any tool, it ultimately comes down to how AI is used that determines whether or not there is cause for concern. In my field of research, the biggest issues of privacy and security are centered around the collection, storage, and representation of information. Human data such as work performance and process activities, for instance, must be captured and recorded in a way that the person’s identity (including any traits that could be used to infer the person’s identity) is anonymized. Security is a larger challenge, having more to do with information integrity than with the protection of sensitive information: there exists the potential for large swaths of data to be corrupted, or processes to be led astray, by malfunctioning AI. In safety-related functions, such errors could prove dangerous. As such, system redundancy and software/hardware checks are often required to ensure the integrity of system safety and process functionality.
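As one possible illustration of the anonymization point (not a NIST procedure; the field names, salt handling, and pseudonym length are assumptions for this sketch), operator identifiers can be replaced with salted, one-way pseudonyms before performance records are stored:

```python
import hashlib
import hmac
import os

# Secret salt kept separate from the stored records; in practice it would be
# managed by the organization, not hard-coded (an assumption for this sketch).
SALT = os.environ.get("ANON_SALT", "replace-with-a-managed-secret").encode()

def pseudonymize(operator_id: str) -> str:
    """One-way, salted pseudonym so stored performance data cannot be tied
    back to a named operator without access to the salt."""
    return hmac.new(SALT, operator_id.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical performance record with the identity removed before storage.
record = {
    "operator": pseudonymize("jane.doe"),
    "task": "kitting_station_3",
    "cycle_time_s": 42.7,
}
print(record)
```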

Keen to learn more from Jeremy? Join us next week in San Francisco to meet global leaders in the field. There are a limited number of tickets remaining, so register now to guarantee your place at the summit. Additional confirmed speakers include: Greg Kinse (Hitachi), Shameer Mirza (PepsiCo), Benjamin Hodel (Caterpillar Inc.), Alicia Kavelaara (Offworld), and many more.