Research Floor

The partner universities of the CoAI JRC have an extensive array of resources to enable state-of-the-art research and innovation in cooperative and cognition-enabled AI and robotics. We are committed to making these resources available to the broader community, including offering digital twins in our Virtual Research and Training Building wherever feasible. Click on the links below to find out more about what we currently have to offer.

Members of the CoAI JRC contribute to a number of exciting research challenge missions.

Our highlight: the EASE household challenge, in which you can participate using the IntEL4CoRo researcher workbench.

The field of CoAI lies at the intersection of multiple research areas, and advances in the field are achieved by conducting cutting-edge research on several different but interrelated topics.

Our highlight: the Robots Preparing Meals project, which deals with the problem of teaching robots how to execute unknown tasks in the household domain.

The CoAI JRC member universities are home to numerous research labs for conducting cutting-edge research on topics that are central to cooperative and cognition-enabled AI and robotics. At the CoAI JRC we are dedicated to making these resources available to the broader research community, including offering complete digital twins of laboratories wherever feasible.

Our highlight: a collection of research challenge laboratories awaits you in the Interactive Research Space!

Apart from research laboratories, CoAI JRC members make a large and growing collection of Open Software and Open Data resources available to the broader research community.

Our highlight: explore our Open Software Library and find exciting opportunities for your research!

Open Research Challenge Missions

Members of the CoAI JRC contribute to a number of exciting research challenge missions. You may find out more about them here:

EASE household challenge mission

You can participate in the EASE household challenge using the IntEL4CoRo researcher workbench.

The EASE household challenge concerns the design and realization of a control program that enables a robot agent to perform daily household chores. The chores will be organized in an everyday activity loop. The main scenario is to perform the daily activities three times in a “robot day”: preparing breakfast, lunch, and dinner, and keeping the apartment in order.

You can participate by writing your own robot control system or by building on the foundation of the open reference control system developed in the collaborative research center EASE (Everyday Activity Science and Engineering). The reference system is available as a cloud service, as a downloadable software container, or for installation from open-source software libraries.

The challenge aims at advancing the skills and cognitive capabilities of autonomous household robots. You can read the full challenge description here.

Robots learning from instruction videos

Computational models that are general and flexible enough to accomplish everyday manipulation tasks for any object and any purpose would be a disruptive breakthrough in the creation of versatile, general-purpose robot agents. This is a grand challenge for AI and robotics.

Humans are able to accomplish tasks such as “cut up the fruit” for many types of fruit by generating a large variety of context-specific manipulation behaviors. We can typically do so on the first attempt, even with novel objects in uncertain physical conditions. This is because we can reason comprehensively about the possible consequences of different behaviors even before we begin physically interacting with the real world.

The challenge: Take categories of meal preparation tasks such as “cutting up fruit” and learn the necessary manipulation skills and task knowledge from reading instructions on the web and from watching instruction videos.

Retail robotics challenge mission

The retail robotics challenge complements the household challenge, in that both require service robots to deploy themselves. This includes acquiring environment models that they can use to perform their primary tasks, such as refilling shelves, reorganizing shelves, picking up customer orders, and assisting customers.

This goal is within technical reach because retail stores – especially those of retail chains – provide a favorable environment: They are often structured using standardized shelf systems, their products are designed to be distinctive and recognizable for customers, and their products are often also available online, which enables robots to mine webstores to obtain knowledge.

A digital twin of the EASE retail robotics laboratory will soon be available in the interactive version of the CoAI Virtual Research and Training Building.

Co-constructive task learning

The Co-constructive Task Learning research initiative focuses on interactions in which a robot learns a new task or skill from a human tutor in the same way human children learn from their parents. Such learning paradigmatically involves co-construction: both parent and child play an active role in shaping the meaning, goals, and structure of the learning interaction.

The objective of the research initiative is to develop an analogous co-constructive task learning approach for human-robot learning interactions. In a co-constructive approach, human tutor and robot learner give each other feedback to establish and preserve a shared understanding of the task, its purpose, and the steps needed to carry it out. You can read more about the Co-constructive Task Learning initiative here.

Question answering over linked data

An artificial system that can cooperate effectively must be able to ask its human partner questions, and to understand the questions it is asked in return. Originally established at Bielefeld University, the Question Answering over Linked Data (QALD) Challenge is a series of evaluation campaigns in which competitors design systems that take questions posed in an ordinary human language, such as “Who is the mayor of New York city?” or “Which films starring Clint Eastwood did he direct himself?”, and find answers to them online in linked datasets such as Wikidata and DBpedia. Iterations of the competition focus on different sorts of questions, such as comparatives (“Who is richer, Bill Gates or Jeff Bezos?”), posed in multiple languages such as English, German, Chinese, and Russian. You can read more about the QALD Challenge here.
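Systems of this kind typically work by translating the natural-language question into a structured query (commonly SPARQL) over the linked dataset. The sketch below illustrates that idea with a single toy template; the template, mapping rules, and `question_to_sparql` function are hypothetical illustrations, not part of any actual QALD submission.

```python
# Toy illustration of the core idea behind QALD-style systems:
# mapping a natural-language question pattern to a SPARQL query.
# (PREFIX declarations for dbr:/dbo: are omitted for brevity.)

QUESTION_TEMPLATES = {
    "who is the mayor of {city}?": (
        "SELECT ?mayor WHERE {{\n"
        "  dbr:{city_resource} dbo:mayor ?mayor .\n"
        "}}"
    ),
}

def question_to_sparql(question: str) -> str:
    """Map one known question pattern to a SPARQL query (toy example)."""
    q = question.strip().lower()
    prefix = "who is the mayor of "
    if q.startswith(prefix) and q.endswith("?"):
        city = q[len(prefix):-1].strip()
        # Derive a DBpedia-style resource name, e.g. "New_York_City"
        city_resource = city.title().replace(" ", "_")
        return QUESTION_TEMPLATES["who is the mayor of {city}?"].format(
            city_resource=city_resource
        )
    raise ValueError(f"No template matches: {question!r}")

print(question_to_sparql("Who is the mayor of New York City?"))
```

Real QALD systems replace the hard-coded template with learned or rule-based semantic parsing and entity linking, but the question-to-query pipeline is the same shape.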

CoAI Research Topics

The field of CoAI lies at the intersection of multiple research areas, and advances in the field are achieved by conducting cutting-edge research on several different but interrelated topics.

Cognition-enabled motion control

Current research explores how the constraint-based task specification of Giskard can be applied to complex robotic hands and whole-body grasping motions with a variety of robots. In the SUTURO student project, Giskard is applied to household tasks such as pouring cereal and picking up cutlery from a table. Another branch of current research is focused on using Giskard in closed-loop control cycles with sensor feedback or visual perception. One example is the control of force closure when picking up a box with an anthropomorphic hand.

This video shows the capabilities of Giskard, an open-source constraint and optimization-based whole-body motion planning and control framework. It includes a world representation inspired by Kineverse, which allows it to plan and execute whole-body movements for the robot that are specified as goal states for the articulated environment. The internal quadratic optimization problem includes a model predictive control scheme that allows Giskard to minimize the joint velocity, acceleration and jerk over a prediction horizon.
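The smoothness objective mentioned above can be illustrated with a small sketch: given a discretized joint trajectory over the prediction horizon, squared velocity, acceleration, and jerk are obtained by finite differences and summed with weights. This is an illustrative approximation of that kind of cost, not Giskard's actual formulation; the function name, weights, and discretization are assumptions.

```python
import numpy as np

def smoothness_cost(q, dt, w_vel=1.0, w_acc=1.0, w_jerk=1.0):
    """Weighted sum of squared joint velocity, acceleration, and jerk
    over a trajectory q of shape (horizon, n_joints).

    Finite-difference illustration of the kind of objective a
    model-predictive controller minimizes; not Giskard's actual QP."""
    vel = np.diff(q, axis=0) / dt      # first derivative
    acc = np.diff(vel, axis=0) / dt    # second derivative
    jerk = np.diff(acc, axis=0) / dt   # third derivative
    return (w_vel * np.sum(vel**2)
            + w_acc * np.sum(acc**2)
            + w_jerk * np.sum(jerk**2))

# A smooth trajectory scores lower than an abrupt one:
t = np.linspace(0.0, 1.0, 20)[:, None]
smooth = 0.5 * (1 - np.cos(np.pi * t))   # eases in and out
abrupt = np.where(t < 0.5, 0.0, 1.0)     # step change mid-horizon
print(smoothness_cost(smooth, dt=0.05) < smoothness_cost(abrupt, dt=0.05))  # True
```

Penalizing jerk in addition to velocity and acceleration is what makes the resulting motions gentle enough for contact-rich household manipulation.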

Robot Perception

The goal is to develop a cognition-enabled robot perception framework that can perform a range of dynamically created perception task requests needed to autonomously accomplish everyday manipulation tasks such as setting the table or making popcorn. The perception system is to operate without delaying robot action, and results must be “actionable” so that they can be translated into robot body motions.

The perception framework must also be able to monitor the manipulation actions during their execution. Finally, it must be able to detect and recognize humans, estimate their poses, and understand their actions. Visit the website of RoboKudo for more information.

Plan-based Robot Control

CRAM (Cognitive Robot Abstract Machine) is a software toolbox for the design, implementation, and deployment of cognition-enabled autonomous robots performing everyday manipulation activities. CRAM equips autonomous robots with lightweight reasoning mechanisms that can infer control decisions rather than requiring the decisions to be preprogrammed. CRAM is available as several ROS packages and stacks, and the CRAM Plan Language (CPL) and its extensions are documented here.
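The idea of inferring control decisions at run time rather than preprogramming them can be sketched as follows. The rules and names below are hypothetical Python illustrations, loosely inspired by CRAM's action designators; they are not part of the actual CRAM (Lisp) API.

```python
# Hedged sketch: an abstract action description is resolved into
# concrete control parameters by consulting rules at run time,
# instead of hard-coding the parameters into the control program.

def resolve_grasp(action):
    """Infer grasp parameters from an abstract action description."""
    obj = action["object"]
    # Decision rules consulted at run time:
    if obj.get("fragile"):
        grip_force = "low"
    else:
        grip_force = "normal"
    if obj.get("shape") == "cylinder":
        grasp_type = "wrap"
    else:
        grasp_type = "pinch"
    return {"grip_force": grip_force, "grasp_type": grasp_type}

action = {"type": "pick-up",
          "object": {"name": "glass", "fragile": True, "shape": "cylinder"}}
print(resolve_grasp(action))  # {'grip_force': 'low', 'grasp_type': 'wrap'}
```

The same abstract "pick-up" request yields different motions for a glass than for a cardboard box, which is the point: the plan stays general while the decisions adapt to context.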

Robots preparing meals: this project deals with the problem of teaching robots how to execute unknown tasks in the household domain. The focus of this repository is on meal preparation, especially cutting fruits and vegetables and pouring liquids, among others.

Robot knowledge representation and reasoning

The partner universities of the CoAI JRC have several ways to help you engage in research on knowledge representation and reasoning for cognition-enabled robotic agents.

  • The openEASE robot experiment lab, which hosts robot experiments and makes them reproducible via the openEASE web-based knowledge service.
  • KnowRob, the knowledge representation and reasoning framework for robotics with the highest academic impact.
  • SOMA, a powerful machine-interpretable encyclopaedia (ontology) for cognition-enabled robotics, which you can both use and contribute to.

Human activity interpretation and understanding

The objective of RobCog is to equip robots with skills they need to perform everyday activity tasks. To achieve this, users play virtual reality games which ask them to perform activities such as table setting, pouring, wiping, or cutting. These actions can be logged as part of an episodic memory. The memories (or NEEMs) will equip robots with commonsense and naive physics knowledge that is crucial for the ability to query and learn everyday activity tasks.
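What such an episodic memory record might contain can be sketched as follows; the class and field names below are illustrative assumptions, not the actual NEEM schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of logging VR activity as an episodic memory:
# each event records what was done, by whom, with which objects, and when,
# so that the memory can later be queried for learning.

@dataclass
class EpisodeEvent:
    action: str    # e.g. "pouring"
    agent: str     # who performed it
    objects: list  # objects involved
    start: float   # timestamps in seconds
    end: float

@dataclass
class EpisodicMemory:
    task: str
    events: list = field(default_factory=list)

    def log(self, event: EpisodeEvent):
        self.events.append(event)

    def query(self, action: str):
        """Retrieve all events of a given action type."""
        return [e for e in self.events if e.action == action]

memory = EpisodicMemory(task="table setting")
memory.log(EpisodeEvent("pouring", "user1", ["mug", "kettle"], 2.0, 5.5))
memory.log(EpisodeEvent("wiping", "user1", ["sponge"], 6.0, 9.0))
print(len(memory.query("pouring")))  # 1
```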

In the SprachSpielLabor at Paderborn University, researchers investigate (among other things) child language development, with special attention to the ways pragmatic frames help children learn and the ways pragmatic frames can be used as research tools for studying child development. Here you can learn about a recent study conducted in the lab in which pragmatic frames are exploited to provide a more nuanced and enlightening way to evaluate children’s performance in a novel word learning task.

Human-robot communication and learning

Smooth multimodal communication between humans and artificial systems is an essential component of cooperative and cognition-enabled AI. Flexdiam is a state-of-the-art platform for managing multimodal dialog between a human and an embodied digital agent, developed by CoAI JRC members Stefan Kopp, Hendrik Buschmeier, and colleagues.

The Joint Simon Effect (JSE) in human-robot interaction is a phenomenon that suggests humans work more effectively in joint tasks with humanlike robots than with mechanomorphic ones, mentally representing the task from their robot partner’s point of view much as they would with an ordinary human partner. Here you can find more information about the Joint Simon Effect, as well as software and detailed instructions for replicating the JSE in your own lab, a contribution to open and reproducible science.

In the SprachSpielLabor, CoAI JRC members Katharina Rohlfing, Angela Grimminger, and colleagues investigate human-robot interaction with a special focus on how children and robots learn from each other. Here you can read more about a recent study in which children taught a robot new words by reading a children’s book together.

Knowledge Representation

eTaRDiS is a VR-based tool for the immersive exploration of knowledge graphs. The first tool of its kind, eTaRDiS provides a new technology and methodology for integrating knowledge graphs with virtual reality. This technology is foundational for the development of CoAI: photo-realistic simulations of knowledge graphs representing common sense knowledge about the affordances of objects and the goals of actions can help guide a system in cooperative joint action with a human partner.

Research Laboratories

The CoAI JRC member universities are home to numerous research labs for conducting cutting-edge research on topics that are central to cooperative and cognition-enabled AI and robotics. At the CoAI JRC we are dedicated to making these resources available to the broader research community, including offering complete digital twins of laboratories wherever feasible.

You can find a collection of research challenge laboratories here. This area is still under construction. More interactive virtual labs will be added soon.

Cognition-enabled Robotics

The Institute for Artificial Intelligence at the University of Bremen has a fully equipped new research laboratory with a kitchen and furniture. There is also an exact digital image of this lab, known as a “digital twin.”

Labs at Bielefeld University for investigating everyday human manipulation actions and translating these insights into human-inspired robot control strategies.

Human Cognition, Communication and Learning

A 1-bedroom apartment at Bielefeld University equipped with ambient smart devices and a service robot for conducting experiments on communication between humans, robots, and virtual agents.

Labs at Paderborn University for investigating pragmatic frames in human-human and human-robot learning interactions.

A collaborative research initiative developing a new approach for robots to learn tasks and skills from human instructors using co-construction.

The Biosignals Lab focuses on human-centered technologies and applications based on biosignals, such as capturing, recognizing, and interpreting speech, muscle, and brain activity.

Open Resources

Apart from research laboratories, CoAI JRC members make a large and growing collection of Open Software and Open Data resources available to the broader research community:

From cognitive robot architectures to a fast implementation of random forests for high-dimensional data, take a look at these collected contributions to Open Software.

Ontological models, the openEASE web-based knowledge service, the HOBBIT distributed benchmarking platform, and much more can be found in this section.

Please contact us for further information about the CoAI JRC.