# Overview
Minds live in bodies, and bodies move through a changing world. The goal of embodied artificial intelligence is to create agents, such as robots, which learn to creatively solve challenging tasks requiring interaction with the environment. While this is a tall order, fantastic advances in deep learning and the increasing availability of large datasets like ImageNet have enabled superhuman performance on a variety of AI tasks previously thought intractable. Computer vision, speech recognition and natural language processing have experienced transformative revolutions at passive input-output tasks like language translation and image processing, and reinforcement learning has similarly achieved world-class performance at interactive tasks like games. These advances have supercharged embodied AI, enabling a growing collection of researchers to make rapid progress towards intelligent agents which can:
- See: perceive their environment through vision or other senses.
- Talk: hold a natural language dialog grounded in their environment.
- Listen: understand and react to audio input anywhere in a scene.
- Act: navigate and interact with their environment to accomplish goals.
- Reason: consider and plan for the long-term consequences of their actions.
The goal of the Embodied AI workshop is to bring together researchers from computer vision, language, graphics, and robotics to share and discuss the latest advances in embodied intelligent agents. The overarching theme of this year's workshop is Open World Embodied AI: Being an embodied agent in a world that contains objects and concepts unseen during training. This theme applies the “open set” problem of many individual tasks to embodied AI as a whole. We feel that truly effective embodied AI agents should be able to deal with tasks, objects, and situations markedly different from those that they have been trained on. This umbrella theme is divided into three topics:
- Embodied Mobile Manipulation We go places to do things, and to do things we have to go places. Many interesting embodied tasks combine manipulation and navigation to solve problems that cannot be done with either manipulation or navigation alone. This builds on embodied navigation and manipulation topics from previous years and makes them more challenging.
- Generative AI for Embodied AI Generative AI isn't just a hot topic; it's an important tool researchers are using to support embodied artificial intelligence research. Topics such as generative AI for simulation, generative AI for data generation, and generative AI for policies (e.g., diffusion policies and world models) are of great interest.
- Language Model Planning When we go somewhere to do something, we do it for a purpose. Language model planning uses large language models (LLMs), vision-language models (VLMs), and multimodal foundation models to turn arbitrary language commands into plans and action sequences, a key capability for making embodied artificial intelligence systems useful for performing tasks in open worlds.
The Embodied AI 2024 workshop will be held in conjunction with
CVPR 2024 in Seattle, Washington. It will feature a host of invited talks covering a variety of topics in Embodied AI, many exciting Embodied AI challenges, a poster session, and panel discussions. For more information on the Embodied AI Workshop series, see our
Retrospectives paper on the first three years of the workshop.
# Timeline
Workshop Announced
March 29, 2024
Paper Submission Deadline
May 4th, 2024 (Anywhere on Earth)
Paper Notification Deadline
May 13th, 2024
Challenge Submission Deadlines
May 2024. Check each challenge for the specific date.
Fifth Annual Embodied AI Workshop at CVPR
June 18, 2024
Challenge Winners Announced
June 18, 2024 at the workshop. Check each challenge for specifics.
# Workshop Schedule
Embodied AI will be a
hybrid workshop, with both in-person talks and streaming via Zoom.
- Workshop Talks: 8:50AM-5:30PM PT - TBD
- Poster Session: 1:00PM-2:00PM PT - TBD
Zoom information is available on
the CVPR virtual platform for registered attendees.
Remote and in-person attendees are welcome to ask questions via Slack.
Workshop Introduction: Embodied AI (TBD)
8:50 - 9:00 AM PT
Moderator: Anthony Francis, Logical Robotics
Navigation & Social Challenge Presentations (MultiOn, HAZARD, PRS Challenge)
9:00 - 9:30 AM PT
- 9:00: MultiOn
- 9:10: HAZARD
- 9:20: PRS Challenge
Navigation & Social Challenge Q&A Panel
9:30 - 10:00 AM PT
Invited Talk - Generative AI for Embodied AI: TBD
10:00 - 10:30 AM PT
Ani Kembhavi is the Senior Director of Computer Vision at the Allen Institute for AI (AI2) in Seattle, and is also an Affiliate Associate Professor at the Computer Science & Engineering department at the University of Washington. His work over two decades spans computer vision, robotics, and natural language processing.
Aniruddha Kembhavi will be speaking on Generative AI for Embodied AI, especially the ProcTHOR procedural generation system.
Invited Talk - Generative AI for Embodied AI: TBD
10:30 - 11:00 AM PT
This talk will discuss generative AI for Embodied AI.
Invited Talk - Language Model Planning: Title TBD
11:00 - 11:30 AM PT
Brian Ichter
Physical Intelligence
Brian Ichter is one of the founders of Physical Intelligence. At Google Brain, he pioneered work on language model planning for robotic control.
Brian Ichter will share his thoughts on using language models for robotic control.
Sponsor Talk - Project Aria: Augmented Reality for Embodied AI
11:30 AM - 12:00 NOON PT
Project Aria glasses gather information from the user’s perspective for egocentric research in machine perception and augmented reality.
Project Aria will share some details of the use of Aria devices in the field of embodied AI.
Lunch
Location TBD
12:00 NOON - 1:00 PM PT
Accepted Papers Poster Session
Location TBD
1:00 PM - 2:00 PM PT
Mobile Manipulation Challenge Presentations (ManiSkill, ARNOLD, HomeRobot OVMM)
2:00 - 2:30 PM PT
- 2:00: ManiSkill
- 2:10: ARNOLD
- 2:20: HomeRobot OVMM
Mobile Manipulation Challenge Q&A Panel
3:00 - 3:30 PM PT
Invited Talk - Embodied Mobile Manipulation: Robotics and Embodied Artificial Intelligence
3:30 - 4:00 PM PT
Shuran Song
Stanford University
Shuran Song leads the Robotics and Embodied AI Lab at Stanford University ( REAL@Stanford ). She is interested in developing algorithms that enable intelligent systems to learn from their interactions with the physical world, and autonomously acquire the perception and manipulation skills necessary to execute complex tasks and assist people.
Shuran Song will be speaking on Embodied Mobile Manipulation.
Invited Talk - Embodied Mobile Manipulation: Open Vocabulary Mobile Manipulation
4:00 - 4:30 PM PT
Chris Paxton is a robotics research scientist on the Embodied AI team at FAIR Labs. His work has looked at how we can make robots into useful, general-purpose mobile manipulators in homes.
Chris will discuss his work on enabling robots to work alongside humans to perform complex, multi-step tasks, using a combination of learning and planning. In particular, he will discuss the open-vocabulary mobile manipulation challenge, or OVMM.
Invited Talk - Humanoid Robots: Foundation Models for Humanoid Robots
4:30 - 5:00 PM PT
Eric leads the AI team at 1X Technologies, a vertically-integrated humanoid robot company. His research background is in end-to-end mobile manipulation and generative models. Eric recently authored a book on the future of AI and robotics, titled “AI is Good for You”.
1X’s mission is to create an abundant supply of physical labor through androids that work alongside humans. I will share some of the progress 1X has been making towards general-purpose mobile manipulation. We have scaled up the number of tasks our androids can perform…
Invited Speaker Panel
5:00 - 5:30 PM PT
Claudia Pérez D’Arpino
NVIDIA
Workshop Concludes
5:30 PM PT
# Demos
In association with the Embodied AI Workshop, our partners and sponsors will present demos, with dates and times TBD.
# Challenges
The Embodied AI 2024 workshop is hosting many exciting challenges covering a wide range of topics such as rearrangement, visual navigation, vision-and-language, and audio-visual navigation. More details regarding data, submission instructions, and timelines can be found on the individual challenge websites.
The workshop organizers will award each first-prize challenge winner a cash prize, sponsored by Logical Robotics and our other sponsors.
Challenge winners may be given the opportunity to present during their challenge's presentation at the workshop. Since many challenges can be grouped into similar tasks, we encourage participants to submit models to more than one challenge. The table below describes, compares, and links each challenge.
# Call for Papers
We invite high-quality 2-page extended abstracts on embodied AI, especially in areas relevant to the themes of this year's workshop:
- Open-World AI for Embodied AI
- Generative AI for Embodied AI
- Embodied Mobile Manipulation
- Language Model Planning
as well as themes related to embodied AI in general:
- Simulation Environments
- Visual Navigation
- Rearrangement
- Embodied Question Answering
- Embodied Vision & Language
Accepted papers will be presented as posters or spotlight talks at the workshop. These papers will be made publicly available in a non-archival format, allowing future submission to archival journals or conferences. Paper submissions do not have to be anonymized. Per
CVPR rules regarding workshop papers, at least one author must register for CVPR using an in-person registration.
Submission
The submission deadline is May 4th (Anywhere on Earth). Papers should be no longer than 2 pages (excluding references) and styled in the CVPR format.
Accepted Papers
# Sponsors
The Embodied AI 2024 Workshop is sponsored by the following organizations:
# Organizers
The Embodied AI 2024 workshop is a joint effort by a large set of researchers from a variety of organizations. Each year, a set of lead organizers takes point coordinating with the CVPR conference, backed up by a large team of workshop organizers, challenge organizers, and scientific advisors.
Lead Organizers
Anthony FrancisLogical Robotics
Claudia Pérez D’ArpinoNVIDIA
Organizing Committee
Anthony FrancisLogical Robotics
Claudia Pérez D’ArpinoNVIDIA
Oleksandr MaksymetsMeta AI
Sören PirkKiel University
Challenge Organizers
Dhruv BatraGaTech, Meta AI
Oleksandr MaksymetsMeta AI
Tommaso CampariSFU, UNIPD
Scientific Advisory Board
Aniruddha KembhaviAI2, UW
Dhruv BatraGaTech, Meta AI
Roberto Martín-MartínStanford