Embodied AI Workshop
CVPR 2024


Attending

The Fifth Annual Embodied AI Workshop will be held Tuesday, June 18 from 8:50am to 5:30pm Pacific in conjunction with CVPR 2024.
  • The physical workshop will be held in meeting room Summit 428.
  • The physical poster session will be held in room Arch 4E posters 50-81.
  • The workshop will also be on Zoom for CVPR virtual attendees.
Remote and in-person attendees are welcome to ask questions via Slack: please join us at Embodied AI #5!


Overview

Minds live in bodies, and bodies move through a changing world. The goal of embodied artificial intelligence is to create agents, such as robots, which learn to creatively solve challenging tasks requiring interaction with the environment. While this is a tall order, fantastic advances in deep learning and the increasing availability of large datasets like ImageNet have enabled superhuman performance on a variety of AI tasks previously thought intractable. Computer vision, speech recognition and natural language processing have experienced transformative revolutions at passive input-output tasks like language translation and image processing, and reinforcement learning has similarly achieved world-class performance at interactive tasks like games. These advances have supercharged embodied AI, enabling a growing collection of researchers to make rapid progress towards intelligent agents which can:

  • See: perceive their environment through vision or other senses.
  • Talk: hold a natural language dialog grounded in their environment.
  • Listen: understand and react to audio input anywhere in a scene.
  • Act: navigate and interact with their environment to accomplish goals.
  • Reason: consider and plan for the long-term consequences of their actions.

The goal of the Embodied AI workshop is to bring together researchers from computer vision, language, graphics, and robotics to share and discuss the latest advances in embodied intelligent agents. The overarching theme of this year's workshop is Open World Embodied AI: Being an embodied agent in a world that contains objects and concepts unseen during training. This theme applies the “open set” problem of many individual tasks to embodied AI as a whole. We feel that truly effective embodied AI agents should be able to deal with tasks, objects, and situations markedly different from those that they have been trained on. This umbrella theme is divided into three topics:

  • Embodied Mobile Manipulation: We go places to do things, and to do things we have to go places. Many interesting embodied tasks combine manipulation and navigation to solve problems that neither manipulation nor navigation can solve alone. This builds on the embodied navigation and manipulation topics of previous years and makes them more challenging.
  • Generative AI for Embodied AI: Generative AI isn't just a hot topic; it's an important tool researchers are using to support embodied artificial intelligence research. Topics such as generative AI for simulation, generative AI for data generation, and generative AI for policies (e.g., diffusion policies and world models) are of great interest.
  • Language Model Planning: When we go somewhere to do something, we do it for a purpose. Language model planning uses large language models (LLMs), vision-language models (VLMs), and multimodal foundation models to turn arbitrary language commands into plans and action sequences, a key capability for making embodied AI systems useful in open worlds. A minimal sketch of this idea appears below.
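To make the language-model-planning topic concrete, here is a minimal sketch of the core idea: a language model is prompted to decompose a natural-language command into a sequence of primitive robot actions. The `query_llm` callable and the action primitives below are hypothetical placeholders for whatever model and robot API a real system would use; this is not any particular workshop or challenge system.

```python
# Minimal language-model-planning sketch: an LLM decomposes a command into
# primitive actions. `query_llm` and the primitives are hypothetical.

ACTION_PRIMITIVES = ["go_to", "pick_up", "place_on", "open", "close"]

PROMPT = """You control a mobile manipulator.
Allowed actions: {actions}, each taking one object argument, e.g. go_to(mug).
Command: {command}
Respond with one action per line and nothing else."""

def plan(command: str, query_llm) -> list[str]:
    """Ask a language model for a plan, keeping only well-formed steps."""
    response = query_llm(PROMPT.format(actions=", ".join(ACTION_PRIMITIVES),
                                       command=command))
    steps = []
    for line in response.splitlines():
        line = line.strip()
        # Filter malformed or hallucinated steps; a real system would also
        # check affordances against the observed scene and replan on failure.
        if line.split("(")[0] in ACTION_PRIMITIVES and line.endswith(")"):
            steps.append(line)
    return steps

# Example: plan("put the mug in the sink", my_llm) might return
# ["go_to(mug)", "pick_up(mug)", "go_to(sink)", "place_on(sink)"]
```

Real systems layer closed-loop feedback, affordance grounding, and multimodal observations on top of this text-to-plan step; the sketch shows only the core decomposition.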
The Embodied AI 2024 workshop will be held in conjunction with CVPR 2024 in Seattle, Washington. It will feature a host of invited talks covering a variety of topics in Embodied AI, many exciting Embodied AI challenges, a poster session, and panel discussions. For more information on the Embodied AI Workshop series, see our Retrospectives paper on the first three years of the workshop.



Timeline

Workshop Announced
March 29, 2024
Paper Submission Deadline
May 4th, 2024 (Anywhere on Earth)
Paper Notification Deadline
May 29th, 2024
Challenge Submission Deadlines
May 2024. Check each challenge for the specific date.
Fifth Annual Embodied AI Workshop at CVPR
Seattle Convention Center
Tuesday, June 18, 2024
8:50 AM - 5:30 PM PT
Summit 428
Challenge Winners Announced
June 18, 2024 at the workshop. Check each challenge for specifics.


Workshop Schedule

Embodied AI will be a hybrid workshop, with both in-person talks and streaming via Zoom.
  • Workshop Talks: 8:50AM-5:30PM PT - Summit 428
  • Poster Session: 1:00PM-2:00PM PT - Arch 4E posters 50-81
Zoom information is available on the CVPR virtual platform for registered attendees.
Remote and in-person attendees are welcome to ask questions via Slack: please join us at Embodied AI #5!

  • Workshop Introduction: Embodied AI
    8:50 - 9:00 AM PT
    Location: Summit 428
    Moderator - Anthony Francis
    Logical Robotics
  • Navigation & Social Challenge Presentations
    (MultiOn, HAZARD, PRS Challenge)
    9:00 - 9:30 AM PT
    • 9:00: MultiOn
    • 9:10: HAZARD
    • 9:20: PRS Challenge
  • Navigation & Social Challenge Q&A Panel
    9:30 - 10:00 AM PT
  • Invited Talk - Generative AI for Embodied AI:
    The Blueprint for Truly Generalizable Robots: Scale, Scale, and Scale
    10:00 - 10:30 AM PT
    Aniruddha Kembhavi
    AI2

    Ani Kembhavi is the Senior Director of Computer Vision at the Allen Institute for Artificial Intelligence (AI2) in Seattle. He is also an Affiliate Associate Professor in the Computer Science & Engineering department at the University of Washington. He obtained his PhD at the University of Maryland, College Park, and spent five years at Microsoft. His research interests lie at the intersection of computer vision, natural language processing, and embodiment. His work has been awarded a Best Paper Award at CVPR 2023, an Outstanding Paper Award at NeurIPS 2022, an AI2 Test of Time Award in 2020, and an NVIDIA Pioneer Award in 2018.

    Ani will speak about his team's recent advances showing how scaling simulation data enables masterful navigation and manipulation agents that work in the real world without any adaptation or finetuning.
  • Invited Panel - Advancing Embodied AI
    Towards Seamless Integration of Perception and Action
    10:30 - 11:00 AM PT
    Stevie Bathiche
    Technical Fellow, Microsoft Applied Science Group (E+D)
    Ade Famoti
    Senior Director & Principal PM – Microsoft Research
    Ashley Llorens
    Corporate Vice President & MD – Microsoft Research
    Olivia Norton
    CTO, Sanctuary AI
    Embodied Artificial Intelligence (AI) represents a pivotal frontier in the quest to endow machines with capabilities to perceive, reason, and act in complex environments. The panel will delve into the multifaceted research landscape shaping the future…
  • Invited Talk - Language Model Planning:
    Foundation Models for Robotics and Robotics for Foundation Models
    11:00 - 11:30 AM PT
    Brian Ichter
    Physical Intelligence

    Brian recently founded Physical Intelligence, a company focused on scaling robotics and foundation models. Prior to that Brian was a Research Scientist at Google DeepMind on the Robotics team and received his PhD from Stanford. Generally, his research interests lie in enabling mobile robotic systems to perform complex skills and plan long-horizon tasks in real-world environments through machine learning and large-scale models.

    Foundation models have a number of properties that are promising for robotics, and robotics has a number of lessons that can help improve foundation models. This talk will cover several recent works along these axes, highlighting both their benefits…
  • Invited Talk - Project Aria from Meta:
    The Path to Always-on Contextual AI
    11:30 AM - 12:00 NOON PT
    Richard Newcombe
    Meta

    Richard Newcombe is VP of Research Science at Meta Reality Labs, where he leads the Surreal team in Reality Labs Research. The Surreal team has developed key technologies for always-on 3D device location, scene understanding, and contextual AI, and pioneered Project Aria, a new generation of machine perception glasses that provides new kinds of data for egocentric multimodal and contextual AI research. Richard received his undergraduate degree in Computer Science and his master's in Robotics and Intelligent Machines from the University of Essex in England, and his PhD from Imperial College London, followed by a postdoc at the University of Washington. He went on to co-found Surreal Vision, Ltd., which was acquired by Meta in 2015. As a research scientist, his original work introduced the dense SLAM paradigm demonstrated in KinectFusion and DynamicFusion, which influenced a generation of real-time and interactive systems in AR/VR and robotics by enabling systems to efficiently understand the geometry of their environment. Richard received the best paper award at ISMAR 2011, the best demo award at ICCV 2011, the best paper award at CVPR 2015, and the best robotic vision paper award at ICRA 2017. In 2021, he received the ICCV Helmholtz award for his research with DTAM, and the ISMAR and UIST test of time awards for KinectFusion.

    In this session, Richard will share Meta's vision towards building Always-on Contextual AI. Project Aria will be introduced in the session as a research tool to gather data from users' perspectives to accelerate machine perception and AI research.
  • Lunch
    Location: Summit ExHall
    12:00 NOON - 1:00 PM PT
  • Accepted Papers Poster Session
    1:00 PM - 2:00 PM PT
    Location: Arch 4E posters 50-81
    • Poster numbers are your paper number plus 50; e.g., paper 1 is poster 51.
    • Materials for attaching posters to the poster stands will be provided on-site.
    • Posters can only be put up during our allotted time.
  • Manipulation and Vision Challenge Presentations
    (ManiSkill, ARNOLD, HomeRobot OVMM)
    Location: Summit 428
    2:00 - 2:30 PM PT
    • 2:00: ManiSkill
    • 2:10: ARNOLD
    • 2:20: HomeRobot OVMM
  • Manipulation and Vision Challenge Q&A Panel
    2:30 - 3:00 PM PT
  • Invited Talk - Embodied Mobile Manipulation:
    In-The-Wild Robot Teaching without In-The-Wild Robots
    3:00 - 3:30 PM PT
    Shuran Song
    Stanford University

    Shuran Song leads the Robotics and Embodied AI Lab (REAL@Stanford) at Stanford University. She is interested in developing algorithms that enable intelligent systems to learn from their interactions with the physical world, and autonomously acquire the perception and manipulation skills necessary to execute complex tasks and assist people.

    Shuran Song will be speaking on Embodied Mobile Manipulation.
  • Invited Talk - Embodied Mobile Manipulation:
    Towards Home Robots: Open Vocabulary Mobile Manipulation in Unstructured Environments
    3:30 - 4:00 PM PT
    Chris Paxton
    Hello Robot

    Chris Paxton is a roboticist who has worked at FAIR at Meta and at NVIDIA Research. He received his PhD in Computer Science in 2019 from Johns Hopkins University in Baltimore, Maryland, focusing on using learning to create powerful task and motion planning capabilities for robots operating in human environments. His work won the ICRA 2021 best human-robot interaction paper award and was nominated for best systems paper at CoRL 2021, among other honors. His research looks at using language, perception, planning, and policy learning to make robots into general-purpose assistants. He now leads embodied AI at Hello Robot, building practical in-home mobile robots.

    Robots are increasingly an important part of our world, from working in factories and hospitals to driving on city streets. As robots move into more unstructured environments such as homes, however, we need new techniques that allow robots to perform…
  • Invited Talk - Humanoid Robots
    Foundation Models for Humanoid Robots
    4:00 - 4:30 PM PT
    Eric Jang
    1X Technologies

    Eric leads the AI team at 1X Technologies, a vertically-integrated humanoid robot company. His research background is on end-to-end mobile manipulation and generative models. Eric recently authored a book on the future of AI and Robotics, titled “AI is Good for You”.

    1X's mission is to create an abundant supply of physical labor through androids that work alongside humans. In this talk we'll be sharing an exciting new project.
  • Invited Speaker Panel
    4:30 - 5:30 PM PT
    Claudia Pérez D'Arpino
    NVIDIA
  • Workshop Concludes
    5:30 PM PT


Sponsor Events

The Embodied AI Workshop is proud to highlight the following events associated with our sponsors:

  • Meta: Stop by Meta's Expo Booth #1423 from 6/19-6/21 to see how Project Aria powers machine perception and AI research.
  • Microsoft: Check out Microsoft's Expo Booth #1445 from 6/19-6/21 to see Microsoft's latest advances!


Challenges

The Embodied AI 2024 workshop is hosting many exciting challenges covering a wide range of topics such as rearrangement, visual navigation, vision-and-language, and audio-visual navigation. More details regarding data, submission instructions, and timelines can be found on the individual challenge websites.

The workshop organizers will award each first-prize challenge winner a cash prize, sponsored by Logical Robotics and our other sponsors.

Challenge winners may be given the opportunity to present during their challenge's presentation at the workshop. Since many challenges can be grouped into similar tasks, we encourage participants to submit models to more than one challenge. The table below describes, compares, and links to each challenge.

| Challenge | Task | 2024 Winner | Simulation Platform | Scene Dataset | Observations | Action Space |
|---|---|---|---|---|---|---|
| ARNOLD | Language-Grounded Manipulation | Robot AI | Isaac Sim | Arnold Dataset | RGB-D, Proprioception | Continuous |
| HAZARD | Multi-Object Rescue | TBD | ThreeDWorld | HAZARD Dataset | RGB-D, Sensors, Temperature | Discrete |
| HomeRobot OVMM | Open Vocabulary Mobile Manipulation | UniTeam | Habitat | OVMM Dataset | RGB-D | Continuous |
| ManiSkill-ViTac | Generalized Manipulation / Vision-Based Tactile Manipulation | Illusion | SAPIEN | PartNet-Mobility, YCB, EGAD | RGB-D, Proprioception, Localization | Continuous / Discrete for ViTac |
| MultiON | Multi-Object Navigation | IntelliGO | Habitat | HM3D Semantics | RGB-D, Localization | Discrete |
| PRS | Human Society Integration | PDA | PRS Environment | PRS Dataset | RGB-D, Sensors, Pose Data, Tactile Sensors | Continuous |


Call for Papers

We invite high-quality 2-page extended abstracts on embodied AI, especially in areas relevant to the themes of this year's workshop:

  • Open-World AI for Embodied AI
  • Generative AI for Embodied AI
  • Embodied Mobile Manipulation
  • Language Model Planning
as well as themes related to embodied AI in general:
  • Simulation Environments
  • Visual Navigation
  • Rearrangement
  • Embodied Question Answering
  • Embodied Vision & Language
Accepted papers will be presented as posters or spotlight talks at the workshop. These papers will be made publicly available in a non-archival format, allowing future submission to archival journals or conferences. Paper submissions do not need to be anonymized. Per CVPR rules regarding workshop papers, at least one author must register for CVPR using an in-person registration.

The submission deadline is May 4th, 2024 (Anywhere on Earth). Papers should be no longer than 2 pages (excluding references) and styled in the CVPR format.



Sponsors

The Embodied AI 2024 Workshop is sponsored by the following organizations:

  • Logical Robotics
  • Microsoft
  • Project Aria


Organizers

The Embodied AI 2024 workshop is a joint effort by a large set of researchers from a variety of organizations. Each year, a set of lead organizers takes point coordinating with the CVPR conference, backed up by a large team of workshop organizers, challenge organizers, and scientific advisors.
Lead Organizers
  • Anthony Francis (Logical Robotics)
  • Claudia Pérez D'Arpino (NVIDIA)
  • Luca Weihs (AI2)
  • Ade Famoti (Microsoft)

Workshop Organizers
  • Angel X. Chang (SFU)
  • Changan Chen (UT Austin)
  • Chengshu Li (Stanford)
  • David Hall (CSIRO)
  • Devon Hjelm (Apple)
  • Joel Jang (U Washington)
  • Lamberto Ballan (U Padova)
  • Matt Deitke (AI2, UW)
  • Mike Roberts (Intel)
  • Naoki Yokoyama (GaTech)
  • Oleksandr Maksymets (Meta AI)
  • Ram Ramrakhya (GaTech)
  • Ran Gong (UCLA)
  • Rin Metcalf (Apple)
  • Sören Pirk (Kiel University)
  • Yonatan Bisk (CMU)

Challenge Organizers
  • Angel X. Chang (SFU)
  • Baoxiong Jia (BIGAI)
  • Changan Chen (UT Austin)
  • Chuang Gan (IBM, MIT)
  • David Hall (CSIRO)
  • Dhruv Batra (GaTech, Meta AI)
  • Fanbo Xiang (UCSD)
  • Hao Dong (PKU)
  • Jiangyong Huang (Peking U)
  • Jiayuan Gu (UCSD)
  • Luca Weihs (AI2)
  • Manolis Savva (SFU)
  • Matt Deitke (AI2, UW)
  • Naoki Yokoyama (GaTech)
  • Oleksandr Maksymets (Meta AI)
  • Ram Ramrakhya (GaTech)
  • Richard He Bai (Apple)
  • Roozbeh Mottaghi (FAIR, UW)
  • Siyuan Huang (BIGAI)
  • Sonia Raychaudhuri (SFU)
  • Stone Tao (UCSD)
  • Tommaso Campari (SFU, UNIPD)
  • Unnat Jain (UIUC)
  • Xiaofeng Gao (Amazon)
  • Yang Liu (SRC-B)
  • Yonatan Bisk (CMU)
  • Zhuoqun Xu (PRS)

Scientific Advisors
  • Alexander Toshev (Apple)
  • Andrey Kolobov (Microsoft)
  • Aniruddha Kembhavi (AI2, UW)
  • Dhruv Batra (GaTech, Meta AI)
  • German Ros (NVIDIA)
  • Joanne Truong (GaTech)
  • Manolis Savva (SFU)
  • Roberto Martín-Martín (Stanford)
  • Roozbeh Mottaghi (FAIR, UW)