Further, utilizing the 3D/4D AI perception through camera feeds, we demonstrate a proof-of-concept cobot module for LabOS (Fig. 3g, Supp. Video 2). This module enables scientists to hand off time-consuming and/or repetitive portions of a protocol to an adaptive, spatially aware robotic arm. We demonstrate that example protocols loaded in LabOS, such as vortexing, 96-well plate operations, and tube handling on an incubator/shaker, can be handed off to the cobot to reduce the load on scientists, with agentic and embodied AI with spatial intelligence providing a seamless human-AI-robotics collaboration experience (Supp. Video 3).
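The handoff described above can be illustrated with a minimal sketch. The names (`ProtocolStep`, `partition_protocol`, the `cobot_capable` flag) are hypothetical and not part of LabOS; the sketch only shows one simple way a protocol might be split into cobot-handled and human-handled steps under the assumption that each step is pre-labeled as safe to automate.

```python
from dataclasses import dataclass

@dataclass
class ProtocolStep:
    name: str
    # Hypothetical flag: True if the step is repetitive/time-consuming
    # and safe to delegate to the robotic arm
    cobot_capable: bool

def partition_protocol(steps):
    """Split a protocol into a cobot queue and a human queue."""
    cobot_queue = [s.name for s in steps if s.cobot_capable]
    human_queue = [s.name for s in steps if not s.cobot_capable]
    return cobot_queue, human_queue

# Illustrative protocol using the example operations named in the text
protocol = [
    ProtocolStep("prepare reagents", False),
    ProtocolStep("vortex samples", True),
    ProtocolStep("load 96-well plate", True),
    ProtocolStep("interpret results", False),
]

cobot_queue, human_queue = partition_protocol(protocol)
print(cobot_queue)  # ['vortex samples', 'load 96-well plate']
print(human_queue)  # ['prepare reagents', 'interpret results']
```

In a real system the `cobot_capable` decision would presumably come from the perception and planning stack rather than a static flag; the sketch is only meant to make the division of labor concrete.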