A vision-based control system called Neural Jacobian Fields enables soft and rigid robots to learn self-supervised motion control using only a monocular camera. The system, developed by MIT CSAIL researchers, combines 3D scene reconstruction with embodied representation and closed-loop control.
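To make the idea of closed-loop control from a single camera concrete, here is a minimal sketch of Jacobian-based visual servoing in Python. It is not the CSAIL implementation: predict_jacobian stands in for a learned model, and control_step, the gain, and the feature and actuator counts are illustrative assumptions.

```python
# Minimal sketch (not the CSAIL code) of closed-loop control driven only by
# camera observations. `predict_jacobian` is a hypothetical stand-in for a
# learned model that estimates how tracked image features move per unit of
# actuator command.
import numpy as np

def predict_jacobian(features: np.ndarray, n_actuators: int) -> np.ndarray:
    """Hypothetical learned model: returns a (2*K, n_actuators) matrix mapping
    actuator velocities to the image-space velocities of K tracked features."""
    rng = np.random.default_rng(0)               # placeholder for a trained network
    return rng.standard_normal((features.size, n_actuators))

def control_step(features: np.ndarray, target: np.ndarray,
                 n_actuators: int, gain: float = 0.5) -> np.ndarray:
    """One closed-loop step: choose the actuator command that best reduces the
    image-space error between observed and target feature positions."""
    error = (target - features).reshape(-1)      # desired feature displacement
    J = predict_jacobian(features, n_actuators)  # predicted sensitivity (Jacobian)
    # Least-squares solve for a command u such that J @ u ≈ gain * error
    u, *_ = np.linalg.lstsq(J, gain * error, rcond=None)
    return u

# Example: drive 4 tracked features toward goal pixels using 3 actuators.
observed = np.array([[120., 80.], [200., 90.], [150., 160.], [90., 140.]])
goal = observed + np.array([5., -3.])            # desired small shift in the image
command = control_step(observed, goal, n_actuators=3)
print(command)
```

In a trained system the random matrix would be replaced by a network's prediction of how each actuator perturbs the observed scene, which is what lets the loop run without any sensors beyond the camera.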
MIT roboticists developed a way to cut through data noise and help robots focus on the features in a scene that are most relevant for assisting humans. The system could be used in smart manufacturing and warehouse settings where robots work alongside people.
A hopping, insect-sized robot can jump over gaps or obstacles, traverse rough, slippery, or slanted surfaces, and perform aerial acrobatic maneuvers, while using a fraction of the energy that flying microbots require.
Robot, know thyself: New vision-based system teaches machines to understand their bodies. Neural Jacobian Fields, developed by MIT CSAIL researchers, can learn to control any robot from a single camera, without any other sensors.
The word “robot” was coined by the Czech writer Karel Čapek in a 1920 play called Rossum’s Universal Robots, and is derived from the Czech robota, meaning “drudgery” or “servitude”.
Founded by MIT alumni, the Pickle Robot Company has developed robots that can autonomously load and unload trucks inside warehouses and logistics centers.
SPROUT is a flexible robot built by MIT Lincoln Laboratory and Notre Dame researchers to assist in disaster response. Emergency responders can use the robot to navigate and map areas under rubble to plan rescue operations.
MIT Associate Professor Luca Carlone works to give robots a more human-like perception of their environment, so they can interact with people safely and seamlessly.
The robot consists of a heavy, 220-pound base whose dimensions and structure were optimized to support the weight of an average human without tipping or slipping. Underneath the base is a set of omnidirectional wheels that lets the robot move in any direction without having to pivot.
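To illustrate why omnidirectional wheels allow motion in any direction without pivoting, the sketch below maps a desired planar body velocity to wheel speeds for a generic three-wheel omni base; the wheel count, radii, and spacing are assumptions for the example, not the robot's actual specifications.

```python
# Minimal sketch, assuming a generic three-wheel omnidirectional base
# (dimensions below are illustrative, not the actual robot's specs).
# Any planar velocity (vx, vy, omega) maps directly onto wheel speeds,
# so the base can translate sideways or diagonally without turning first.
import numpy as np

WHEEL_ANGLES = np.deg2rad([0.0, 120.0, 240.0])   # wheel positions around the base
BASE_RADIUS = 0.25                                # meters, center to wheel (assumed)
WHEEL_RADIUS = 0.05                               # meters (assumed)

def wheel_speeds(vx: float, vy: float, omega: float) -> np.ndarray:
    """Map a desired body velocity (m/s, m/s, rad/s) to wheel angular speeds (rad/s)."""
    rolling = (-np.sin(WHEEL_ANGLES) * vx
               + np.cos(WHEEL_ANGLES) * vy
               + BASE_RADIUS * omega)             # linear speed at each wheel's rim
    return rolling / WHEEL_RADIUS

# Example: slide sideways at 0.3 m/s with zero rotation, no pivot required.
print(wheel_speeds(vx=0.3, vy=0.0, omega=0.0))
```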
A new training interface allows a robot to learn a task in several different ways. This increased training flexibility could help more people interact with and teach robots — and may also enable robots to learn a wider set of skills.