Google DeepMind’s Gemini Robotics-ER 1.6 Enhances Spot Inspections

Google DeepMind released Gemini Robotics-ER 1.6 to improve spatial reasoning and hazard detection for industrial robots; Boston Dynamics added it to Spot Orbit AIVI-Learning on April 8.

Google DeepMind launched Gemini Robotics-ER 1.6, an AI model built to improve spatial reasoning, task planning and hazard detection for industrial robots. Boston Dynamics integrated the model into its Orbit AIVI-Learning platform for Spot robots, with the update going live for customers enrolled in the program on April 8.

Gemini Robotics-ER 1.6 focuses on spatial understanding, task planning and success detection so robots can perform complex physical tasks in real environments. In internal tests against Gemini 3.0 Flash, the model identified safety hazards 6% better in text-based scenarios and 10% better in video-based scenarios. Google DeepMind reported additional gains in spatial and physical reasoning compared with earlier versions.

The model can read complex instruments, including gauges and sight glasses. That capability was developed in collaboration with Boston Dynamics to meet inspection needs such as verifying fluid levels and pressure readings during maintenance and safety checks.

Google DeepMind has made the model available to developers through the Gemini API and Google AI Studio. Boston Dynamics said the integration with Orbit AIVI-Learning enables Spot to carry out inspection, monitoring and routine maintenance tasks with greater independence, reducing the frequency of direct human oversight in many scenarios.

Marco da Silva, vice president and general manager of Spot at Boston Dynamics, described the update as enabling Spot to “see, understand, and react to real-world challenges completely autonomously.” The company provided the quote as part of its announcement about the software roll-out.

The combined hardware and software are intended for inspections at construction sites and industrial facilities where Spot already operates. The enhancements aim to let robots follow inspection protocols, flag hazards and report pass/fail checks without an operator interpreting raw sensor data.

Google DeepMind and Boston Dynamics say the integration pairs the model’s perception and planning capabilities with an established mobile robot platform, and that the accompanying developer tools are meant to support system integrators and third-party software providers.

