16 Oct 2025

Cyber Meets Safety – Securing Intelligent Systems by Design

This is the third entry in a four-part series exploring how the EU’s evolving regulatory framework is reshaping the way we design, validate, and manage autonomous and intelligent systems.

In traditional machine safety, most risks were mechanical, electrical, hydraulic, or pneumatic: things you could see, measure, or physically isolate. But as robots and autonomous systems become increasingly software-driven, connected, and AI-powered, another layer of risk has emerged: cyber interference that affects physical behaviour.

The new EU Machinery Regulation (EU) 2023/1230 makes cybersecurity an explicit safety requirement, especially for autonomous machines. That requirement covers not just IT-style attacks but any unauthorised or unintended digital input that could influence how a machine behaves: how it moves, stops, senses, or makes decisions. If a robot's motion logic can be altered remotely, that is not just a data breach; it is a safety hazard.

In many modern robots, including mobile robots and collaborative arms, movement is no longer controlled by simple I/O logic or static programs. Instead, behaviour may be shaped by cloud-based software, real-time AI inference, or sensor fusion from multiple sources. These systems often include actuators powered by hydraulics or pneumatics, and if the digital control of those systems is compromised, the consequences can be physically dangerous.

Imagine a mobile robot that uses AI to navigate dynamically, adjusting its speed based on proximity data. If its decision logic is tampered with, even slightly, it might misjudge clearance or fail to stop. Or consider a robot arm that uses pneumatic control but relies on learned behaviour for trajectory planning. If its software update pipeline is compromised, the robot may act in ways that bypass safety boundaries entirely.
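One common mitigation for the tampered-decision-logic scenario is to keep a hard safety clamp outside the AI pipeline entirely. The sketch below is a hypothetical illustration, not a certified design: the limits, function names, and units are all assumptions, and a real system would enforce them in a validated safety controller, not application code.

```python
# Hypothetical sketch: a hard safety clamp applied AFTER the AI planner's
# output, so compromised decision logic cannot command an unsafe speed.

MAX_SPEED_MM_S = 250.0    # assumed certified speed limit from the risk assessment
MIN_CLEARANCE_MM = 500.0  # assumed minimum stopping clearance

def safe_speed(ai_commanded_speed: float, proximity_mm: float) -> float:
    """Bound the AI-commanded speed independently of the planning logic."""
    if proximity_mm < MIN_CLEARANCE_MM:
        return 0.0  # fail safe: stop whenever clearance drops below threshold
    # Never exceed the certified limit, whatever the AI model requests.
    return min(ai_commanded_speed, MAX_SPEED_MM_S)
```

The point of the pattern is that the clamp's behaviour is fixed and auditable: even if the learned planner is altered post-deployment, the worst it can command is the certified limit.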

The regulation now expects manufacturers or any party responsible for modifications to address cyber risk across the entire system. That includes:

  • Protecting firmware, configuration files, and AI models
  • Securing interfaces between AI software and actuators (including hydraulic/pneumatic)
  • Controlling and logging remote updates or learning processes
  • Ensuring user access control and authentication are in place for safety-critical functions
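The first and third points above (protecting AI models and logging remote updates) often come down to one gate: never load a payload whose integrity cannot be verified, and record every attempt. A minimal sketch, assuming a pre-shared key and standard-library HMAC; real deployments would typically use asymmetric code signing instead:

```python
import hashlib
import hmac
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("update-gate")

def verify_model_update(payload: bytes, received_mac: str, shared_key: bytes) -> bool:
    """Accept a model/firmware payload only if its HMAC matches, logging the attempt."""
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    ok = hmac.compare_digest(expected, received_mac)
    # Log both outcomes so the update history is auditable either way.
    log.info("update %s (sha256=%s)",
             "accepted" if ok else "rejected",
             hashlib.sha256(payload).hexdigest()[:12])
    return ok
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels, and the log line doubles as the audit trail the regulation expects for remote updates.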

This has real implications for compliance teams. Traditional functional safety tools don’t always map cleanly to these use cases. For example, how do you validate a robot’s evolving decision-making under ISO 13849, when the behaviour might change post-deployment? How do you handle safety categorisation if a model update affects braking distance or response time?

In my view, the big challenge is that many safety teams are still learning how to talk to cybersecurity and AI teams, and vice versa. We are all responsible for the same outcome (a safe system), but the tools, language, and assumptions are very different. With robots, the stakes are higher: they are not only mobile and forceful but increasingly autonomous, which makes them less predictable.

Documentation is another critical area. I expect Notified Bodies to request detailed records of:

  • Digital change management
  • Safety risk assessments for AI/ML components
  • Hardening of access paths and interfaces
  • How software controlling hydraulic or pneumatic systems is verified and secured
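To make the first two record types above concrete, here is one hypothetical shape a machine-readable change-management entry might take; every field name and value below is an illustrative assumption, not a prescribed format:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """Hypothetical digital change-management entry a Notified Body might review."""
    component: str            # e.g. a model or firmware identifier
    version_before: str
    version_after: str
    risk_assessment_ref: str  # link to the safety risk assessment for this change
    approved_by: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Illustrative entry for an AI model update on a robot arm
record = ChangeRecord(
    component="trajectory-model",
    version_before="1.4.0",
    version_after="1.5.0",
    risk_assessment_ref="RA-2025-017",
    approved_by="safety-engineer@example.com",
)
```

Whatever the format, the essential property is the same: each software change on a safety-relevant component is traceable to a version pair, an approver, and a documented risk assessment.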

We’re not just engineering movement anymore. We’re engineering trust in behaviour, updates, logic, and interaction. And that trust must now be protected in code, not just in circuits and cylinders.

In the final blog, I’ll explore what happens when systems are modified or updated and how the new framework shifts responsibility as machines become more intelligent after they leave the factory floor.

Paul Yu

Assistant Chief Engineer, Global Engineering

With a BEng in mechanical engineering and a postgraduate diploma in intelligent machinery, Paul has broad expertise in industrial machinery, robotics, functional safety, risk assessment, electrical equipment compliance, and machinery inspection (PUWER). He also contributes to international standards through involvement in technical committees such as UL TC 2011 (Industrial and General use Machines – US), ISO TC 299 WG2 (Service robots), MCE/3 (Safety Machinery – UK), and AMT/10 (Robotics – UK).
