Here's the thing about medical software: a bug doesn't just mean a bad user experience. It can mean someone gets the wrong drug dose, or a clinician misses a critical alert at 3am. That's a fundamentally different kind of failure from anything in consumer tech.
Regulators know this. The FDA, EU Notified Bodies, Australia's TGA — they all require documented proof that a product is safe before it gets anywhere near a patient. Not a promise. Proof.
This article breaks down how that process actually works, what the standards demand, and what separates teams that get through certification from teams that spend years stuck in it.
What's Happening in the Market Right Now
Medical tech looks nothing like it did ten years ago. Software used to live inside physical devices as a supporting layer. Now it often is the device, and that changes everything about how teams build it.
Apple Watch Series 4 got FDA clearance for ECG functionality in 2018 and effectively opened a door. Akili Interactive got clearance for EndeavorRx, a video game for kids with ADHD that's legally classified as a prescription digital therapeutic. Dexcom and Abbott built full software stacks for their continuous glucose monitors, complete with hypoglycemia prediction. In those products, the algorithm is the product.
For a solid reference on how large vendors approach compliant healthcare IT at the enterprise level, DXC Technology's healthcare practice is worth a look: https://dxc.com/industries/healthcare-solutions.
What's Being Tested Right Now
Neuralink started its first human brain–computer interface trials back in 2024. The software running that implant is probably one of the toughest things in engineering right now.
Synchron took a less invasive route — its Stentrode doesn’t need brain surgery and already has real-world data from ALS patients.
Philips and Medtronic are building cloud systems to stream data from implants in real time, while Siemens is baking AI directly into CT and MRI scanners to spot pathologies automatically.
All these projects face the same pain points: insanely complex software and the maze of global medical regulations.
SiMD vs SaMD: The Line That Matters
Here’s the line that matters.
- SiMD (software in a medical device) is the code running inside hardware: the firmware in a defibrillator, the control logic in a pump, the UI on an ultrasound. Hardware and software are one piece.
- SaMD, on the other hand, is software that is the medical device: an app scanning skin photos, an algorithm detecting diabetic retinopathy, a support system helping doctors decide.
That distinction shapes everything: your regulatory path, your documentation, your testing. It’s usually the first real discussion any medical device software team has.
Medical Device Software Standards: The Regulatory Foundation
You don't start software development for medical devices by writing code. You start by figuring out which standards apply — because these aren't optional guidelines. They're the conditions for market approval.
Key Standards at a Glance
| Standard | Scope | What It Covers |
| --- | --- | --- |
| IEC 62304 | Global | Software lifecycle, safety classes A/B/C |
| ISO 13485 | Global | Quality management system |
| ISO 14971 | Global | Risk management across the lifecycle |
| IEC 62366 | Global | Usability engineering |
| FDA 21 CFR Part 820 | US | Manufacturer quality system requirements |
| EU MDR 2017/745 | EU | Registration and post-market surveillance |
| IMDRF SaMD Framework | Global | SaMD classification and regulation |
Knowing these standards isn't the same as understanding how they interact. Requirements sometimes conflict across jurisdictions, and that's where teams without regulatory experience tend to get caught.
Safety Classes Under IEC 62304
The safety class is one of the earliest decisions in a project and one of the most consequential.
Class A covers software where failure can't cause injury. Minimal documentation required. Class B applies when failure could cause non-lethal injury: full SDLC with traceability. Class C is for software where failure could cause death: comprehensive testing down to unit level, maximum documentation requirements.
Most clinically relevant software ends up in Class B or C. Teams that assume Class A and plan to revisit it later almost always regret it.
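The class boundaries above can be sketched as a simple lookup. This is an illustrative simplification: in practice the assignment is a documented risk-analysis decision, and the harm category names here are assumptions, not IEC 62304 terminology.

```python
# Simplified IEC 62304 safety-class assignment from worst-case harm.
# The harm labels and the mapping are illustrative only; real classification
# is a justified, documented decision in the risk management file.

def iec62304_class(worst_case_harm: str) -> str:
    """Map the worst credible harm from a software failure to a safety class."""
    mapping = {
        "no_injury": "A",                 # failure cannot cause injury
        "non_serious_injury": "B",        # non-lethal injury is possible
        "serious_injury_or_death": "C",   # death or serious injury is possible
    }
    return mapping[worst_case_harm]
```

Note that the class can only move up as new hazards are found; starting at A and hoping to stay there is the mistake described above.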
Medical Device Software Engineering: From Concept to Certification
Medical device software engineering isn't Agile with extra paperwork. Every artifact needs to be verified, traceable, and reproducible. A gap in any of those three is grounds for regulatory rejection.
Step 1: Planning and Requirements
The Software Development Plan (SDP) comes first. It defines the methodology, tools, configuration management, and V&V procedures. Without it, the rest of the documentation doesn't have regulatory standing.
The Software Requirements Specification (SRS) is what everything else gets built on. Requirements need to be measurable and tied to specific tests. "The system shall respond quickly" isn't a requirement. "Response time shall not exceed 200ms under normal load" is.
Vague requirements are design errors. They just don't show up until testing, which is the worst time to find them.
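A measurable requirement translates directly into an automated check. Here is a minimal sketch of what verifying the 200ms example could look like; `handle_request`, the requirement ID, and the "50 sequential requests" definition of normal load are all assumptions for illustration.

```python
import time

# Hypothetical verification of "Response time shall not exceed 200 ms
# under normal load". The SRS ID and load profile are illustrative.

REQ_ID = "SRS-042"        # traceability link back to the SRS
MAX_RESPONSE_MS = 200

def handle_request() -> None:
    time.sleep(0.01)      # placeholder for the system under test

def verify_response_time_srs_042() -> bool:
    """Measure worst-case latency over a simulated normal-load run."""
    samples_ms = []
    for _ in range(50):   # assumed definition of "normal load"
        start = time.perf_counter()
        handle_request()
        samples_ms.append((time.perf_counter() - start) * 1000)
    return max(samples_ms) <= MAX_RESPONSE_MS
```

The point is the shape, not the numbers: the requirement names a threshold, the test measures against it, and the test ID traces back to the SRS entry.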
Step 2: Software Design for Medical Devices
When it comes to software development for medical devices, design work happens on two layers.
The architecture is the big map: what the system’s made of, how the pieces talk to each other, and what sits between them. This is also where teams wrangle with SOUP (software of unknown provenance): third-party code and libraries whose origins aren’t always clear. Each one needs a risk review and a version history so it doesn’t turn into a hidden liability later.
Then there’s the detailed design, the low‑level stuff: functions, data handling, edge cases, the logic that keeps the system stable when something breaks. Every choice ends up in the Design History File (DHF) — basically the project’s black box that shows how the team got from requirements to final design, and why they made those trade‑offs along the way.
Step 3: Implementation
The most common coding standard for embedded medical systems is MISRA C/C++ — the same ruleset NASA uses for flight software. Static analysis tools like Polyspace, Coverity, or LDRA catch defects without running the code. Code reviews need traceability to specific SRS requirements.
And documentation written after the fact always shows. The dates don't match. Regulators notice.
Step 4: Verification and Validation
V&V are different things. Verification asks: did we build the product correctly? Validation asks: did we build the correct product? Both are required.
Testing covers:
- Unit testing, scaled to the IEC 62304 safety class
- Integration testing, checking interactions between modules
- System testing, in simulated or real clinical conditions
- User Acceptance Testing (UAT), with actual clinical staff involved
That last one is often underestimated. UAT consistently surfaces gaps between what developers thought was intuitive and how a nurse in an ICU actually works under pressure.
Medical Device Software Design: Building for Safety
Good medical device software design isn't just clean architecture. It's designing for what happens when things go wrong.
Fail-safe defaults mean the system moves to a defined safe state on failure — a dosing pump stops rather than continues. Defense in depth adds multiple independent verification layers so one failure doesn't take down the system. Deterministic behavior is non-negotiable in real-time systems: priority inversion in an RTOS can be lethal in a medical context. Least privilege keeps each module limited to what it actually needs.
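The fail-safe-default principle can be made concrete with a small sketch. This is not any real pump's control logic; the state names, the dose-rate limit, and the fault handling are assumptions chosen to show the pattern: safe at power-on, safe on any fault, safe on any implausible input.

```python
from enum import Enum

class PumpState(Enum):
    STOPPED = "stopped"   # the defined safe state: no drug is delivered
    RUNNING = "running"

class DosingPump:
    """Illustrative fail-safe default: every failure path ends in STOPPED."""

    def __init__(self) -> None:
        self.state = PumpState.STOPPED      # safe by default at power-on

    def start(self, dose_rate_ml_h: float) -> None:
        # Plausibility check; the 100 ml/h ceiling is an assumed limit.
        if not 0 < dose_rate_ml_h <= 100:
            self.fail_safe()
            raise ValueError("dose rate outside validated range")
        self.state = PumpState.RUNNING

    def on_fault(self, fault_code: str) -> None:
        # Any fault, recognized or not, transitions to the safe state.
        self.fail_safe()

    def fail_safe(self) -> None:
        self.state = PumpState.STOPPED
```

The design choice worth noting: `on_fault` never tries to keep running. Recovery, if any, is a separate, deliberate action, never the default.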
One area teams consistently underestimate: SOUP. Every open-source library in a medical device project needs a documented version, source, CVE check, and update process. Teams coming from standard development backgrounds usually hit this wall first, and it's a significant adjustment.
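A SOUP register does not need to be elaborate to be auditable. A minimal sketch, with field names that are assumptions rather than anything mandated by the standard:

```python
from dataclasses import dataclass, field

# Minimal SOUP register entry: identity, pinned version, origin, known
# anomalies, and whether a risk review has been completed. Field names
# are illustrative, not prescribed by IEC 62304.

@dataclass
class SoupItem:
    name: str
    version: str
    source_url: str
    known_cves: list = field(default_factory=list)
    risk_reviewed: bool = False

def register_is_audit_ready(register: list) -> bool:
    """Every item needs a pinned version and a completed risk review."""
    return all(item.version and item.risk_reviewed for item in register)
```

The check is trivial on purpose: the hard part is the discipline of keeping the register current with every dependency change, not the tooling.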
Risk Management: ISO 14971 in Practice
Risk management isn't a document you fill out for the regulator. It starts at concept and continues after the product ships.
The ISO 14971 process moves through hazard identification, probability-times-severity assessment, design-level controls, residual risk evaluation, and post-market surveillance. That last step often gets treated as a formality. It isn't.
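The probability-times-severity step reduces to simple arithmetic, though everything around it does not. A sketch, assuming 1-5 scales and an acceptability threshold that each manufacturer would actually define in its own risk management plan:

```python
# Illustrative probability x severity scoring for an ISO 14971 risk
# analysis. The 1-5 scales and the threshold of 8 are assumptions;
# real values come from the manufacturer's risk management plan.

def risk_score(probability: int, severity: int) -> int:
    assert 1 <= probability <= 5 and 1 <= severity <= 5
    return probability * severity

def risk_acceptable(probability: int, severity: int, threshold: int = 8) -> bool:
    # Scores at or above the threshold trigger design-level risk controls
    # followed by a documented residual-risk evaluation.
    return risk_score(probability, severity) < threshold
```

The arithmetic is the easy part; the judgment calls are in estimating probability for software failures, which rarely fit hardware-style failure-rate models.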
For software, the specific risks include race conditions in real-time systems, sensor data mishandling, cybersecurity vulnerabilities, and false positives or negatives from algorithms. That last category matters most for AI-based systems: a confident but wrong algorithm causes harm just as effectively as a hardware failure, and regulators are increasingly aware of that.
For a practical look at how these challenges play out in actual clinical settings, this breakdown of EMR software challenges for medical practices shows the gap between technical decisions and real clinical needs clearly.
Agile in a Medical Context
The FDA has accommodated iterative development since 2012. Agile isn't the problem. The definition of "done" is.
In standard Agile, a sprint closes with working software. In medical development, it closes with verified, documented results that are fully traceable to requirements. Sprints, user stories, CI/CD, automated testing — all of that adapts. Full traceability, formal design reviews, and complete artifact documentation don't.
Teams that bring their standard Agile setup into a medical project without adjusting usually discover the problem at audit. That's the most expensive place to find it.
Common Mistakes
Most certification failures are process failures, not technical ones.
Misclassifying the safety class early means insufficient testing and a rejection after months of work. Broken traceability means requirements and tests exist but the documented link between them doesn't. Undocumented SOUP is one of the most frequent FDA information request triggers. And retrospective documentation always shows — the dates don't line up.
These mistakes share one cause: cutting corners at the start and paying for it much later.
Best Practices: What Actually Works
Teams that move through certification without years of rework tend to have a few habits in common.
Risk-based testing applies effort proportional to risk. A dosing control module gets far more scrutiny than a reporting screen. A living traceability matrix stays updated alongside the code, not assembled before a deadline. Early design reviews catch architectural problems on paper, where fixing them is cheap. Automated regression ensures every post-market change clears a full verification cycle without manual intervention.
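A living traceability matrix is exactly the kind of thing a CI job can police. A minimal sketch of such a check, with illustrative requirement and test IDs, flagging both uncovered requirements and tests that point at requirements which no longer exist:

```python
# Sketch of a CI-friendly traceability check: every requirement must be
# covered by at least one test, and every test must reference a real
# requirement. IDs are illustrative.

def traceability_gaps(requirements: set, trace: dict) -> tuple:
    """`trace` maps test IDs to the requirement IDs they verify."""
    covered = set()
    dangling_tests = set()
    for test_id, req_ids in trace.items():
        for req in req_ids:
            if req in requirements:
                covered.add(req)
            else:
                dangling_tests.add(test_id)   # test cites a nonexistent requirement
    uncovered = requirements - covered
    return uncovered, dangling_tests
```

Failing the build on a non-empty result is what keeps the matrix "living" instead of something assembled the week before an audit.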
The teams that move fastest treat compliance as engineering, not administration. That's usually where the difference shows up.
Choosing a Development Partner
Medical software development is rarely done entirely in-house. When evaluating a partner, look for evidence rather than claims: how many products have actually cleared the FDA or received CE marking? Is there an ISO 13485-certified QMS in place? Does the team understand the specific pathway for your device class?
Technical expertise in embedded firmware doesn't automatically transfer to cloud-based SaMD with a machine learning component. Those are genuinely different disciplines.
For broader context on how technology investment shapes healthcare delivery, this piece on IT's role in improving healthcare services is worth the read.
What's Coming
IEC 62304 is being updated to address AI/ML, SaMD, and cloud services. FDA guidance on AI/ML-based SaMD gets revised regularly. Mandatory SBOM requirements are expanding across both US and EU markets. And regulations around continuously learning systems — software that updates its own models post-launch — remain unresolved. How that question gets answered will set the rules for the next generation of medical AI.
Takeaway
Medical device software development carries a level of accountability that most software fields don't come close to. Every architectural decision, every third-party dependency, every test case matters — not because a regulator requires it, but because someone is depending on it working correctly.
Understanding medical device software standards, applying sound software design for medical devices principles, and getting the process right aren't overhead. They're the conditions under which a product reaches the market at all. And in this field, the cost of getting it wrong isn't a rollback. It's something else entirely.