I was having dinner with friends the other night and we got onto the topic of Nurse Focus and how I had been writing a lot of content covering the good old stethoscope.
None of these friends work in healthcare I have to add, so the topic was actually surprisingly interesting to them. It also ended on a path that I had never really considered before.
Anyway, one of them started reading out a recent post on the market for Intelligent Stethoscopes. This led to a discussion of AI developments in the medical industry, with the ‘when will robots replace doctors’ question briefly coming up, to my dismay.
I managed to bring this back around to more realistic territory by discussing a recent breakthrough: a smart stethoscope that can accurately detect pneumonia.
Essentially, researchers at Johns Hopkins and the startup Sonavi Labs have been working to re-engineer the humble stethoscope by adding AI-based learning algorithms. This technology is designed to automatically detect lung sound abnormalities, including pneumonia.
They have also added active noise cancellation systems to enhance sound detection capabilities (you can read an in-depth article about that here).
As I explained to my friends at the dinner party, the Johns Hopkins smart stethoscope has been successful in identifying symptoms of pneumonia because of its installed machine learning technology.
By comparing vast datasets (i.e. thousands of sound recordings of healthy lungs against those from patients with respiratory conditions), the algorithm is able to distinguish between patients with pneumonia and those without to a remarkable degree of accuracy.
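To make that idea concrete, here is a minimal sketch of how such a classifier could work in principle. This is purely illustrative: the synthetic "recordings" are random feature vectors, and the four spectral summary features, the class separation, and the plain logistic regression are all my own assumptions, not the actual Sonavi Labs or Johns Hopkins model.

```python
import numpy as np

# Hypothetical sketch: a binary classifier over summary features of lung-sound
# recordings. Feature choices and data are invented for illustration only.
rng = np.random.default_rng(0)

# Synthetic stand-in data: each recording reduced to 4 spectral summary
# features. Healthy lungs cluster around one mean, pneumonia around another.
healthy = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
pneumonia = rng.normal(loc=1.5, scale=1.0, size=(500, 4))
X = np.vstack([healthy, pneumonia])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = healthy, 1 = pneumonia

# Plain logistic regression trained by gradient descent on the whole set.
w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The real system would of course work from raw audio rather than pre-digested features, and would use a far more capable model, but the core principle is the same: learn a decision boundary from labeled examples of healthy and abnormal lung sounds.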
All very interesting, although not the least bit surprising if you follow current trends in the medical tech industry as I do.
However, one friend (a lawyer) voiced a concern that I hadn’t considered before… the legal implications of such technology.
Who is at fault if the diagnosis is wrong?
This simple question started a conversation that lasted at least an hour.
If the machine is making the diagnosis, who is at fault if something goes wrong?
Human error would be very difficult to ascertain in such circumstances. Could one pinpoint the source of bad data or a mismatch within the algorithm leading to an incorrect diagnosis? Could fault be found in the application of the technology by the caregiver?
The lawyer in our group raised the practical issues from a professional standpoint: how would his colleagues at the medical malpractice bar deal with complications arising from machine learning technology?
There certainly isn’t any legal precedent for the incorrect use of smart medical technology as yet. That will inevitably change, however.
Where machines assist a medical professional in making a diagnosis, and also report data that the provider relies on to reach that diagnosis, an entire minefield of potential regulatory issues opens up.
In short, the answer as to where fault may lie if and when problems arise with smart medical technology will become increasingly blurred.
Clearly, the use of algorithms and machine learning has the potential to revolutionize the healthcare industry for the better. However, we are currently at the dawn of this seismic shift, and we need to do all that we can to understand the legal implications of the changes it will bring.
- Johns Hopkins – A Smart Stethoscope Puts AI in Medics’ Ears
- Smart Stethoscopes Market Growth Analysis, Share, Demand By Regions, Types And Analysis Of Key Players Research Forecasts To 2023
- Hackaday – Stethoscopes, Electronics and AI
- The Stethoscope Gets Smart: Engineers from Johns Hopkins are giving the humble stethoscope an AI upgrade