
Who’s going to protect patients from automated “medical note-taking” hallucinations & voice surveillance false positives?

The FT is reporting on the boom in AI medical note-taking “scribes”.

The article opens by covering how big the market is, who the main players are, and why medical note-taking apps are considered revolutionary.

However, there are criticisms later in the piece; here are the headline and one of the paragraphs.

Concerns

Here are the legitimate concerns:

I remember listening to Joseph Turow’s Voice Catchers a couple of years ago. In it he said that companies would be using signals in speakers’ voices to judge their mood, which would then affect how those speakers are ultimately treated.

Now we’ve come full circle, and it’s being reported that voice surveillance is likely being used in the treatment of medical patients.

On the surface, the aim is to make sure that what patients say is genuinely heard and responded to.

But what happens when things go wrong?

Will patients be misdiagnosed, wrongly medicated, or have their ailments misrepresented?

It’s a bit like having a self-driving car: instead of automated driving, we’re contemplating automated diagnosis, automated dispensing of medication, and even automated operations. So when and where should we draw the line?