Machine learning models are becoming an integral part of global healthcare infrastructure. They have driven advances in computer vision, predictive genomics, and palliative care, among other fields, and their performance often exceeds that of human experts. Yet few people in the industry are aware of the new and unique security threats that come with these algorithms.

In 2003, Irit Dinur and Kobbi Nissim published a paper titled “Revealing information while preserving privacy,” in which they established two surprising facts:

“It is impossible to publish information from a private statistical database without revealing some amount of private information.”

“The entire database can be revealed by publishing the results of a surprisingly small number of queries.”

The implications were huge.
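To see why the second fact holds, here is a minimal sketch (assuming NumPy; the names `secret`, `answer`, and the toy sizes are illustrative, not from the paper). If a curator answers subset-sum queries exactly, an attacker can recover every record with a single linear-algebra solve. The actual Dinur–Nissim result is stronger: reconstruction succeeds even when each answer is perturbed by a moderate amount of noise.

```python
import numpy as np

# Hypothetical toy database: each bit is one person's private attribute.
rng = np.random.default_rng(0)
n = 16
secret = rng.integers(0, 2, size=n)  # the private bits we want to protect

# The "curator" answers subset-sum queries exactly (no noise, for simplicity).
def answer(query):
    # query is a 0/1 vector selecting a subset of records
    return int(query @ secret)

# The attacker issues 2n random subset queries...
m = 2 * n
Q = rng.integers(0, 2, size=(m, n))
answers = np.array([answer(q) for q in Q])

# ...then reconstructs the whole database by least squares plus rounding.
estimate, *_ = np.linalg.lstsq(Q, answers, rcond=None)
reconstructed = np.rint(estimate).astype(int)

print((reconstructed == secret).all())  # every record recovered
```

With 2n random subset queries the query matrix is full rank with overwhelming probability, so the "aggregate" answers pin down each individual bit exactly, illustrating how a small number of innocuous-looking statistics can leak the entire database.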