October 2018: New Fiscal Year, New Fiscal You!

In this month’s article, Mike Farahbakhshian celebrates the new Fiscal Year with updates on all the various predictions, gripes and rants from the last year. Let’s see how the end of the Health IT world as we know it is progressing. Time to read: 8 minutes. Suggested drink pairing: Iugiter Priorat DOC.

Happy October, Meaningless Useketeers! More importantly, happy Fiscal New Year! My calendar is thick with Fiscal New Year parties. I hope you set out a plate of proposals and labor rates on Sept 30th and that Fiscal Claus gave you lots of contracts! (Or, if you were bad, cure notices in your stocking.)

At one of these Fiscal New Year parties, an industry giant called me “the BD gadfly.” Thanks… I guess? Isn’t a gadfly a bad thing? I’ll crowdsource the answer:

Mike was called a Gadfly. He should feel: (Check only one)
Honored
Insulted
Either hungry or flatulent, I’m not sure, just give it a few minutes and let it sort itself out

Being a gadfly, I made a bunch of bold predictions in FY18. Let’s see if any of them came true.

Hey Mike, Remember When You Were Ranting About AI and How It Will Destroy the World?

I can confidently predict that AIs will not destroy the world in FY19. Their image processing isn’t there yet. However, they’ve been of great help in assisted histopathology. A recent study published in the American Journal of Surgical Pathology used the Google Inception V3 AI to assist in the identification of metastatic breast cancer. I say “assist” because image recognition performed without human oversight results in lots of false positives. This is what generative adversarial networks are trying to fix, although they aren’t ready yet either.

Created using a generative adversarial network. Source: deepdreamgenerator.com

The AI in the study, built with TensorFlow, is not adversarial; it is a traditional machine learning model, trained by people. Its algorithm, called LYNA, was designed to identify clean tissue, individual tumor cells (under 0.2mm), micrometastases (under 2mm) and macrometastases (over 2mm).
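For the concrete-minded, here is a minimal sketch of those size buckets. This is not the study’s code; the thresholds come straight from the categories above, and the function name is mine:

```python
# Not the study's code: just the size buckets described above,
# mapping the largest tumor focus on a slide (in mm) to a category.
def classify_focus(largest_focus_mm: float) -> str:
    if largest_focus_mm == 0:
        return "clean tissue"
    if largest_focus_mm < 0.2:
        return "individual tumor cells"
    if largest_focus_mm < 2.0:
        return "micrometastasis"
    return "macrometastasis"

for size_mm in (0, 0.1, 1.5, 4.0):
    print(size_mm, "->", classify_focus(size_mm))
```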

The results of the study are interesting. The AI wasn’t up to identifying images solo, yet it was a significant aid in helping humans classify these images. Mean time to identify micrometastases was nearly halved, from 117 to 61 seconds. Another study on using AI to assist lawyers with spotting problems in nondisclosure agreements yielded similar conclusions. I’ll let the results speak for themselves.

Source: http://www.lawgeex.com

It seems like we are using traditional AI in roles analogous to those we’ve given dogs: a second, sharper set of eyes to spot things of interest, keener ears to pick up events of note, and the ability to track down and retrieve specific targets with great speed and effectiveness. As long as we keep AI this way, we should be good.

This. You want this.

Hey Mike, Remember When You Were Ranting About Chatbots Replacing Service Personnel and How it Will Destroy the World?

This one’s still going to destroy the world within our lifetimes. A study by the Pew Research Center shows that fewer than half of Americans feel they are able to identify a social media chatbot, and that 80 percent of them feel bots are used for bad purposes. Only 53 percent feel that it is appropriate to use bots for customer service applications (including Healthcare).

Yet here we are! With an increasingly connected population, strapped Federal resources and access to cheap and easy computing, chatbots will be the Government’s first line of defense in serving constituents. This is especially important for the Department of Veterans Affairs, whose large constituency and broad range of services mean that speed and automation are key. And so it is that we will use chatbots for everything. Chatbots for answering basic health and lifestyle questions! Chatbots for loan applications! Chatbots are cheap! Chatbots are reliable! Chatbots save time and money! Everyone will cite one statistic which, in some way, shape or form, traces back to an Accenture white paper claiming 80 percent of chat sessions are resolved by a chatbot. What does that mean? Are they accepting a wide definition of “resolved”? If a chatbot pops up on a random site and you ignore it, is there an algorithm that says “if there is no input within five minutes, consider the issue resolved”? Where is the primary source information? (Seriously, Accenture guys reading this – help a brother out here.)

[citation needed]
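To make that concern concrete, here is a purely hypothetical sketch of how a generous “resolved” definition inflates the number compared to a stricter one. Every field and threshold below is invented for illustration; none of it comes from Accenture:

```python
# Purely hypothetical: two competing definitions of a "resolved"
# chat session. The session fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class ChatSession:
    user_messages: int          # messages the user actually sent
    escalated_to_human: bool    # did a person have to step in?
    minutes_idle_at_close: float

def naive_resolved(s: ChatSession) -> bool:
    # Counts "user walked away" as success: no escalation, and the
    # session simply went quiet for five minutes.
    return (not s.escalated_to_human) and s.minutes_idle_at_close >= 5

def strict_resolved(s: ChatSession) -> bool:
    # Stricter: the user actually engaged AND never needed a human.
    return (s.user_messages > 0 and not s.escalated_to_human
            and s.minutes_idle_at_close >= 5)

sessions = [
    ChatSession(0, False, 7.0),   # ignored popup -- "resolved"?
    ChatSession(4, True, 1.0),    # needed a human
    ChatSession(3, False, 6.0),   # genuinely handled by the bot
]
print(sum(map(naive_resolved, sessions)) / len(sessions))   # ~0.67
print(sum(map(strict_resolved, sessions)) / len(sessions))  # ~0.33
```

Same chat logs, double the success rate, depending entirely on the definition. That is why the primary source matters.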

For Healthcare purposes, chatbots will require – much like image recognition – human intervention at a certain level. Facebook’s answer to Siri and Cortana is “M,” which uses a team of people to fill in where the app fails. I would strongly advise that any chatbot solution used in Healthcare, especially for military personnel and Veterans, bias toward keeping actual people in a supervisory role. This is especially important given chatbots’ dark side: any AI can have racial and gender biases. A biased AI, especially one used in patient care, can reach different conclusions and provide different levels of service based on your ethnicity or sex. No one wants to be on the short end of that stick. Once again, we need oversight of AIs. As this article so brilliantly put it:


So much for the idea that bots will be taking over human jobs. Once we have AIs doing work for us, we’ll need to invent new jobs for humans who are testing the AIs’ results for accuracy and prejudice. Even when chatbots get incredibly sophisticated, they are still going to be trained on human language. And since bias is built into language, humans will still be necessary as decision-makers.


I’m apt to agree. Liberal arts majors, prepare for your day in the sun! FY19 prediction for college freshmen: declare as ethics or English (or Mandarin, or even Russian, at the rate we are going lately) majors! If AI is our data service dog, then all the smooshy stuff is the obedience training. Silicon Valley is waking up to the need for ethics officers. Let’s make sure the robots behave.
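As for what “keeping actual people in a supervisory role” might look like in code, here is a hedged sketch: route anything sensitive or low-confidence to a human instead of letting the bot answer alone. The topic list, confidence threshold and function names are all placeholders of mine, not any vendor’s API:

```python
# Sketch of human-in-the-loop escalation for a healthcare chatbot.
# Topics, threshold and names are illustrative placeholders.
from typing import Tuple

SENSITIVE_TOPICS = {"medication", "diagnosis", "benefits eligibility"}

def answer_or_escalate(intent: str, confidence: float,
                       bot_reply: str) -> Tuple[str, str]:
    """Return (channel, reply), where channel is 'bot' or 'human'."""
    if intent in SENSITIVE_TOPICS:
        # Patient-care topics always get human review, per the
        # bias-toward-people recommendation above.
        return ("human", "Connecting you with a representative.")
    if confidence < 0.85:
        # Low model confidence: don't guess at a Veteran's question.
        return ("human", "Let me get a person to help with that.")
    return ("bot", bot_reply)

print(answer_or_escalate("clinic hours", 0.95, "The clinic opens at 8 a.m."))
print(answer_or_escalate("medication", 0.99, "Take two of these."))
```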

Hey Mike, Remember When You Were Ranting About Hackers Taking Over Medical Devices and How it Would Destroy the World?

World destruction is underway. Medtronic, a medical device manufacturer, recently had to shut down portions of its device update network. That network provided firmware updates for a device called a “Programmer,” which is used to tune pacemakers. It turns out that the software patching process was itself vulnerable, meaning that doctors could unknowingly download malicious firmware that could cause patient safety nightmares. With no way to verify the code is clean, using a Programmer to tune a pacemaker is too risky.

From a cybersecurity perspective, Medtronic did the right thing attempting to contain the problem. From a patient safety perspective, there is no way to win. An open network risks hackers compromising Programmers and causing patient safety issues. A closed network means that tuning a pacemaker becomes impossible beyond certain firmware revisions, and other bugs will be harder to fix. Except, Medtronic did the wrong thing attempting to contain the problem. Ha ha! As this article so succinctly puts it:


Malicious updates could be pushed to Medtronic devices by hackers intercepting and tampering with the equipment’s internet connections – the machines would not verify they were actually downloading legit Medtronic firmware – and so the biz has cut them off.


Okay but … if a man-in-the-middle attack is compromising your devices, cutting off the far endpoint isn’t going to do anything. If anything, that just makes it easier for someone to spoof that they are Medtronic’s patch network. WHOOPS!

… Look, there’s not a lot that can be done when your medical device is too trusting. Better to use security by design and include Public Key Infrastructure (PKI)-signed software packages that devices verify before installing. You know, like every Linux distribution has done for over 20 years.
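For the curious, here is a minimal sketch of that verify-before-install step, using Python’s cryptography library. The file names and key choice are illustrative assumptions of mine, not Medtronic’s actual mechanism:

```python
# Sketch: verify a firmware image against a vendor-signed detached
# signature before installing -- the PKI-signed-package approach
# described above. File names and key choice are illustrative.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_firmware(firmware_path: str, sig_path: str,
                    pubkey_path: str) -> bool:
    """Return True only if the firmware was signed by the vendor's key."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(firmware_path, "rb") as f:
        firmware = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        # Raises InvalidSignature if the image was tampered with in
        # transit, e.g. by a man-in-the-middle spoofing the patch server.
        public_key.verify(
            signature,
            firmware,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    if verify_firmware("programmer_fw.bin", "programmer_fw.sig",
                       "vendor_pub.pem"):
        print("Signature valid: safe to install.")
    else:
        print("Signature invalid: refuse the update.")
```

The point: the device refuses anything its trusted key didn’t sign, so a spoofed patch server gets an attacker nothing. This is exactly how apt and rpm package verification have worked for years.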

This is an area where AI can be of great use: analyzing widely distributed, IoT-style medical device updates. Patterns of use can be determined, and strange network traffic or particularly sensitive firmware downloads can be flagged for human intervention. Make your AI – your data service dog – work for you.

He’s got you covered.
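A toy version of that flagging idea, assuming nothing more than a daily download count per device model (a real system would use far richer features, but the shape is the same):

```python
# Sketch: flag anomalous firmware-download activity for human review.
# Z-scores daily download counts; spikes become candidates for a
# human analyst, per the "data service dog" idea above.
from statistics import mean, stdev

def flag_anomalies(daily_counts: list[int],
                   threshold: float = 2.0) -> list[int]:
    """Return indices of days whose download volume deviates strongly
    from the historical norm."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]

# Example: a sudden spike on day 6 gets flagged for investigation.
counts = [12, 9, 11, 10, 13, 11, 240, 12]
print(flag_anomalies(counts))  # -> [6]
```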

Hey Mike, Remember When You Were Ranting About Apps and How They Will Destroy Us All?

I sure do! Data harvesting from apps is out of control, even when you disable services, even when you try to uninstall them, and these data sets can be used by criminals to execute massive-scale fraud.

No, really.

And then Google has the temerity to start a program called – and I quote – “Be Internet Awesome,” to “teach children how to be safe online.” Safe from whom? State-sponsored election hackers? Big Brother using Big Data to profile potential “troublemakers”? Organized crime running botnet Ponzi schemes? Sounds like the fox teaching chicks the principles of henhouse safety. Maybe I’m just not Internet Awesome enough to understand.

Gosh, I sound like Andy Rooney: “Didja ever notice that technocratic oligopolies are tracking your every action, with no practical alternative other than being a tinfoil-hat hermit, all while crumbling social discourse and telling your children what the definition of safe is?”

Props to the O.G. (Original Gadfly), may you be eternally turning in your grave.

Hey Mike, Remember When You Were Ranting About Genomics?

Trick question. I haven’t written that article yet. Although I am amused that there is a shortage of genetic counselors to actually interpret all the raw genomics data being sampled. Anytime there’s a large amount of data being produced by so many people and understandable by so few, I get suspicious. Who is profiting from it? Why is something so commonly unintelligible so valuable to so few?

Follow the money, people. What does Big Genomics not want you to know?

I know, I sound like a crackpot, but people don’t waste petaflops of computation and petabytes of data storage on something there’s no good use for. Big pharma, weapons manufacturers, civil defense firms, law enforcement, insurers and organized crime all have big interests in hoarding proprietary genomic knowledge.

Having more knowledgeable people in the form of counselors commoditizes genomics instead of locking it in proprietary silos. DNA is the community property of every living creature on Earth. We should know who knows what about it and what they are doing with it; it should never belong in the hands of a few.

FY19 should be the year you advise your trusted partners and Government customers to ensure open access to one’s own genomic information and regulation of genomic intellectual property. Just sayin’.

Should Auld Acquaintance Be Forgot, and Other Lyrics

So, there you have it. The world didn’t end in FY18, but there’s a lot of FY19 ahead of us. With midterm election season upon us, expect to see Healthcare used as a tool, weapon, and scapegoat in Congressional and state elections nationwide. Expect to hear promises about technologies that will make the world a better place if you only just vote for me!

Well, I’m not running for office, but I do advise you to be informed on the issues. AI, genomics and medical devices already affect our lives and the effect is only going to grow. The rules we set today govern how these tools and techniques can invade our way of life.

Gadfly out.

[drops mic]

And all the girls say I’m pretty fly (for a tech guy).
