SyTrue is pleased to announce the appointment of Cynthia Nustad to our Advisory Board. Ms. Nustad is recognized for her success as an innovative technology executive and a business leader in healthcare. With over two decades of executive management experience in healthcare and a track record of creating and executing company growth strategies, she is a wonderful addition to SyTrue’s Advisory Board.

“SyTrue is an innovative healthcare technology company using Natural Language Processing (NLP) to help solve the access and utilization of electronic health data. Accessing EMR/EHR data for use across the many needs of health plans and service providers has been a hard problem to tackle until now. SyTrue helps unleash medical record content in a unique way, and I am excited to join this remarkable company,” said Cynthia Nustad.

Ms. Nustad recently served as the EVP, Chief Strategy Officer of HMS Holdings (Nasdaq: HMSY), where she oversaw the company’s strategy, roadmap and integration of new product and technology capabilities. Nustad was also instrumental in directing the evolution and growth of corporate technology, data & analytics, software and solutions. She spearheaded the creation of new business verticals, aligning operations and integrating acquisitions with internal innovation. Her forte: the investment thesis of acquisitions, business transformation, creating scale with technology, and cultural integration, all with a special emphasis on creating long-term shareholder value.

“We are thrilled to have Cynthia on our team. She brings a tremendous amount of knowledge in Total Population Management and Payment Integrity. Cynthia’s role is critical as SyTrue rolls out its SyAudit™ and SyHealth™ solutions to Health Plans and Service Providers,” said Kyle Silvestro, CEO.

Previously, as EVP and CIO at HMS, Nustad led the technology, cybersecurity and product innovation functions. This helped establish HMS as a healthcare technology company and created the foundation to propel its future growth. She has led large operations with both domestic and international talent. Earlier in her career she served as a CTO, had P&L responsibility, and led product development. Serving Fortune 500, mid-size and startup companies, she has also held executive leadership roles at Cambia Health Solutions and WellPoint/Anthem (NYSE: ANTM) to name a few.

Cynthia holds an MBA from the University of Oregon and an MPH and a BA from UCLA. She currently serves as a Board Advisor for Instamed, a digital banking and payments network for healthcare. She was previously a Board Member for Integriguard and for a not-for-profit organization, Outside In.

WANTED: Smart “Hacks” to Boost Healthcare Data Quality

WITH a RISING TIDE of DATA, GROWING DATA ISSUES

There’s a rising tide of healthcare data. It lifts many hopes for better healthcare, but also surfaces one troubling issue: reliability of data.

Just how confident are you of the reliability of your data?

A Sea of Data

As a healthcare provider, you already know that data permeate your office workload. That impacts a critical feature of your operations: your workflow, a process you have probably evolved over many years. Suddenly, you’re doing “refreshes” to accommodate the new data volumes you’re seeing. “We’ve always done it this way” just doesn’t cut it any longer.

Time was when you had dictation, writing and paper records. You now have many data input options (EMRs, voice-enabled documentation and more).

So volume keeps growing and tools get more complex. Bigger yet are the issues around understanding your data, some not obvious at all. For physicians, the EMR demands careful checks of patient records, new ways to capture care offered elsewhere, new diagnostic tools and new ways to update your patient’s condition, plus a bigger focus on “quality assurance.” Your “inputs” now need accuracy checks. It also means you’re the new data entry analyst on the block, burdened with an extra tall order for vigilance.

Now, how reliable are your data?

Reliable data

Example: At the point of care, as ICD codes get assigned to cases, there are some common errors, and their rates may top the 20% level (and run higher still in some studies that have carefully assessed the data error issue). [Please see: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361216/] The inaccuracies may come from patient behavior, the record trail itself, the provider or the physician. But errors do seep into the record: A physician uses a typical synonym to label “stroke”: she can choose “cerebrovascular accident,” “cerebral occlusion,” “cerebral infarction” or “apoplexy.” Which is right?
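The synonym problem above is exactly what terminology normalization addresses. Below is a minimal, illustrative sketch of mapping free-text diagnosis synonyms onto one canonical concept; the synonym list comes from the example, while the ICD-10 code I63.9 (“cerebral infarction, unspecified”) is used only as an illustrative mapping, not a coding ruling.

```python
# Minimal sketch: normalize clinical synonyms to one canonical concept.
# Synonym list taken from the example above; the ICD-10 mapping is
# illustrative only.

STROKE_SYNONYMS = {
    "stroke",
    "cerebrovascular accident",
    "cerebral occlusion",
    "cerebral infarction",
    "apoplexy",
}

CANONICAL = {"concept": "stroke", "icd10": "I63.9"}

def normalize_diagnosis(term: str):
    """Map a free-text diagnosis to a canonical concept, if recognized."""
    cleaned = term.strip().lower()
    if cleaned in STROKE_SYNONYMS:
        return CANONICAL
    return None  # unrecognized term: flag for human review

print(normalize_diagnosis("Cerebrovascular Accident"))
# → {'concept': 'stroke', 'icd10': 'I63.9'}
```

A production NLP pipeline would draw on a full terminology (e.g., a licensed vocabulary) rather than a hand-built set, but the principle is the same: many surface forms, one coded concept.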

Even if we correct for study differences among error-rate studies of clinical data, we know the error rates are unacceptable. The complexity of a case, provider inexperience, patients lacking skill in discussing symptoms, or the reluctance to listen to a patient’s view of her condition – all matter. It’s not uncommon to hear that even professional nurses may not be taken seriously as they describe their own condition to an emergency room provider.

AN EXPERIMENT in ERROR TOLERANCE

If it seems like special pleading to belabor the issues around data, picture this:

Your child, driven from an athletic field to an ER, being diagnosed for a traumatic hit to the head. Will any diagnosis do, and will you trust it?

Your aging parent, living with multiple chronic conditions, uncertain about the new pains in her body. Where in her treatment steps will you overlook “understandable” error?

At moments of truth like these, we lose patience and tolerance for any errors, let alone “understandable” ones. But medicine is still catching up with us. Clinical errors still can, and do, create chains of miscues that prove fatal.

The quiet fact: While errors continue to crop up – at stubborn levels and rates — we know how to minimize them at the point of care. We know the “hacks” needed to produce much better data quality, and how to use those tools. At SyTrue, we use a comprehensive data platform so that diagnoses are done right, coded right, can be queried in “natural language” terms and can yield C-CDA care records that patients will take anywhere.

Nonetheless, data errors continue to get “spiraled” into the medical record and analytics trail. So when it’s time for analytics, there’s only so much that can be extracted as the “true record” of a patient meeting. By then, it’s too late. By that point, an inaccurate diagnosis and recording errors could well be compounded by the medications and treatment used.

Simple human error shadows many issues even with data missing from the picture. In the US, remember, we still see more than 2,300 annual wrong-site, wrong-patient operations (about 46 per week). These may be “understandable”—but acceptable? Add data to this kind of picture, and it’s a volatile mix.

The healthcare system is beginning to tackle the healthcare data issue with some pace. But put very simply, the data remain unreliable. We know US medicine has many core issues, so while data quality gets mentioned, it doesn’t attract follow-through. Healthcare, meanwhile, correctly still targets the triple aim (better care, better quality, lower cost). It responds to practical concerns: expensive drugs (Sovaldi) or new policies (ONC on interoperability). But data quality issues live on and may well escalate.

FOR INTEROPERABILITY to WORK, FIX the DATA ISSUES

Healthcare’s “Holy Grail” is interoperability. It’s been missing in action while getting lots of notice in planning. But with the ONC’s new urgency to achieve interoperability by 2017 – sooner than envisioned in 2013 – we’re seeing a tough road ahead. It may mean “mountain climbing” over many hills of unreliable data, just to get to a base camp near the top.

Douglas Fridsma, then ONC Chief Science Officer, quipped in 2013 that the US standard of interoperability is a “modem and a fax machine.”

What’s next: Our many proprietary US clinical documentation systems, each with data error levels that may not be thoroughly understood, may be asked to lead this vanguard to the “interoperability” summit. Let’s get the data issue right — before the path begins to look like a “bridge too far.” Why not fix the reliability of data and help all patients get better care?

You see two prominent terms in many healthcare headlines these days: “big data” and “smart data.”

They’re familiar, yet we don’t quite know what they mean. Let’s parse them to answer one question: How do we use data optimally and “smartly” to help providers and patients in healthcare settings?

DEFINING POINTS
Is all data easily “convertible” into “smart data”? Yes and no. Smart data requires “extrusion” and special steps, as we’ll see below.

Take the Apple Watch, which offers you lots of data. Does it “do” smart data? As a tool (debuting in 2015), it quizzes the wearer and, via sensors, monitors heart rate and steps walked. With other apps it may monitor blood glucose levels in diabetic kids, which puts it on a path to providing smart data.

A useful watch with big potential, but not a “smart data device.” Only when your GP can use your watch to track critical “diagnostics” and help you reshape your health behavior with its many “data streams” will we get smart data. Healthy behavior is key (forget for now how we get patients engaged in it). On its own, the watch won’t “score” on the smart data scale.

Google Glass in the ER

A better candidate: Google Glass, now being piloted in major hospitals and producing good results. As Beth Israel Deaconess CIO John Halamka reported earlier, it’s in use in BID’s Emergency Department, where it excels in data gathering at time-critical moments. Example: A patient with a profound brain bleed, on a blood thinner and with “some” allergy to blood pressure medications, needs immediate attention. The Google Glass Wearable Intelligence data feed lets physicians “see” the patient’s medication regimen quickly and surfaces his allergy and problem list, yet lets them keep eye contact and act in minutes. No logging in to EHR records, no extra phone calls.

So it “scores” on the “smart data” spectrum. It accesses and confirms vital clinical information, which drives crucial decisions in that ER moment. As CIO John Halamka sums it up in his blog: The tool offers “contextually-relevant data and decision support wisdom” – a fine starting point for characterizing smart data.

So Google Glass is a clear advance in “smart data,” allowing quick, richly sourced decision-making in critical moments. But there’s more to this picture.

“BIG DATA” versus “SMART DATA”
While media stories and marketing of big data continue, it’s clear we’ve entered a new phase, a time when “smart data” supersedes “big data” in healthcare.

Figure 1 – GARTNER on the “BIG DATA” Hype Cycle (July 2013): “Big data” has peaked in “inflated expectations”; it’s ready to plunge into the “trough of disillusionment.”


BIG DATA VOLUMES, MANY DATA SOURCES
Big data is everywhere, but “smart data” is harder to find. We need it because it provides vital “wisdom” for complex decision-making – in population health, in ways to enhance radiology results, in improving revenue cycle management, operations and workflow.

With smart data, we flex the right mental muscles and tackle tough questions. Example: Which hospital patient, due for discharge, is a candidate for readmission in 30 days? “Smart data” can answer this, saving the hospital significant ACA penalties in the process.

______________________________________________________________

WHERE SMART DATA CAN HELP …

Penalties for (within-30-day) hospital readmissions (by ACA law):

Average 2014 cost per penalized hospital: $102,022.

______________________________________________________________

We have new analytic tools to process all data, but many are ill-suited to healthcare’s needs. The volume of data alone is an issue; there’s also the variety of data to “extrude.” Google’s Eric Schmidt tells us that from early history onward, humans created (in total) 5 exabytes of data. But, he adds, “We produce five exabytes every two days [now]…and the pace is accelerating.” Even Google’s storage systems are challenged: They have 3 separate levels of complete backup – every single day.

So as data volumes grow exponentially, we need more “smart data” – context-rich, “decision”-driving information to help healthcare providers answer big questions.

But “processing” the expanding volume of data, and extruding the right “decision driving” wisdom – smart data — from it is the crucial hurdle.

Consider the many data sources you need to convert “regular” data into “smart data wisdom.” On average, a hospital may have 180 systems either monitoring or collecting information – some not linked to others.

If you’re a data scientist looking to extrude the smart data, you face Emergency Department records, physician lists and orders (admitting, attending), observation records, radiologist/lab reports, EKG reports, consent forms, progress notes, discharge notes, discharge summaries, continuity of care records, pathology reports, referral notes, registration forms, nursing and physician notes, EHR records (just a partial list of sources). These arrive in different codes – at “source data” level – and formats.

Interoperability (usefulness) of records from elsewhere is another layer of issues. And you have “unstructured” records (notes, transcriptions) offering extra problems.

This is a “dirty” data picture — until it is Parsed, Cleaned and Refined. That means identifying the data type and processing it to normalize it (despite the varied formats and native coding used). For smart data purposes, you need to make the information searchable in plain language (natural language processing steps), smart in spotting mistakes, and “decision”-ready for the many providers hungry for it. It’s a treasure trove, but at the outset it’s just a big bundle of “ready to clean” bits.
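The Parse, Clean and Refine steps above can be sketched as a tiny pipeline. The record shapes, field names and alias table below are invented for illustration; a real system would handle full clinical formats and coding systems.

```python
# A minimal sketch of "Parse, Clean, Refine": take heterogeneous source
# records and normalize them into one shape. All field names and record
# formats here are invented for illustration.

def parse(record: dict) -> dict:
    """Identify the record type from its source system."""
    record["type"] = record.get("source", "unknown").lower()
    return record

def clean(record: dict) -> dict:
    """Strip whitespace and standardize casing in text fields."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

def refine(record: dict) -> dict:
    """Map source-specific field names onto one normalized schema."""
    aliases = {"dx": "diagnosis", "diag": "diagnosis", "pt_name": "patient"}
    return {aliases.get(k, k): v for k, v in record.items()}

def pipeline(record: dict) -> dict:
    return refine(clean(parse(record)))

print(pipeline({"source": "ED", "dx": "  Cerebral Infarction ",
                "pt_name": "Doe, J."}))
```

Once every source (ED records, lab reports, progress notes and the rest) flows through such a normalization step, the output can be searched and queried in one consistent vocabulary – which is the precondition for “smart data.”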

STEPS to a SOLUTION
At SyTrue, we saw years ago that data variety, data size and interoperability would become dilemmas. We’ve designed our data “extrusion” process so that we can take any data and convert it into a consolidated CDA, making it searchable and usable (via semantic interoperability). Until clinical data is encoded at the point of care, we won’t have true interoperability through non-“smart” systems such as EMRs and EHRs.

For smart data purposes, EHRs lack the clinical intelligence wisdom that gives healthcare providers deep information on patients and patient groups. SyTrue’s smart data platform, in addition, actually “sits” on top of an EMR/EHR but below the health information exchange (HIE) a provider or hospital uses. Our platform offers interoperability with other systems and smart, “decision-ready wisdom” on demand.

NEXT in OUR BLOG SERIES: EXTRUDING SMART DATA

TO ANSWER MAJOR NEW HEALTHCARE QUESTIONS
