A Crisis of Data Confidence


In the financial services industry, data has become the linchpin of success. Financial data is exchanged with customers, business partners, exchanges, and trading partners. And every party expects that exchange to happen immediately, seamlessly, and flawlessly.

But if a firm makes a mistake in any part of the data lifecycle (gathering, storing, processing, sharing, or analyzing), it faces customer and business partner ire (or, worse, flight), brand-reputation damage, higher operational costs, and even fines.

Indeed, the Data Quality Institute estimates that data quality problems cost US businesses more than $600 billion per year, and financial institutions are by no means exempt. In fact, the industry’s massive investment in advanced data management and analytics has produced more of a blind trust in what goes into and comes out of these tools than a consistent track record of data quality.

And the truth is, even a focus on data quality is no longer enough for regulators, auditors, investors, and other stakeholders. Organizations need a combination of data quality and data confidence.

In our experience, the majority of financial services institutions are not ready to deliver on both data quality and data confidence. Most are falling behind or running in place in the race for data leadership. Many simply react to issues as they arise, and although some are implementing limited governance and delivering data, they still are not truly in control of their data.

Ensuring data quality

Establishing data quality is the first step in moving from uncertainty to confidence, and it’s best approached by grouping quality tests into several categories.

Completeness. One cannot perform more complicated data quality tests if the data isn’t there to test. Completeness testing can be applied to a narrow set of data, such as a department database, or to enterprise-level systems.
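As a minimal sketch of this idea (Python, with purely illustrative column names), the snippet below measures how fully each required field is populated before any deeper tests run:

```python
import pandas as pd

# Hypothetical customer extract; the column names are illustrative only.
accounts = pd.DataFrame({
    "account_id": ["A-100", "A-101", "A-102"],
    "customer_name": ["J. Smith", None, "L. Chen"],
    "postal_code": ["94105", "10001", None],
})

REQUIRED_FIELDS = ["account_id", "customer_name", "postal_code"]

def completeness_report(df: pd.DataFrame, required: list[str]) -> pd.Series:
    """Share of non-missing values per required field (1.0 = fully populated)."""
    return df[required].notna().mean()

print(completeness_report(accounts, REQUIRED_FIELDS))
```

The same check scales from a single department table to enterprise-level extracts; only the list of required fields changes.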

Validity. Validity tests look at the data element itself. These include tests for type, range, and reasonableness, as well as validation against lists of valid values. Like completeness tests, they can be accomplished without crossing multiple interfaces.
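The sketch below illustrates element-level validity tests under assumed business rules; the field names, the future-date rule, and the list of valid account types are hypothetical:

```python
from datetime import date

# Hypothetical list of valid values for the account_type field.
VALID_ACCOUNT_TYPES = {"DEPOSIT", "CREDIT_CARD", "AUTO_LOAN"}

def validity_errors(record: dict) -> list[str]:
    """Return the validity problems found in a single record."""
    errors = []

    # Type test: balance must be numeric.
    if not isinstance(record.get("balance"), (int, float)):
        errors.append("balance is not numeric")

    # Range/reasonableness test: an opening date in the future is suspect.
    opened = record.get("opened_on")
    if isinstance(opened, date) and opened > date.today():
        errors.append("opened_on is in the future")

    # Valid-values test: account type must come from the approved list.
    if record.get("account_type") not in VALID_ACCOUNT_TYPES:
        errors.append("account_type is not in the list of valid values")

    return errors

print(validity_errors({"balance": "oops", "opened_on": date(2030, 1, 1),
                       "account_type": "DEPOSIT"}))
```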

Integrity. Integrity tests look at an element’s relationship to other elements from two perspectives. Consistency looks across multiple instances of the same element. If I have a deposit account, a credit card, and an auto loan with the same bank, and I change my address, I expect to tell the bank about the change once and see it reflected in all my accounts. Context looks at an element’s relationship to other elements to confirm that the context they establish makes sense. For example, if my address shows San Francisco, California, the country should automatically register as US. Context testing also looks at temporal context to ensure the elements being combined and aggregated are built from data representative of a consistent period.
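Both perspectives can be expressed as simple cross-record checks. The sketch below uses made-up snapshots of one customer from three account systems: a consistency test flags the address that doesn’t match, and a context test flags a California address paired with a non-US country code:

```python
# Hypothetical snapshots of the same customer from three account systems.
records = [
    {"system": "deposits",     "address": "1 Main St, San Francisco, CA", "country": "US"},
    {"system": "credit_cards", "address": "1 Main St, San Francisco, CA", "country": "US"},
    {"system": "auto_loans",   "address": "9 Old Rd, San Francisco, CA",  "country": "CA"},
]

def consistency_issues(recs):
    """Flag when multiple instances of the same element disagree."""
    addresses = {r["address"] for r in recs}
    return [] if len(addresses) == 1 else [f"address differs across systems: {sorted(addresses)}"]

def context_issues(recs):
    """Flag when related elements don't make sense together."""
    issues = []
    for r in recs:
        if "San Francisco, CA" in r["address"] and r["country"] != "US":
            issues.append(f"{r['system']}: California address but country={r['country']}")
    return issues

print(consistency_issues(records))
print(context_issues(records))
```

A temporal-context test follows the same pattern: before combining or aggregating elements, confirm that each carries an as-of date from the same reporting period.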

Accuracy. It’s important to realize that accuracy depends on more than just the quality of the data. For example, someone might apply for a credit card and provide data that is complete and valid, but drawn from a stolen identity. Banks must know their customers and verify that the information customers provide is accurate and appropriate.
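A record can pass every completeness and validity test and still be wrong, which is why accuracy checks reach outside the data itself. The sketch below assumes a hypothetical, stubbed identity-verification lookup simply to show where such a corroboration step would sit:

```python
def identity_verified(name: str, ssn: str) -> bool:
    """Stub for an external KYC/identity-verification lookup (hypothetical).

    A real implementation would call a bureau or vendor service; the stub
    returns False to illustrate a complete, valid, but inaccurate application.
    """
    return False

application = {"name": "Jane Doe", "ssn": "123-45-6789"}  # complete and valid on its face

if not identity_verified(application["name"], application["ssn"]):
    print("Accuracy check failed: applicant identity could not be corroborated")
```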

Reaching a state of confidence

You’d think that by the time an organization established complete, valid, accurate data with high integrity, its goal would be achieved. But data quality is not enough; organizations must also overcome the barriers of confirmation bias and quality erosion.

Confirmation bias is the tendency to seek out information that confirms a person’s beliefs or ideals. In other words, once customers, auditors, or managers believe the data is bad, it’s hard to get them to change their minds. Many senior executives started their careers when computers merely crunched accounting data. Back then, they relied on their instincts and, for many, that approach served them well. Countless presentations to management open with an analyst saying, “I know you believe the situation to be X, but the data suggests that it is in fact Y” and end with “Thanks, but no thanks.” Some beliefs are driven by first-hand dealings with faulty data. The memory of bad decisions caused by bad data drives managers to resist logic and fall back on instinct.

In addition, the fact that data quality was established at a certain point is not sufficient reason to trust that those levels will continue. Turnover of key personnel, changing business priorities, and the coming and going of applications can inhibit sustained data quality.

Once quality is established, the challenge is to convince leaders, auditors, regulators, investors, and other stakeholders that data quality has not only been established, but that the level is sustainable and reliable. Organizations that embrace the data revolution and work to build confidence will become industry leaders, transforming customers into loyal followers.

Post Date: 10/16/2015

Michael Goodman
