Data has always been an asset as valuable as cash in the banking industry. Technology and the emergence of big data have made it easier for financial institutions to use the volumes of information at their disposal in new ways.
Financial institutions use data to drive a number of key decision-making processes, from whom to merge with or acquire down to how to market their branches to new customers. In fact, it seems there’s nothing banks can’t do as long as they have the right, “true” data in hand.
WHAT OSCAR WILDE KNEW THAT BANKS DON'T
Oscar Wilde famously said, “The truth is rarely pure and never simple.” Oscar wouldn’t have known big data from a big biscuit, but he perfectly summarized the core problem in relying on data as the absolute truth that drives your decision-making. Dazzled by technology, people have come to believe that if there’s data that says something it must be of forensic quality — it must be “the truth!”
That mindset stems from our use of language. In English, the subject of a sentence is a concrete noun: it changes only when a verb acts on it, and otherwise the thing itself stays the same. We assume data behaves the same way, but it doesn’t.
Here’s an example: Your doctor asks you your weight, and you fixate on a single number. However, every time you drink a glass of water, your weight increases, and when you step on the scale in the morning, the number it shows is less than it was when you went to bed the night before. The “truth” of your weight data is that it’s not a single number; it’s the entire range from immediately after you drink that water to the number the scale shows you at the end of a day of big meals and snacking.
Let’s take the example a step further. As you read this, you might be sitting at your desk in your office. You think you’re in a single location that isn’t moving. However, the Earth is spinning at roughly 1,000 mph and orbiting the sun at 67,000 mph. You’re actually flying at extreme speeds at all times.
The truth is, everything fluctuates, but our mode of thinking is to go back and embrace the certainty of the noun, where everything is static. It’s just too complicated to imagine that things are always changing, so we cling to the familiar.
MEASURABLE UNCERTAINTY
In scientific research, every number published is required to have a measured uncertainty associated with it. You’re not allowed to report a number unless you can also report how wrong that number might be! What you’re left with is an estimation, and some estimates are better than others.
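To make the earlier weight example concrete, here is a minimal sketch, in Python with made-up weigh-in numbers, of what reporting a measurement together with its uncertainty looks like: instead of a single number, you report an estimate of the mean and a standard error that says how wrong that estimate might be.

```python
import statistics

# Hypothetical weigh-ins (lbs) taken at different times over a few days
weigh_ins = [182.4, 183.1, 181.8, 182.9, 183.6, 182.2]

mean = statistics.mean(weigh_ins)
stdev = statistics.stdev(weigh_ins)            # sample standard deviation
std_error = stdev / len(weigh_ins) ** 0.5      # uncertainty of the mean

# Report the estimate along with how wrong it might be
print(f"Weight: {mean:.1f} +/- {std_error:.1f} lbs")
```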
Credit scores are a great example of measurable uncertainty and fluctuation of data. If you have three credit cards with a $15,000 balance on each, and you get a home equity loan for $50,000 to pay off that $45,000 of debt, you’ve still got $45,000 in debt, but your FICO score just went up! You substituted volatile, short-term debt with stable long-term debt.
The problem is, you haven’t changed. You’re likely going to go use those newly zeroed out credit cards again. You’re probably just as risky as you were yesterday, but the credit scoring model doesn’t think so.
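For illustration only: the details of credit scoring models are proprietary, but a common simplification is that revolving utilization (card balances divided by card limits) weighs heavily on a score while installment debt weighs far less. Under that assumed weighting, this hypothetical sketch shows why the score can jump even though the debt hasn’t gone away.

```python
# A toy illustration of why a score can jump even though the debt hasn't gone away.
# Assumption (simplified and hypothetical): the scoring model penalizes revolving
# utilization (card balances divided by card limits) far more than installment debt.

card_limits = [20_000, 20_000, 20_000]     # hypothetical credit limits
cards_before = [15_000, 15_000, 15_000]    # three cards with $15,000 balances each
cards_after = [0, 0, 0]                    # zeroed out by the home equity loan
installment_debt = 45_000                  # portion of the $50,000 loan used to pay them off

def revolving_utilization(balances, limits):
    return sum(balances) / sum(limits)

print(f"Total debt before: ${sum(cards_before):,} revolving")
print(f"Total debt after:  ${installment_debt:,} installment")
print(f"Revolving utilization: {revolving_utilization(cards_before, card_limits):.0%} "
      f"-> {revolving_utilization(cards_after, card_limits):.0%}")
```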
Believing in the absolute authenticity of data becomes counterproductive when you fail to account for fluctuations and uncertainty. If you operate a lawn maintenance business in Michigan, you had better have a financial plan for winter. If you run a bank, you had better have an alternative revenue stream lined up for February, March, and April, when people don’t take out loans because of tax season. If we weren’t careful about marketing uncollateralized, short-term, small installment loans throughout those months, our clients would be in the red. If we didn’t show them we already understand the natural fluctuations in their bottom line, we wouldn’t have credibility.
ACKNOWLEDGING UNCERTAINTY TO BUILD CREDIBILITY
“It ain’t what you don’t know that gets you into trouble,” Mark Twain said. “It’s what you know for sure that just ain’t so.” In the banking world, we run into trouble when we assume our data is “for sure,” without any chance of it ever fluctuating.
Instead of “the truth,” our goal should be credibility. By accounting for uncertainty, we achieve credibility by demonstrating the reasoning behind a prediction and setting expectations accordingly. Then the new data resulting from the action plan shows that those expectations were met.
The truth is never black and white, and neither is data. Credibility comes when you can set aside the belief in static data, demonstrate and account for the fluctuations that occur, and create a plan for how you will manipulate that data to achieve the desired objective. The truth is your directionality: your ability to change the trend and skew the fluctuations. Sometimes you trim the fat, and sometimes you just tilt the pinball machine a little bit.
BETTER DATA USE IN ACTION
Here’s an example of how the fluctuating nature of data can affect the outcome of an initiative. Say Deluxe has a small home equity loan upstart client who wants to test a small marketing mailing with about 5,000 people. The client wants to issue the mailing in March, which is not an optimum time because of tax returns.
We do everything we can to help make it successful, even recommending the client space the test out over three weeks. Instead, the client opts for a single mailing. With a sample of only 5,000 test subjects, the client may earn 15 loans from the mailing. Or, they could make 25. The relative difference between 15 and 25 is more than 60 percent — not at all a stable measurement. In this industry, we need large numbers to achieve some level of stability.
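To see how unstable those numbers are, here is a rough sketch, assuming a simple normal approximation to the binomial (not any model we would actually use in production), of the 95 percent confidence interval around each outcome. The two intervals overlap substantially, which is another way of saying that 15 versus 25 loans out of 5,000 pieces is mostly noise.

```python
import math

def response_rate_interval(responses, mailed, z=1.96):
    """Approximate 95% confidence interval for a mailing's response rate."""
    p = responses / mailed
    margin = z * math.sqrt(p * (1 - p) / mailed)
    return p - margin, p + margin

for responses in (15, 25):
    low, high = response_rate_interval(responses, 5_000)
    print(f"{responses} loans from 5,000 pieces: "
          f"{responses / 5_000:.2%} (roughly {low:.2%} to {high:.2%})")
```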
Further, outside factors can influence results. For example, terrorist events in France severely affected a few of our staple campaigns in February 2016.
Every time you use data to drive decision-making, it’s like the first time because all data is historical. It’s from yesterday, last month, last year.
For example, interest rates in the housing market just started rising in November. We haven’t had high interest rates since 2009, and digital platforms and mobile devices have proliferated since then. Digital advertising has evolved markedly in the last eight years. How do you guarantee direct mail success in a high-interest-rate environment when we haven’t seen one in eight years, and the tool we’re using didn’t exist in its current form back then?
Campbell’s Law says: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
It’s a bit academic, but it’s the epitome of what can go wrong when solely using quantitative measurements to justify decision-making.