Social Networks, Big Data, Data Quality and You!

Not only is social data Big Data, it's Quality Data, too!

For the better part of the last two decades I have advised my clients and employers to push hard on their suppliers to adopt either EDI (Electronic Data Interchange) or, in certain more recent cases, XML, instead of trading by phone, fax, email or snail mail.  The costs, both in time and in bad data, are too great to continue executing business transactions in a completely manual way.  Years ago, when the web first made it viable, I also started recommending that when suppliers would not trade electronically, companies should push the use of web portals so my clients could off-load the data entry, and the responsibility for bad data, onto their suppliers.  Unfortunately, many companies (not my clients, mind you) abandoned electronic trading altogether when their IT shops found supporting portals easier than supporting electronic trading…but that is a completely different story.

The key here is the idea of having your supplier handle the entry and validation of data.  They have a vested interest in making sure the invoice information they provide you is complete and accurate, or they might not get paid.  In the same way, I see the social networks, and other companies that are aggregating information on all their users, leveraging the same model…but it is even better for them.

Every one of us must fill out web forms when signing up for almost any service online.  The information we share includes name, address, email address, age and many other personal tidbits.  That information becomes the basis of the “consumer as a product” business model these companies have adopted, where our virtual selves are sold to the highest bidder, either directly for cash or indirectly through targeted advertising and other routes.  In a March 11th column for the Huffington Post, former Federal Trade Commissioner Pamela Jones Harbour wrote, “To Google, users of its products are not consumers, they are the product.”

Imagine having a “self-building” product.  It is self-aware and concerned about itself, so it naturally wants to make sure the information about it is accurate, and on a daily basis it makes itself better.  From a data quality standpoint, there are few better ways to ensure your data is accurate: make sure the products (er, those that enter the data) have a vested interest in it.  But the data quality aspect of social data is even better!  For the most part, the data collected from online searches, click-throughs, web browsing and everything else you do online, whether collected overtly or covertly, will be accurate; it is nearly impossible for it not to be.

Social data, then, is a data manager's dream.  The real challenge is what to do with all of it.  With few data quality issues, data managers responsible for social data are left to work with their business counterparts to figure out the best ways to exploit the data they collect.  Which leads me to an old quote from Google's ex-CEO Eric Schmidt, speaking with The Atlantic: “Google policy is to get right up to the creepy line and not cross it.”  With Chloe Albanesius at PCMag.com reporting that at least one recent company defector thinks Google+ has ruined Google, perhaps they've stuck their nose just across that line after all.  Quality Big Data can be a scary thing.

All I can say is “Stay behind the creepy line, please!”


Bad Data – When to Let it Go?

My friend Cliff recently approached me with a problem.  His organization has tasked him and his team with analyzing, amongst other things, the depth of their bad data problems in advance of replacing their financial systems.  Initial indications are that their data is not just bad, it is very, very bad.  His question?  When is it ok to leave the data behind and not port it over to the new system?  When should he just “let it go”?

In looking at his problem, it is obvious that many issues stem from decisions made long before the current financial system was implemented.  In fact, at first glance, it looks like decisions made as long as 20 years ago may be impacting the current system and threatening to render the new system useless from the get-go.  If you've been around long enough, you know that storage used to be very costly, so fields were sized to just the right length for then-current use.  In Cliff's case, certain practices were adopted that saved space at the time but led, with the help of additional bad practices, to worse problems later on.

When we sat down to look at the issues, some didn't look quite as bad as they initially appeared.  In fact, certain scenarios can be fixed in a properly managed migration.  For example, for some reason bank routing numbers are stored in an integer field.  This means that leading zeros are dropped.  To work around this, scripts have been written to take the leading zeros and attach them to the end of the routing number before storing it in the field.  Though I haven't seen it, I have to assume that any downstream use of that field includes the reverse procedure to recreate a legitimate bank routing number.  Of course, when a real bank routing number and a re-combined number end up being the same, there are problems.  He hasn't yet determined whether this is actually happening.  If not, then migrating this data should be relatively easy.
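To see why the collision risk matters, here is a minimal sketch of what that workaround might look like.  The function names are mine, and I am assuming standard 9-digit ABA routing numbers; Cliff's actual scripts may differ.

```python
def encode_routing(routing: str) -> int:
    """Mimic the legacy workaround: move any leading zeros to the
    end so the 9-digit routing number survives an integer field."""
    stripped = routing.lstrip("0")
    zeros = "0" * (len(routing) - len(stripped))
    return int(stripped + zeros)

def decode_routing(stored: int) -> str:
    """Reverse the workaround: move trailing zeros back to the front.
    This is where collisions bite -- a real routing number that
    legitimately ends in zeros decodes to the wrong value."""
    s = str(stored)
    stripped = s.rstrip("0")
    zeros = "0" * (len(s) - len(stripped))
    return zeros + stripped

# Round-trips cleanly when the real number has no trailing zeros:
print(decode_routing(encode_routing("011000015")))  # 011000015

# But a real routing number ending in zero stores identically to a
# rotated one -- the two become indistinguishable in the database:
print(encode_routing("011000015"))  # 110000150
print(encode_routing("110000150"))  # 110000150
```

If no real routing numbers in Cliff's data end in zero, the decode is unambiguous and the migration can simply run `decode_routing` once and store the result in a proper text field.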

Another example is the long-ago decision to limit shipping reference numbers to 10 digits.  There are two challenges here.  The first is that many shipping numbers they generate or receive today are 13 digits.  The second is that they generate many small package shipments in sequence, so the last 3 digits often really, really matter.  When reference numbers grew beyond the 10 digits the original programmers thought would be enough, a bright soul decided to take a rarely used 3-digit reference field and use it to store the remaining digits.  Probably not a bad problem when they had few significant sequential shipments.  However, since the current system has no way to natively report on the two fields in combination, and for some reason no one was able to write a report to do so, every shipment must be manually tracked or referenced with special look-ups each time they need to check on one.  Once again, this problem should probably be fixable by combining the fields when migrating the data to the new system, although certain dependencies may make it a bit more difficult.
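A migration-time merge of the two fields might look something like the sketch below.  The field names are hypothetical, and I am assuming the repurposed field holds the trailing three digits of the longer references; the real layout would need to be confirmed before migrating.

```python
def merge_shipping_ref(ref10: str, overflow: str) -> str:
    """Hypothetical migration helper: rebuild a full shipping
    reference from the legacy 10-digit field plus the repurposed
    3-digit field that held the overflow digits."""
    ref10 = ref10.strip()
    overflow = overflow.strip()
    if overflow:
        # Assumes the extra (trailing) digits landed in the
        # repurposed field, zero-padded to three characters so
        # sequential shipment numbers stay distinct.
        return ref10 + overflow.zfill(3)
    return ref10  # older, genuinely 10-digit references pass through

print(merge_shipping_ref("1234567890", "42"))  # 1234567890042
print(merge_shipping_ref("9876543210", ""))    # 9876543210
```

Run once during migration, this would let the new system store each reference in a single properly sized field and report on it directly, eliminating the manual look-ups.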

Unfortunately, there are so many manual aspects to the current system that 10 years of fat-fingered data entry have led to a kind of data neuropathy – the data is so damaged that the organization has trouble sensing it.  This numbness becomes organizationally painful in day-to-day functioning.

Early on in my data quality life, another friend told me “Missing data is easy. Bad data is hard.”  He meant that if the data was missing, you knew what you had to do to fix it.  You had to find the data or accept it was not going to be there.  But bad data?  Heck, anything you look at could be bad – and how would you know?  That’s difficult.

So, this is Cliff's challenge.  The data isn't missing; it is bad.  But how bad?  The two scenarios above were the easy fixes, or could be.  The rest of the system is just as messed up or worse.  Being a financial system, it is hard to imagine getting rid of anything.  Yet bringing anything forward could corrupt the entire new system.  And trying to clean all the data looks to be an impossible task.  Another friend was familiar with an organization that recently faced a similar situation.  For them, the answer was to keep the old system running as long as needed for reference purposes – it will be around for years to come.  They stood up the new system and migrated only a limited, select, known-good set of data to help jump-start it.

This approach sounds reasonable on the surface, but there may be years of manual cross-referencing between the systems – and the users of the old system will need to be suspicious of everything they see in that system.  Still, they have a new, pristine system that, with the right data firewalls, may stay relatively clean for years to come.  How did they know when enough was enough?  How did they know when to let their bad data go?

I’d love to see your thoughts and experiences in this area.