Old Tech = Unhappy Employees

Are you losing employees because your technology is old and outdated? Wherever you are in the world, whatever your industry, technology laggards are more likely to have frustrated staff who feel hamstrung by their tools. Do you think those employees are as productive as they can be? I recently wrote this entry for a partner’s website. It is based on research that my current employer, Unisys, released in 2018.

Case management platforms, workflow, and process automation tools can help your organization turn that around quickly. Similarly, development and operations approaches such as Agile and DevOps can help you build and sustain custom solutions more effectively, with improved security and higher code quality (fewer defects in deployed code means less rework and less impact on the user base). Combining the two can further expedite your modernization efforts and often reduce the cost of change.

When undertaking technology improvements, consider modernizing your business technology to extend your applications’ reach to smartphones and other mobile devices. This can help ensure security and governance where it otherwise might not exist – a potential risk for your organization and its data. Finally, automating your business processes, or modernizing those that are already automated, can also improve data quality through reduced manual data entry and new data management tools.

 

3D Printing, IP, Supply Chains and the Environment

I know the title sets up a long and potentially cumbersome discussion, but I’m not planning on tackling everything here…just some food for thought and discussion.

I don’t know about you, but I’m really excited about the idea of 3D printing. While I can’t yet say “Tea, Earl Grey, hot” and expect such a cup to be ready for me quickly, I like the idea of being able to print something useful when needed. If nothing else it seems like a great toy – though I fully understand it is far beyond that stage already. Reading this Wired article about Solidoodle made me realize how far it has come – $500 personal 3D printers are here! My first two thoughts were 1) what is this going to do to global supply chains for basic, reusable products, and 2) what are the Intellectual Property (IP), patent, and trademark issues when someone takes an image off the web and prints it?

Let’s start with the second question. We have seen vigorous defenses of copyrights for entertainment titles – music and movies, for instance – against groups that would otherwise circumvent those copyrights. And these battles have been fought in a variety of settings in which titles might be replicated, duplicated, shared, and backed up. What happens with 3D printing will be very interesting to see. In the Solidoodle (I just love that name!) article they discuss a 3D Yoda model that they printed. What does Lucasfilm think of this, I wonder? On a more mundane note, is the bottle opener they print in 15 minutes using $.15 worth of material something of their own design, or did they copy it from something they purchased? I see lawyers having a field day in the 3D printing realm – if they could just know each individual who actually printed something based on someone else’s IP.

Given our litigious society, I think the following scenario might even play out. Since who downloaded what can already be tracked today, perhaps 3D printing applications will need to do the same and report back. Or, if downloading alone isn’t considered an issue, perhaps the $500 printer will need an additional piece of software that shares information about anything it prints with an IP database someplace. The image can be compared against images of known copyrighted, patented, or trademarked work, and the users of such technology can then be billed for the use of the IP. I feel slimy just thinking about this, but it strikes me that there will be a huge battle in this arena in the not-too-distant future. For a good, less sinister read on this whole area, try 3D Printing and Intellectual Property by Max Maheu.

The other thing that strikes me is that, extrapolated out many years and surviving the IP battles, I could see supply chains for mundane replicable goods being replaced by supply chains for 3D printer supplies. Need to replace your bottle opener? Just print it. Run out of filament printing stock? Just order more. Don’t have a printer? Go to the local library and print it there. What this means is that the extended supply chains for replicable goods may shrink significantly. Your home office will become a mini-manufacturing plant for mundane items, and the costs – if the $.15 bottle opener is any indication – will be relatively low.

If the extended supply chain for these products from China and various developing nations significantly contracts, those production resources will be freed up to focus on other more complex products.  In many ways this seems to be a good thing as it will bring efficiency to manufacturing replicable items (and perhaps more complex things down the road), reduce environmental costs associated with transportation of all those products, and help focus factories on what could be more high margin products.

Still, on the environmental front, the filament itself has to be shipped, so there are transportation costs there too. It might be appropriate to ask where the filament is made and shipped from. We might also ask about the non-supply-chain environmental impact of these 3D printers and their filaments. What is the makeup of the filament used in this production process, and what are the up-front and long-term environmental and human consequences of these materials and the products made from them? And, as Joris Peels ponders, will the availability of cheap, quickly printed goods lead to a preponderance of “impulse 3D prints,” where products become throw-away items and thus create considerable waste?

I am excited about the technology but cautiously optimistic about the environmental aspects. Regarding supply chains, I think local production for appropriate products can be a very useful thing: it can reduce dependence on foreign manufacturing and brittle supply chains, and perhaps spur new ideas – and products – like we’ve never seen before as consumers become their own designers and entrepreneurs, printing and honing their own ideas. And this latter idea – the use of 3D printing for creativity, not just replication – is the thing I think holds the most promise from this new technology. I am hopeful.

 

Social Networks, Big Data, Data Quality and You!

Not only is social data Big Data, it’s Quality Data, too!

For the better part of the last two decades I have advised my clients and employers to push hard on their suppliers to adopt either EDI (Electronic Data Interchange) or, in certain more recent cases, XML, instead of trading via phone, fax, email, or snail mail. The costs – both in time and in bad data – are too great to continue executing business transactions in a completely manual way. I remember years ago, when the web first made it viable, I also started recommending that when suppliers would not trade electronically, companies should stand up web portals so they could off-load the data entry – and the responsibility for bad data – onto those suppliers. Unfortunately, many companies (not my clients, mind you) abandoned electronic trading altogether when their IT shops found supporting portals easier than supporting electronic trading…but that is a completely different story.

The key here is the idea of having your supplier handle the entry and validation of data. They have a vested interest in making sure the invoice information they provide you is complete and accurate or they might not get paid. In the same way, I see the social networks, and other companies that are aggregating information on all their users, leveraging the same model…but it is even better for them.

Every one of us must fill out web forms when signing up for almost any service online. The information we share includes name, address, email address, age, and many other personal tidbits. This information becomes the basis of the “consumer as a product” business these companies have adopted, where our virtual selves are sold to the highest bidder – either directly for cash or indirectly through targeted advertising and similar routes. In a March 11th column for the Huffington Post, former Federal Trade Commissioner Pamela Jones Harbour wrote, “To Google, users of its products are not consumers, they are the product.”

Imagine having a “self-building” product. It is self-aware and concerned about itself, so it naturally wants to make sure the information about it is accurate, and on a daily basis it makes itself better. From a data quality standpoint, there are few better ways to make sure your data is accurate: make sure the products – er, those that enter the data – have a vested interest in it. But the data quality aspect of social data is even better! For the most part, the data collected from online searches, click-throughs, web browsing, and everything else you do online – whether collected overtly or covertly – will pretty much be accurate; it is nearly impossible for it not to be.

Social data, then, is a data manager’s dream. The real challenge is what to do with all of it. With few data quality issues, data managers responsible for social data are left to work with their business counterparts to figure out the best ways to exploit the data they collect. Which leads me to an old quote from Google’s ex-CEO Eric Schmidt when speaking with The Atlantic: “Google policy is to get right up to the creepy line and not cross it.” With Chloe Albanesius at PCMag.com reporting that at least one recent company defector thinks Google+ has ruined Google, perhaps they’ve stuck their nose just across that line after all. Quality Big Data can be a scary thing.

All I can say is “Stay behind the creepy line, please!”

Bad Data – When to Let it Go?

My friend Cliff recently approached me with a problem.  His organization has tasked him and his team with analyzing, amongst other things, the depth of their bad data problems in advance of replacing their financial systems.  Initial indications are that their data is not just bad, it is very, very bad.  His question?  When is it ok to leave the data behind and not port it over to the new system?  When should he just “let it go”?

In looking at his problem, it is obvious that many issues stem from decisions made long before the current financial system was implemented. In fact, at first glance, it looks like decisions made as long as 20 years ago may be impacting the current system and threaten to render the new system useless from the get-go. If you’ve been around long enough, you know that storage used to be very costly, so fields were sized to just the right length for then-current use. In Cliff’s case, certain practices were adopted that saved space at the time but led to – with the help of additional bad practices – worse problems later on.

When we sat down to look at the issues, some didn’t look quite as bad as they initially appeared. In fact, certain scenarios can be fixed in a properly managed migration. For example, for some reason bank routing numbers are stored in an integer field. This means that leading zeros are dropped. To work around this, scripts were written to take the leading zeros and attach them to the end of the routing number before storing it in the field. Though I haven’t seen it, I’ve got to assume that any downstream use of that field includes the reverse procedure in order to recreate a legitimate bank routing number. Of course, when a real bank routing number and a re-combined number end up being the same, there are problems. He hasn’t yet identified whether this is happening. If not, then migrating this data should be relatively easy.
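To make the workaround concrete, here is a minimal sketch of what I assume those scripts do and why collisions are possible. The function names and the exact rotate-the-zeros logic are my guesses, not Cliff’s actual code:

def encode_legacy(routing: str) -> int:
    # Mimic the legacy workaround: move leading zeros to the end so the
    # 9-digit routing number survives storage in an integer column.
    stripped = routing.lstrip("0")
    moved = len(routing) - len(stripped)
    return int(stripped + "0" * moved)

def decode_legacy(stored: int, length: int = 9) -> str:
    # Reverse the workaround for downstream use: move trailing zeros back
    # to the front and pad to the standard 9-digit length.
    digits = str(stored)
    stripped = digits.rstrip("0")
    moved = len(digits) - len(stripped)
    return ("0" * moved + stripped).zfill(length)

print(encode_legacy("011000015"))                  # 110000150
print(decode_legacy(110000150))                    # "011000015" - round-trips correctly
# The collision: a genuine routing number that really ends in zero stores
# to the same integer and decodes to the wrong value.
print(decode_legacy(encode_legacy("110000150")))   # "011000015", not "110000150"

A migration script could at least flag the ambiguous cases – stored values ending in zero – for manual review, since the encoding alone cannot distinguish them.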

Another example is the long-ago decision to limit shipping reference numbers to 10 digits. There are two challenges with this one. The first is that many shipping numbers they generate or receive today are 13 digits long. The second is that they generate many small-package shipments in sequence, so the last 3 digits often really, really matter. When reference numbers grew beyond the 10 digits the original programmers thought would be enough, a bright soul decided to take a rarely used 3-digit reference field and use it to store the remaining digits. Probably not a bad workaround when they had few significant sequential shipments. However, since the current system has no way to natively report on the two fields in combination – and for some reason no one was able to write a report to do so – every shipment must be manually tracked or referenced with special look-ups each time they need to check on one. Once again, this problem should be fixable by combining the fields when migrating the data to the new system, although certain dependencies may make it a bit more difficult.
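Assuming the repurposed 3-digit field holds the trailing digits of the longer references (the field names and layout below are hypothetical), the recombination step in the migration could look something like this:

def combine_shipping_ref(primary: str, overflow: str) -> str:
    # Rebuild the full reference from the legacy 10-digit field plus the
    # repurposed 3-digit overflow field (assumed to hold the trailing digits).
    primary = primary.strip()
    overflow = overflow.strip()
    if overflow:
        return primary + overflow.zfill(3)   # 10 + 3 digits = 13-digit reference
    return primary                           # older, genuinely 10-digit reference

# Illustrative legacy rows (field names are made up):
legacy_rows = [
    {"ship_ref": "4512000873", "alt_ref": "41"},   # modern 13-digit number, overflow padded to "041"
    {"ship_ref": "8800231945", "alt_ref": ""},     # older 10-digit number
]
for row in legacy_rows:
    print(combine_shipping_ref(row["ship_ref"], row["alt_ref"]))   # 4512000873041, then 8800231945

The dependencies mentioned above – any report or interface that still reads the two fields separately – would need to be repointed at the combined value, which is where the extra difficulty comes in.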

Unfortunately, there are so many manual aspects to the current system that 10 years of fat-fingered data entry have led to a kind of data neuropathy – the data is so damaged that the organization has trouble sensing it. This numbness becomes organizationally painful in day-to-day functioning.

Early on in my data quality life, another friend told me “Missing data is easy. Bad data is hard.”  He meant that if the data was missing, you knew what you had to do to fix it.  You had to find the data or accept it was not going to be there.  But bad data?  Heck, anything you look at could be bad – and how would you know?  That’s difficult.

So, this is Cliff’s challenge. The data isn’t missing, it is bad. But how bad? The two scenarios above were the easy fixes – or could be. The rest of the system is just as messed up or worse. Being a financial system, it is hard to imagine getting rid of anything. Yet bringing anything forward could corrupt the entire new system. And trying to clean all the data looks to be an impossible task. Another friend was familiar with an organization that recently faced a similar situation. For them, the answer was to keep the old system running as long as they needed it for reference purposes – it will be around for years to come. They stood up the new system and migrated only a limited, select, known-good set of data to help jump-start it.

This approach sounds reasonable on the surface, but there may be years of manual cross-referencing between the systems – and the users of the old system will need to be suspicious of everything they see in that system.  Still, they have a new, pristine system that, with the right data firewalls, may stay relatively clean for years to come.  How did they know when enough was enough?  How did they know when to let their bad data go?
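For what it’s worth, the “migrate only known-good data” approach can be as simple as an explicit validation gate in the migration job. Here’s a rough sketch, with made-up field names and rules standing in for whatever checks the organization actually trusts:

import re

def is_known_good(record: dict) -> bool:
    # Only records passing every explicit rule are migrated; everything else
    # stays behind in the old system for reference and manual look-ups.
    checks = [
        bool(re.fullmatch(r"\d{9}", record.get("routing_number", ""))),  # well-formed routing number
        record.get("amount") is not None and record["amount"] >= 0,      # non-negative amount
        bool(record.get("vendor_id", "").strip()),                       # vendor reference present
    ]
    return all(checks)

legacy = [
    {"routing_number": "011000015", "amount": 125.00, "vendor_id": "V-1001"},
    {"routing_number": "110000150", "amount": -3.50,  "vendor_id": ""},
]
to_migrate  = [r for r in legacy if is_known_good(r)]
left_behind = [r for r in legacy if not is_known_good(r)]
print(len(to_migrate), len(left_behind))   # 1 1

The same rules can then double as the “data firewall”: anything entering the new system after go-live has to pass them too.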

I’d love to see your thoughts and experiences in this area.

Data Quality and “Damn You, Auto Correct!”

This past week a Georgia high school student sent a text message that was supposed to read “gunna be at west hall today”. Unfortunately, the auto-correct feature on his smartphone changed the first word to “gunman”. To make matters worse, he evidently fat-fingered the address he sent the text to, so the recipient didn’t know who he was and took the appropriate cautionary step of contacting the police. The result? A high school was shut down for a few hours until things were straightened out.

There are so many directions to go with this. We could focus on the auto-correct feature and how this tool seems to have caused much frustration and laughter by automatically mistyping things for people – though I do question how many items at a site like “damnyouautocorrect.com” (DYAC) are truly accidental. Or we could focus on the myriad of interesting things I found while researching this post. How about the site that teaches you How to Make Siri Curse Like a Sailor (careful if you are easily offended), or Tips to Add Words to iPhone’s Dictionary (for older versions of the iPhone), or iOS 5: How to add words to the auto-correct dictionary for iOS5? Or, if you like the way Android presents predictive text in a keyboard bar, Enable iOS 5’s Android-Like Autocorrect Keyboard Without Jailbreak does just what it says it does.

Or we could discuss the fat-fingering of the address. This actually is of most interest to me. I’ve researched keystroke error rates in the computer space before and concluded that the average typist fat-fingers a keystroke about 5% of the time, while professionals might do so 1%-2% of the time. The thing about these percentages is they can be very deceiving: a 5% error rate can explode in the wrong situations. For instance, imagine a 10-character field. If you fill that field in 10 times at a 5% keystroke error rate, you make about 5 errors across 100 keystrokes. The best effective field-level error rate you could achieve is 10%: all 5 bad keystrokes land in one field (5 of its 10 characters are wrong) while the other nine fields are correct, and 1 bad field in 10 is a 10% error rate.

At worst, your effective error rate would be 50%. This happens if each of the 5 bad keystrokes lands in a different field: 5 of the 10 fields each have one error, so half of your fields are wrong. Corporate data can go really bad really quickly in this scenario. Of course, most systems have found ways to limit these types of issues through check boxes, drop-down lists, and other validations. But sometimes data entry can’t be avoided. Entering customer contact information is one such situation. Get 5 in 10 email addresses wrong and there goes your email marketing campaign. Even if you are careful, error rates can be significant. I read a study some time ago indicating that careful typists often catch their mistakes, so professionals usually average only 1 uncaught error in 300 keystrokes. That’s much better, but it can still translate into problems for your enterprise. Using the same 10-character field, 1 error in 300 keystrokes equates to 1 in 30 fields being erroneous. That’s over 3% – still pretty bad, but I know some organizations that would love to have data quality problems with only 3% of their data.
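A quick back-of-the-envelope check of those numbers, using the same assumptions as above (10-character fields, 10 fields, a 5% keystroke error rate for the average typist, and roughly 1 uncaught error per 300 keystrokes for a careful professional):

chars_per_field = 10
fields = 10
total_keystrokes = chars_per_field * fields                    # 100 keystrokes
avg_error_rate = 0.05                                          # average typist
expected_bad_keystrokes = total_keystrokes * avg_error_rate    # 5 bad keystrokes

# Best case: all 5 bad keystrokes land in the same field -> 1 bad field in 10.
best_case = 1 / fields                                         # 0.10, i.e. 10%

# Worst case: each bad keystroke lands in a different field -> 5 bad fields in 10.
worst_case = expected_bad_keystrokes / fields                  # 0.50, i.e. 50%

# Careful professional: ~1 uncaught error per 300 keystrokes.
pro_field_rate = chars_per_field / 300                         # ~0.033, i.e. 1 bad field in 30

print(f"best: {best_case:.0%}, worst: {worst_case:.0%}, professional: {pro_field_rate:.1%}")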

So this brings me back to where we started. I’ve got to believe that keystroke error rates on smartphones would be significantly higher without autocorrect. Typing on those tiny keyboards is always a pain for me. I’ve found on my phone that the right place to touch in order to get the letter I want is just to the left of the letter – not right on it. I’m constantly back-spacing and retyping. Perhaps this is why there are so many bizarre autocorrect examples in the world (no, I’m not saying I’m responsible for all of them – just that other people must have similar challenges to mine, don’t they?). I miss the Blackberry keypad!

I think organizations should be very wary of leveraging smart phones for serious business apps that require data entry, unless they have extremely strong user data validation methodologies. Because I expect few developers to be so vigilant in their app development, I believe the future could bring some very unfortunate results from using business apps on smart phones.  While DYAC entries sure can be funny (warning, they can also be quite vulgar) the same errors in business transactions could be catastrophic for your enterprise.  Imagine things going terribly wrong and facing a lawsuit and having to use the “damnyouautocorrect” defense.  You might end up in a much worse situation than a couple hours of high school lockdown.
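If mobile data entry truly can’t be avoided, the minimum defense is validating everything the app submits before it lands in corporate data. A small sketch of the kind of check I mean – the field names and rules here are purely illustrative:

import re

def validate_contact(form: dict) -> list:
    # Reject fat-fingered or autocorrected input before it reaches the system
    # of record; return a list of problems, empty if the form is clean.
    problems = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.get("email", "")):
        problems.append("email looks malformed")
    digits = re.sub(r"\D", "", form.get("phone", ""))
    if len(digits) not in (10, 11):
        problems.append("phone number has the wrong number of digits")
    if not form.get("name", "").strip():
        problems.append("name is required")
    return problems

print(validate_contact({"name": "Jane", "email": "jane@example.com", "phone": "555-867-5309"}))  # []
print(validate_contact({"name": "Jane", "email": "jane@examplecom",  "phone": "555-867-530"}))   # two problems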