An Approach To Get Off Of Legacy Dependence

By Mike Ziman, CEO


The number one emotion I usually sense when discussing how to move large, complex applications off long-running legacy mainframe systems and into more efficient, less expensive environments is fear.

The fear is not so much about complacency or change; it seems to be more about the unknown. It is a reasonable fear if you think about trying to recover the business functions by analyzing millions of lines of code. A government agency may have 40+ years of undocumented code, and the institutional knowledge is no longer available. The question arises, “How can we document all of the functions of the code?” But it is not about the code. It is about the business functions. If it were about the code, then we would look at how to improve the code. That is not the task. The task is helping the agency meet its mission more effectively, with a focus on cost reduction.

Most everyone (except those with a revenue stream to protect) agrees that 21st century technology offers better infrastructure and systems development capabilities than that of the last century. The agency already has the answers, and the power to make the change is within its grasp. As always, “it’s all about the data.” The data stored within the agency is its most powerful asset, not only for delivering mission services but also for transforming the agency into a more productive and cost-effective entity.


How does the data tell us which business functions are necessary to transform the system? Every automated business function accesses data and then performs its task to meet the agency mission. So watch the data. Document all of the access points. Answer the Who, What, When, and Where for every piece of data (the How is what we are replacing). Sounds overwhelming? Tools like SPLUNK® Enterprise perform this very task. The business functional analysis will still have to be done, but at least a map generated from data usage will tell you what is going on today. I would not be surprised to find that much of the programming is not doing what was thought, and is actually quite ineffective. Additionally, one may find many inaccuracies within some datasets.
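To make that mapping step concrete, here is a minimal sketch in Python that turns an exported access log into a Who/What/When/Where map for each dataset. The file name, column names, and CSV layout are assumptions for illustration only; in practice those fields would come from whatever your monitoring tool (Splunk Enterprise, mainframe SMF records, database audit logs) actually exports.

import csv
from collections import defaultdict

# Illustrative sketch: group access-log records by dataset so each dataset
# shows who touched it (Who), from which program or access point (Where),
# and when it was first and last accessed (When). The dataset itself is
# the What. Column names and file format are assumptions, not a standard.

def build_access_map(log_path):
    access_map = defaultdict(lambda: {"users": set(), "programs": set(),
                                      "first_seen": None, "last_seen": None})
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            entry = access_map[row["dataset"]]          # What
            entry["users"].add(row["user"])             # Who
            entry["programs"].add(row["program"])       # Where
            ts = row["timestamp"]                       # When (assumed ISO 8601,
            if entry["first_seen"] is None or ts < entry["first_seen"]:  # so string
                entry["first_seen"] = ts                # comparison sorts by time)
            if entry["last_seen"] is None or ts > entry["last_seen"]:
                entry["last_seen"] = ts
    return access_map

if __name__ == "__main__":
    for dataset, info in build_access_map("access_log.csv").items():
        print(dataset, sorted(info["users"]), sorted(info["programs"]),
              info["first_seen"], "->", info["last_seen"])

Even a crude map like this shows which datasets nobody reads anymore and which programs reach into data they were never documented to touch, which is exactly the picture the business functional analysis needs as a starting point.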
 
What else can be transformed through this discovery? Remember, it’s all about the data. Older legacy systems were built, and functionality was added over the years, often creating new datasets and eventually a stovepipe environment. It would not be unusual to find that the data were never normalized and redundancy crept in, or that some data were abandoned in use but not in storage. Logging data usage also enables normalization and should generate interest in creating a data lake or enterprise data warehouse. Tools like Hadoop® and cloud architectures can maintain access speed, which creates more efficiency and reduces costs. Additionally, data spread across stovepipes tends to be less secure because of the redundant and numerous pipes into the data and the system.
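A rough first pass at spotting that redundancy can be as simple as surveying which fields appear in more than one stovepipe extract. The sketch below uses Python with pandas; the file names and the column-overlap heuristic are assumptions for illustration, not a complete data-profiling tool.

import pandas as pd

# Illustrative sketch: report columns that appear in more than one extract.
# Overlapping columns are candidates for a single normalized table in a data
# lake or warehouse -- or for retirement if the usage map shows no one reads them.

def find_overlapping_columns(paths):
    seen = {}  # column name -> list of extracts it appears in
    for path in paths:
        df = pd.read_csv(path, nrows=1000)   # a sample of rows is enough for a survey
        for col in df.columns:
            seen.setdefault(col, []).append(path)
    return {col: files for col, files in seen.items() if len(files) > 1}

if __name__ == "__main__":
    overlaps = find_overlapping_columns(["billing.csv", "claims.csv", "eligibility.csv"])
    for col, files in sorted(overlaps.items()):
        print(f"{col!r} appears in: {', '.join(files)}")

It is a blunt instrument, but it turns “we suspect there is redundancy” into a concrete list of fields to reconcile before anything moves to the new environment.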

The large, complex systems of the 20th century performed amazing services, and many still do. However, there are better and less expensive alternatives today. To alleviate the fear of converting legacy code, focus on the data, not the code, because the goal is to truly transform and throw that old code away.