Maintain or evolve mainframe applications?
The decision to leave the mainframe for a more open, distributed environment, simultaneously recovering and modernizing applications developed and matured over the years, is a constant struggle between the defenders of maintaining current systems and those who propose modernizing them and adopting the latest technologies and trends.
This dispute is usually reduced to weighing the short-term costs and risks of each step of the change against the medium- and long-term costs of maintenance and eventual stagnation.
Each side amplifies the risks of the opposite approach while downplaying those pointed out by the opponent.
The debate goes on…
We have witnessed many of these debates. Although we are not entirely impartial in this matter, since we generally advocate the complete recovery and renewal of legacy applications, we prefer not to get too involved in this confrontation of risks: we believe the organizations themselves are best placed to assess them.
We naturally contribute to these discussions with the knowledge that comes from the modernization experience of different organizations, but always safeguarding the specificities and peculiarities of each case.
When involved in these long and intense debates, we prefer to invest in exploring the opportunities that each step of change can offer. These are usually identified through a deep analysis of the current application's code and of the intentions and qualities of the original design, its structure and coding standards.
Inevitably, this process also detects code smells and flaws resulting from the erosion caused by interventions over the years.
But for these too, the change represents an opportunity to correct and normalize the overall architecture and code.
In the next sections, we briefly illustrate the different arguments and risks usually addressed in each phase of application modernization, as well as the opportunities that usually arise.
Maintain a centralized system or move to a distributed environment?
The first step in any initiative to modernize mainframe-based applications is the decision to move from a centralized to a distributed environment.
The constant increase in mainframe maintenance costs, in particular the high price charged per MIPS, the shrinking supply of professionals with mainframe know-how, and the exciting possibilities offered by emerging trends such as the cloud are always the strongest arguments of the proponents of change.
On the other hand, supporters of maintenance generally point out risks in terms of security and performance, and even argue that mainframes already offer integration alternatives for new technologies and languages, such as support for Java or Web Services.
But regardless of the strength of these arguments, they always benefit from a hidden motivation, which is obviously the temptation of inaction.
As long as the need for new features and resources is minimal, doing nothing is always the option that represents lower costs and risks in the short term.
Putting security and performance issues into perspective
At this stage of the discussion, we usually help put security and performance issues into perspective.
In terms of security, the alternatives available in distributed environments are generally as powerful as, or more powerful than, those offered by mainframes.
With regard to performance, simply demonstrating the advantages and lower costs of scaling out in distributed environments, compared to scaling up mainframes (the cloud offers both), is usually enough to convince the most fervent advocates of maintenance.
But it is also at this point that we especially like to emphasize the opportunities that these changes represent in terms of cleaning, normalization and reduction of obsolete application code.
Organizations usually react with great surprise when they become aware of the number of components and the amount of code they continue to maintain, despite their functions having been abandoned over the years.
We have already seen cases where more than 75% of an application's COBOL programs contributed nothing to its functional offering.
Maintain legacy code or migrate to another language?
Once the reservations about moving the mainframe to a distributed environment have been overcome, the next obstacle has to do with maintaining the legacy code.
Opponents of change generally point out the risks involved in rewriting or modifying consolidated code, matured over the years, and the potential disruptions and threats to the business that such changes may cause.
At this point, it is inevitable to recall one of the main drivers of change: the mainframe skills shortage, with correspondingly rising costs. Insisting on older programming languages and paradigms, which are no longer taught in universities, poses a threat to the future maintenance and evolution of applications.
As holders and proponents of differentiated code transformation and conversion solutions, we are naturally biased on this issue.
But rather than dwelling on the skills issue, we like to highlight the opportunities offered by the change to a new language and paradigm (object orientation), with a huge and growing range of integration possibilities and tremendous potential for evolution.
Preserve or evolve the code structure and architecture?
Often, we find situations where customers, who even agree to change the programming language, have many reservations and doubts regarding changes in the structure and organization of the code.
In addition to fears about possible future functional disruption, they point to the learning curve and the difficulty of taking ownership of the migrated applications if these do not maintain the same structure, nomenclature, organization and even the same types of components.
They add that preserving the structure and organization of the code reduces the cost of the migration itself, since it can then be reduced to a simple syntactic translation between source and target languages.
And they are not entirely wrong. For today's application maintenance specialists, being able to easily recognize familiar elements and structure in the migrated code is undoubtedly something they will strongly support.
The problem is the type of code obtained in migrations that follow this approach. As a joke, we like to call the resulting code JOBOL, an approach followed by some of our competitors in the area of application migration.
For those who know the target language, namely Java, it will be very difficult to recognize best practices and quality in code that fully preserves its COBOL flavor.
Migrating with evolution and improvement
In all migration projects in which we participate, we always reserve some sessions to demonstrate to customers that refactoring the original code to align with the strengths of the target language, such as inheritance, polymorphism and encapsulation, always pays off. While some risks are inevitably involved, these are clearly offset by the quality and maintainability of the migrated applications' code.
Some examples of these code refactorings:
- replacement of syntactic constructs that are difficult to translate, such as GO TO, PERFORM THRU, NEXT SENTENCE, CONTINUE, CICS HANDLE and BMS handling
- capture of error-handling block patterns and their translation into exception handling
- externalization of constants into configurable resources, including messages and text constants for multi-language support
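As an illustration of the second point, a common COBOL pattern checks a file-status code after every I/O statement and branches to an error paragraph; in Java this check can be centralized and surfaced as an exception. A minimal sketch, with hypothetical names and status codes:

```java
// Hypothetical sketch: translating a COBOL file-status check
// (IF WS-FILE-STATUS NOT = '00' ... GO TO ERROR-PARA)
// into a Java exception raised by a single, centralized check.

class FileStatusException extends Exception {
    private final String status;
    FileStatusException(String status) {
        super("I/O failed with file status " + status);
        this.status = status;
    }
    String getStatus() { return status; }
}

class CustomerFileHandler {
    // In COBOL, every caller would inspect WS-FILE-STATUS after each READ;
    // here the check lives in one place and propagates as an exception.
    String read(String key) throws FileStatusException {
        String status = doRead(key);        // returns the COBOL-style status code
        if (!"00".equals(status)) {
            throw new FileStatusException(status);
        }
        return "record-for-" + key;
    }

    // Stand-in for the real I/O: "00" means OK, "23" means record not found.
    private String doRead(String key) {
        return key.isEmpty() ? "23" : "00";
    }
}
```

Callers then handle the error once, in a `catch` block, instead of repeating the status test after every statement.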
Architectural interventions also offer many advantages. In particular:
- definition of local frameworks built around the concepts of the organization's functional domain
- normalization of persistence mechanisms (VSAM files, sequential files, queues) through special classes (handlers) that concentrate their management and access, allowing future evolution with minimal impact
- full separation of business logic from access to persistent data, through the data access handlers
- creation of Web MVC architectures as a target for the migration of 3270 applications based on a pseudo-conversational flow
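The handler idea described above can be sketched as follows, with hypothetical names: business logic depends only on an interface, so the underlying store (a migrated VSAM file today, a relational table tomorrow) can be swapped without touching it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical data access handler: business logic sees only this interface.
interface AccountHandler {
    Optional<String> find(String accountId);
    void save(String accountId, String record);
}

// One possible implementation; an in-memory map stands in for the real
// store in this sketch. A JDBC-backed handler could replace it later
// with no change to the business logic.
class InMemoryAccountHandler implements AccountHandler {
    private final Map<String, String> store = new HashMap<>();

    public Optional<String> find(String accountId) {
        return Optional.ofNullable(store.get(accountId));
    }

    public void save(String accountId, String record) {
        store.put(accountId, record);
    }
}

// Business logic never touches persistence details directly.
class AccountService {
    private final AccountHandler handler;

    AccountService(AccountHandler handler) { this.handler = handler; }

    String describe(String accountId) {
        return handler.find(accountId).orElse("NOT FOUND");
    }
}
```

The design choice is the classic dependency-inversion one: the handler interface becomes the single seam through which persistence can evolve with minimal impact.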
New and better components
It is also possible, and even advisable, to intervene at the level of the components obtained from the migration, aiming to significantly improve the application's overall quality.
Examples of such interventions are:
- translation of VSAM files into relational tables
- translation of JCL jobs and job-chain idiosyncrasies into simple BPMN specifications, which are much easier to maintain and evolve
- replacement of JCL utility programs by application methods hosted in the data access layer handlers
- layout improvement of the web pages obtained from BMS maps, to support responsive layouts and to replace basic labels and text boxes with advanced widgets such as calendars, grids and combos
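To illustrate the first point, a fixed-length VSAM record typically maps field by field onto columns of a relational table. A minimal sketch, assuming a hypothetical customer record layout (`CUST-ID PIC X(6)`, `CUST-NAME PIC X(20)`, `CUST-BALANCE PIC 9(7)V99`):

```java
import java.math.BigDecimal;

// Hypothetical sketch: a fixed-length 35-byte VSAM customer record
// mapped onto columns of a relational CUSTOMER table.
class CustomerRecord {
    final String id;          // CUST-ID      -> CUSTOMER.ID      CHAR(6)
    final String name;        // CUST-NAME    -> CUSTOMER.NAME    VARCHAR(20)
    final BigDecimal balance; // CUST-BALANCE -> CUSTOMER.BALANCE DECIMAL(9,2)

    CustomerRecord(String id, String name, BigDecimal balance) {
        this.id = id;
        this.name = name;
        this.balance = balance;
    }

    // Parse the raw fixed-length record into typed fields.
    static CustomerRecord fromVsam(String raw) {
        String id = raw.substring(0, 6).trim();
        String name = raw.substring(6, 26).trim();
        // PIC 9(7)V99: nine digits with an implied decimal point
        // before the last two, so shift the point left by two places.
        BigDecimal balance = new BigDecimal(raw.substring(26, 35)).movePointLeft(2);
        return new CustomerRecord(id, name, balance);
    }
}
```

Once the data lives in typed columns rather than packed character positions, it becomes directly usable by SQL queries, reports and the data access handlers mentioned earlier.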
While they inevitably involve more risk, differentiated and fully customized premium migrations offer a unique opportunity to rejuvenate, revitalize and improve the overall quality of legacy mainframe applications.
To achieve this, it is necessary not only to identify and mitigate these risks, but also to remain constantly aware of the opportunities for improvement in the legacy code, architecture and components.
This is the attitude that has guided us through many different migration projects, in different environments, industries and continents.
The successes achieved so far support our growing belief that it is always worth raising the bar and trying to achieve the best quality levels for the applications resulting from migrations.