Cleaning up Production Jobstreams

C. Clement - (c) 2000

Nightly processing of certain critical applications has
become problematic in many large mainframe-based businesses
for several reasons enumerated below:

1) Support personnel turnover
2) Lack of current, accurate documentation
3) Retention of obsolescent functionality in the code

Downtime due to any given processing failure can easily
result in the loss of a man-month. Consider a failure that
requires programmer attention during non-business hours,
requires a rerun, and idles a small department for several
hours on the following day. A methodology based partly on
academic theory and partly on an early but very sophisticated
business practice may offer a remedy. In lay terms, it takes
an approach similar to that of B.F. Skinner (in the realm of
psychology): the internals don't matter, only the output
(response) produced for a given input (stimulus).

Implementation of this approach requires encapsulation of the
original (production) process, an alternative process, and a
comparison of results that renders a "score" that can be
optimized by the Simplex Method. In practical application we
construct a job that contains a compilation of the new
process, a restoration of input conditions, an execution of
the old process along with the new, and finally a comparison
of results that gives a score. When the score reaches a
perfect value (i.e., exactly the same output with zero
differences), a replacement exists that should be free of
unnecessary (obsolete) functionality.
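
In skeleton form, with the DD statements left out, such a job
might look like the sketch below. The step and program names
(OLDPGM, NEWPGM) are placeholders, IGYWCL stands in for
whatever compile-and-link procedure an installation provides,
and ISRSUPC is the batch name of the SUPERC compare utility.
A fuller example appears at the end of this note.

//VERIFY   JOB (ACCT),'OLD VS NEW',CLASS=A,MSGCLASS=X
//* 1. COMPILE AND LINK THE CANDIDATE (NEW) PROGRAM
//COMPILE  EXEC IGYWCL
//* 2. RESTORE THE SAVED INPUT CONDITIONS FROM BACKUP
//RESTORE  EXEC PGM=IEBGENER
//* 3. EXECUTE THE EXISTING PRODUCTION PROCESS
//RUNOLD   EXEC PGM=OLDPGM
//* 4. EXECUTE THE CANDIDATE AGAINST THE SAME INPUTS
//RUNNEW   EXEC PGM=NEWPGM
//* 5. COMPARE THE TWO SETS OF OUTPUT; ZERO DIFFERENCES IS
//*    THE PERFECT SCORE
//COMPARE  EXEC PGM=ISRSUPC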

The feedback, consisting of improvements to the alternative
process, maximizes the score and can in some rare cases be
automated. More likely, a programmer will review the result,
make changes, and try again. Such an undertaking typically
requires a man-month and is paid back by the first failure
that is prevented. Each subsequent failure avoided is "new
money in the bank".



How I "bagged" Dixie

A specific example is shown below. "DIXIE" is a process that
produces inventory, shipping, and accounting information on a
daily basis. The extant code consists of nightmarish DYL280
modules, unstructured COBOL, and weird ESDS files with
alternate indices; it was presumably coded by an insecure
individual back in the '80s and has been maintained ever
since by countless analysts and consultants. Another reason
for the complexity was a conversion, added on at the end, to
update a new accounting package.

The new process utilized the original source file layouts and
the final file layouts in one module. Intermediate layouts,
header records, sorts, file backups, etc. were eliminated.
Since much of the processing involved small amounts of data,
most of it was performed in storage by loading tables of
reference data. Updates were saved and applied all at once to
reduce the impact on online access and to facilitate
rollback. The job had an in-line compile (known as "compile-
link-and-go" in the old days). The compile was followed by
dual execution of the new and old processes and a compare
program (SUPERC in MVS parlance). Everything from "beginning
old" through "ending old" was eventually replaced with
everything from "beginning new" through "ending new". Once
the alternative was coded, the programmer merely had to
submit the job each day and clear up the discrepancies before
the next run. The new process goes from format A to format F
directly, without passing through formats B, C, D, and E.
Fewer "moving parts" equals more reliability.

The end deliverable is a structured COBOL program that does
only what is required, in only one place, and serves to
document the exact logical procedure. This facilitates
migration to SAP or any other platform and also provides
reliable processing in the current environment.


A JCL example follows.
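
The sketch below is illustrative rather than the actual DIXIE
JCL. All data set names, program names (OLDDIXIE, NEWDIXIE),
DD names, record lengths, and space allocations are invented
for the example; IGYWCL stands in for the installation's
COBOL compile-and-link procedure, and the SUPERC options
shown are simply typical ones.

//DIXIETST JOB (ACCT),'OLD VS NEW',CLASS=A,MSGCLASS=X
//*--------------------------------------------------------------
//* 1. COMPILE AND LINK THE CANDIDATE (NEW) PROGRAM INTO A
//*    TEMPORARY LOAD LIBRARY.
//*--------------------------------------------------------------
//COMPILE  EXEC IGYWCL
//COBOL.SYSIN  DD DSN=MY.SOURCE.LIB(NEWDIXIE),DISP=SHR
//LKED.SYSLMOD DD DSN=&&LOADLIB(NEWDIXIE),DISP=(MOD,PASS),
//             UNIT=SYSDA,SPACE=(CYL,(1,1,5))
//*--------------------------------------------------------------
//* 2. RESTORE LAST NIGHT'S INPUT FROM ITS BACKUP SO THAT BOTH
//*    PROCESSES SEE IDENTICAL STARTING CONDITIONS.
//*--------------------------------------------------------------
//RESTORE  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.DIXIE.INPUT.BACKUP,DISP=SHR
//SYSUT2   DD DSN=MY.DIXIE.INPUT.RERUN,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5)),DCB=*.SYSUT1
//*--------------------------------------------------------------
//* 3. RUN THE EXISTING PRODUCTION PROCESS.
//*--------------------------------------------------------------
//RUNOLD   EXEC PGM=OLDDIXIE
//STEPLIB  DD DSN=MY.PROD.LOADLIB,DISP=SHR
//INFILE   DD DSN=MY.DIXIE.INPUT.RERUN,DISP=SHR
//OUTFILE  DD DSN=MY.DIXIE.OUT.OLD,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5)),
//            DCB=(RECFM=FB,LRECL=200)
//*--------------------------------------------------------------
//* 4. RUN THE CANDIDATE PROCESS AGAINST THE SAME INPUT.
//*--------------------------------------------------------------
//RUNNEW   EXEC PGM=NEWDIXIE
//STEPLIB  DD DSN=&&LOADLIB,DISP=(OLD,PASS)
//INFILE   DD DSN=MY.DIXIE.INPUT.RERUN,DISP=SHR
//OUTFILE  DD DSN=MY.DIXIE.OUT.NEW,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(10,5)),
//            DCB=(RECFM=FB,LRECL=200)
//*--------------------------------------------------------------
//* 5. COMPARE THE TWO OUTPUT FILES WITH SUPERC.  RC=0 (ZERO
//*    DIFFERENCES) IS THE PERFECT SCORE; THE OUTDD LISTING
//*    SHOWS THE DISCREPANCIES TO CLEAR UP BEFORE THE NEXT RUN.
//*--------------------------------------------------------------
//COMPARE  EXEC PGM=ISRSUPC,PARM=(DELTAL,LINECMP)
//NEWDD    DD DSN=MY.DIXIE.OUT.NEW,DISP=SHR
//OLDDD    DD DSN=MY.DIXIE.OUT.OLD,DISP=SHR
//OUTDD    DD SYSOUT=*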


