Server Fiefdoms Inhibit Optimized Business Solutions

In the 1960s, the IBM mainframe led a transition in business processing from a paper-centric transaction processing environment to an IT-centric processing environment. The combination of the Personal Computer, the introduction of modems and, later, the Internet shifted the IT community from internally facing operations to customer-centric computing. The introduction of the PC server created “commodity-centric” computing, and the folks running in that environment were typically opposed to centralized IT operations, since the PC server could bring department-centric computing to individual business units.

The unintended consequence of all this was that fiefdoms were created to manage server silos. Over a decade of server deployments, individual IT organizations may have been created with business-related names (e.g. a Point of Sale org, an Analytics org, a Web hosting org, a Claims administration org). The reality is that each of these organizations might be dependent on a specific server infrastructure. As a result, the introduction of any other server infrastructure (for example, moving from a mainframe to PC servers, or centralizing on a mainframe from UNIX or PC servers) would be viewed as bad. Yet no single server is capable of meeting all the IT needs of a business unless that business is very, very small. And even then, multiple applications, which typically means multiple server instances or operating system instances, will be required.

I am mainframe centric

I have no hesitation in saying I am mainframe centric. That statement, alone and without context, will scare many people away from me as an IT consultant. One of the things I learned very early in my career is that security of infrastructure is about People, Process and Technology. While the mainframe may be considered the most secure platform technologically, my forensic experience at a variety of customers proves that poorly trained people and bad processes were the weakest links in security. More important, much of that “poor security” happened at the end-user device – formerly a PC, but now also Smart devices such as phones and tablets. If those devices aren’t secured with passwords and the enterprise data residing on them isn’t encrypted, they become the weakest link. And if the user of a device saves their userid and password in the browser so they can reconnect quickly, so can the bad guy who steals the device; that bad guy now has unfettered access to the “more secure” systems that execute transactions or provide data access on behalf of the end user who lost the device. I’ve spent over ten years looking at how back-end systems can make front-end devices more secure. So I guess I am security centric as well. I’m also web, mobile and application development centric.

Most Application Developers are PC Centric

If you started out as a mainframe programmer, you probably signed on to the mainframe with a 3270 emulator and used panel-driven or command-line-driven tools to edit files and submit jobs to compile and execute the programs you created. The IT capacity consumed by this type of development drove up the cost of operating the production mainframes.

The advent of the Personal Computer changed all that. Windows and Linux desktops provide graphical user interfaces. Fourth generation tools help you graphically design the logic of an application and generate the source code in a variety of programming languages, whichever best suits the operations environment in which the program will run. With open system interfaces and common programming languages, one development tool might create code to run on dozens of operating systems and hardware architectures. These are the types of tools used to build most middleware that is sold to run across “your favorite” operating system.

Well, that hybrid development environment didn’t end up being that simple. Tiers of deployment platforms were created. If an application was developed on a PC, then the first choice for a deployment platform was typically a PC server. Other platforms, like UNIX and the mainframe, were considered primarily as production platforms; they weren’t differentiated or priced differently for developers. As a result, it became unaffordable to develop for the mainframe: a development group or a new middleware vendor couldn’t afford a mainframe or UNIX server to test their code, so again, by default, most new applications were targeted to PC servers.

Most Web Servers are PC Centric
Most Analytic Servers are PC Centric

Need I go on? A mantra of the client/server computing era was “Move the Data to the Application.” This led to copies of data everywhere, which in turn led to theft, loss, data breaches and server sprawl. Virtualization of server operating systems has helped to reduce server sprawl, but security remains complex. Business resilience, environmental needs (floor space, energy, cooling) and labor costs remain difficult to manage as well.

I said earlier that I am mainframe centric. But I can also say, unequivocally, that the mainframe can NEVER solve all of your business problems by itself. Why is that? Because it is blind and deaf. The 3270 terminal and punch card are long gone as input/output devices. The modern mainframe requires a graphically enabled front-end device, such as a Point of Sale device, ATM, PC or Smart Device. It still requires communications, but now it leverages TCP/IP instead of SNA. So any business leveraging a mainframe is now a multi-system business. Even the zEnterprise, with its introduction of the zEnterprise BladeCenter Extension (zBX), can’t solve all of a business’ problems because it doesn’t handle virtual desktop infrastructure or manage the deployment of end-user devices.

So let’s go back to solving business problems. We don’t need to discuss server types, but we can make some statements that should prove true, irrespective of server deployment model.

  1. Share data – the fewer copies of data, the easier it is to manage security and resilience. Sharing data for read/write access (transaction processing) along with read-only access (query and analytics) enables workflows that include real-time analytics (fraud detection, co-selling) within a basic transaction.
  2. Move applications to data – copying applications is far easier and less time consuming, as well as more secure and resilient, than moving data. Virtualization technologies offer a simple way to bring applications and data together in the same infrastructure, reducing latency and simplifying business resilience.
  3. Look for tortured data flows – there will never be a single copy of data, since there should be, at minimum, backup copies and disaster recovery copies. But if a business can reduce the number of data moves, leveraging direct access to data instead of file transfers or unload/reload workflows, it can dramatically reduce operational complexity.
  4. The fewer parts (servers and data) the better – there will be lower environmental costs, lower software license charges and less complexity in security, capacity management and business resilience management.
  5. Use stateless devices/applications for end user connections – the end user wants direct access to data and transactions, but the less data stored on the end user’s device, the better. Cookies should be the limit of the context stored on an end-user device; everything else should be stored centrally. That way, if the device is lost or stolen, no corporate data is lost with it. This can be true of thin-client computers as well as web access to a transaction processing environment (see the first sketch after this list).
  6. Never give developers direct access to, or copies of, production data – development systems are generally not production systems. There is no logging, limited or no security and rarely an audit of critical data, which makes them the simplest target for a hacker or insider seeking personally identifiable information or the corporate jewels. Developers should work with cleansed data produced by anonymization tools or similar techniques to ensure that the production environment remains protected (see the masking sketch after this list).
  7. Measure availability end to end – I’ve seen server shops (of all types) claim four or five nines of availability for their infrastructure. That’s a nice internal measurement. But if an end user, a consumer, is trying to access their data on a back-end server and the security server, the web proxy server, a router or some other piece of networking infrastructure is down, then the business is down as far as they are concerned. Availability should be a business target, not a server-only target (see the availability arithmetic after this list).
  8. Identify users from end to end – when looking at transaction logs, a business should be able to see the identity of the individual that initiated a request. If the logs only identify the id of a downstream server, then additional logs and correlation, from more expensive products, are required to identify the end user. The more misdirection, the greater the risk of security exposure and data theft. Ensure that the originating user is easily identifiable at each step of the business workflow (see the identity propagation sketch after this list).
  9. Industry standard benchmarks are irrelevant to business workflows – they may help distinguish one component alternative from another. But since we’ve already determined that a hybrid environment will be required for production purposes, there are very few, if any, benchmarks that provide a true end-to-end workflow that mimics multiple business operations. Much like lawyers, benchmarks can offer guidance and may explain risks, but they are not the decision makers. The business application owner must weigh the total cost of operations and the incremental costs of maintenance for the entire business operation vs. just the component parts identified by a benchmark.
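
To make pointer 5 concrete, here is a minimal sketch, in Python, of a stateless end-user connection: the only thing the device ever holds is an opaque session cookie, while the real context lives centrally on the server. The function names and the in-memory dictionary are mine, purely for illustration, not from any particular product.

    import secrets

    # Central session store: all context lives server side, keyed by an
    # opaque token. The end-user device only ever holds the token (a cookie).
    SESSIONS = {}

    def log_in(user_id):
        """Create a server-side session and return the opaque cookie value."""
        token = secrets.token_urlsafe(32)      # random, meaningless by itself
        SESSIONS[token] = {"user": user_id}    # context stays on the server
        return token                           # the only thing the device stores

    def handle_request(cookie_token, action):
        """Look the context up centrally; a stolen device yields only a token
        that can be revoked, never the corporate data itself."""
        session = SESSIONS.get(cookie_token)
        if session is None:
            return "please sign on again"
        return f"performing {action} for {session['user']}"

    # Usage: the device keeps the cookie, nothing else.
    cookie = log_in("customer-1234")
    print(handle_request(cookie, "account inquiry"))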
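
Pointer 6 can be as simple as a masking pass over production extracts before they ever reach a development system. The sketch below is again just an illustrative Python fragment, with field names I made up: it hashes direct identifiers and blanks free-form text so developers get realistically shaped, but cleansed, data.

    import hashlib

    def mask_record(record, salt="dev-refresh"):
        """Return a cleansed copy of a production record for development use."""
        cleansed = dict(record)
        # Replace direct identifiers with a salted one-way hash so the shape
        # (uniqueness, joinability) survives but the real identity does not.
        for field in ("customer_name", "account_number", "ssn"):
            if field in cleansed:
                digest = hashlib.sha256((salt + str(cleansed[field])).encode())
                cleansed[field] = digest.hexdigest()[:12]
        # Blank out free-form text that might hide personal information.
        cleansed["notes"] = ""
        return cleansed

    production_row = {"customer_name": "Jane Doe", "account_number": "4711-22",
                      "ssn": "123-45-6789", "balance": 250.75,
                      "notes": "called about a disputed charge"}
    print(mask_record(production_row))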
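
The point of pointer 7 is easiest to see with a little arithmetic: availability compounds across every component a request must traverse, so a chain of “four nines” servers does not give the business four nines. A quick back-of-the-envelope calculation (the component list and numbers are hypothetical):

    # End-to-end availability is roughly the product of the availabilities of
    # every component in the request path (assuming independent failures).
    components = {
        "end-user device/network": 0.995,
        "web proxy server":        0.9999,
        "security server":         0.9999,
        "application server":      0.9999,
        "back-end database":       0.99999,
    }

    end_to_end = 1.0
    for name, availability in components.items():
        end_to_end *= availability

    print(f"End-to-end availability: {end_to_end:.4%}")
    # Roughly 99.47% here: the business is "down" far more often than any
    # single server team's four- or five-nines claim would suggest.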
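
For pointer 8, the practical habit is to carry the originating user’s identity on every downstream call and write it into every tier’s log record, rather than logging only the service id of the tier in front. A minimal sketch, with hypothetical tier names and no particular logging product assumed:

    import json, time

    def log_event(tier, originating_user, action):
        """Every tier logs the same originating identity, not just its own
        service id, so one search correlates the whole business workflow."""
        print(json.dumps({
            "time": time.time(),
            "tier": tier,
            "service_id": f"svc-{tier}",            # the tier's own identity
            "originating_user": originating_user,   # carried end to end
            "action": action,
        }))

    def web_tier(user, action):
        log_event("web", user, action)
        app_tier(user, action)                      # pass the identity downstream

    def app_tier(user, action):
        log_event("app", user, action)
        database_tier(user, action)

    def database_tier(user, action):
        log_event("database", user, action)

    web_tier("customer-1234", "transfer funds")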

I first published “Porell’s Pointers” using Lotus Freelance for OS/2 circa 1990. The pointers covered then were server agnostic and remain true today. Sure, they were mainframe centric then and could be considered so today. But they make sense even if a UNIX system is the largest in your organization. They make sense if a PC server is the largest in your organization.

Published circa 1990 via Freelance Graphics for OS/2

If you follow these suggestions, and also think about the scale of operations your business needs, consider the mainframe as a core component of your end-to-end workflow. Unlike most other servers, it excels at large scale: managing service level agreements (SLAs) for read/write and read-only data, providing cross-platform security infrastructure, and managing availability and business resilience at an application level. The combination of z/OS and z/VM with Linux for System z, along with PC servers and desktop or Smart Devices, will be a winning combination for satisfying the majority of your business problems. But once you adopt a hybrid solution, then security, availability, disaster recovery, application development, analytics, capacity management, and backup and archive should be cross-platform or hybrid solutions as well. A wealth of middleware exists that can operate across platforms. This can be the beginning of a new operations model built on business problems rather than server-specific domains. Damn the Server Fiefdoms! Full speed ahead with an organization that collaborates for shared business success. Happy programming!