Server Fiefdoms Inhibit Optimized Business Solutions

In the 1960s, the IBM mainframe led a transition in business processing from a paper-centric transaction processing environment to an IT-centric one. The combination of the personal computer, the introduction of modems, and later the Internet shifted the IT community from internally facing operations to customer-centric computing. The introduction of the PC server created “commodity-centric” computing, and the folks running in that environment were typically against IT-centric operations, since the PC server could bring department-centric computing to individual business units.

The unintended consequence of all this was that fiefdoms were created to manage server silos. Over a decade of server deployments, individual IT organizations were created with business-related names (e.g., Point of Sale org, Analytics org, Web Hosting org, Claims Administration org). In reality, each of these organizations might depend on a specific server infrastructure. As a result, the introduction of any other server infrastructure, for example moving from mainframe to PC server, or centralizing on a mainframe from UNIX or PC servers, would be viewed as bad. The truth is that no single server can meet all the IT needs of a business unless that business is very, very small. And even then, multiple applications, which typically means multiple server instances or operating system instances, will be required.

I am mainframe-centric

I have no hesitation in saying I am mainframe-centric. That statement, alone and without context, will scare many people away from me as an IT consultant. One of the things I learned very early in my career is that the security of infrastructure is about people, process, and technology. While the mainframe may be considered the most secure platform technologically, my forensic experience at a variety of customers proves that poorly trained people and bad processes were the weakest links in security. More important, much of that “poor security” happened at the end user device: formerly a PC, but now also smart devices such as phones and tablets. If those devices aren’t secured with passwords and the enterprise data residing on them isn’t encrypted, then they become the weakest link. And if the user of the device saves their userid and password in their browser so they can reconnect quickly, so can the bad guy who steals the device; that bad guy now has unfettered access to those “more secure” systems that execute transactions or provide data access on behalf of the end user who lost the device. I’ve spent over ten years looking at how back end systems can make front end devices more secure. So I guess I am security-centric as well. I’m also web, mobile, and application development centric.

Most Application Developers are PC Centric

If you started out as a mainframe programmer, you probably signed on to the mainframe with a 3270 emulator and used panel-driven or command-line tools to edit files and submit jobs to compile and execute the programs you created. The IT capacity used for this type of development drove up the cost of operating the production mainframes.

The advent of the personal computer changed all that. Windows and Linux desktops provide graphical user interfaces. Fourth-generation tools help you graphically design the logic of an application and generate the source code in a variety of programming languages, whichever best suits the operations environment where the program will run. With open system interfaces and common programming languages, one development tool might create code to run on dozens of operating systems and hardware architectures. These are the types of tools used to build most middleware that is sold to run across “your favorite” operating system.

Well, that hybrid development environment didn’t turn out to be that simple. Tiers of deployment platforms were created. If an application was developed on a PC, then the first choice for a deployment platform was typically a PC server. Other platforms, like UNIX and the mainframe, were considered primarily production platforms; their vendors didn’t distinguish development use from production use very well, nor price it differently. As a result, it became unaffordable to develop for a mainframe: a development group or a new middleware vendor couldn’t afford a mainframe or UNIX server to test their code, so again, by default, most new applications were targeted to PC servers.

Most Web Servers are PC Centric
Most Analytic Servers are PC Centric

Need I go on? A mantra of the client/server computing era was Move the Data to the Application. This led to copies of data everywhere, but it also led to theft, loss, data breaches, and server sprawl. Virtualization of server operating systems has helped reduce server sprawl, but security remains complex. Business resilience, environmental needs (floor space, energy, cooling), and labor costs remain highly complex as well.

I said earlier that I am mainframe-centric. But I can also say, unequivocally, that the mainframe can NEVER solve all of your business problems by itself. Why is that? Because it is blind and deaf. The 3270 terminal and the punch card are long gone as input/output devices. The modern mainframe requires a graphically enabled front end device, such as a point of sale device, an ATM, a PC, or a smart device. It still requires communications, but now it leverages TCP/IP instead of SNA. So any business leveraging a mainframe is now a multi-system business. Even the zEnterprise, with its introduction of the zEnterprise BladeCenter Extension (zBX), can’t solve all of a business’ problems, because it doesn’t handle virtual desktop infrastructure nor manage the deployment of end user devices.

So let’s go back to solving business problems. We don’t need to discuss server types, but we can make some statements that should prove true, irrespective of server deployment model.

  1. Share data – the fewer the copies of data, the easier it is to manage security and resilience. Sharing data for read/write access (transaction processing) along with read-only access (query and analytics) enables workflows that include real-time analytics (fraud detection, cross-selling) within a basic transaction.
  2. Move applications to data – copying applications is far easier and less time-consuming, as well as more secure and resilient, than moving data. Virtualization technologies provide a simple way to bring applications and data together in the same infrastructure, improving latency and simplifying business resilience.
  3. Look for tortured data flows – there will never be a single copy of data, as there should be, at minimum, backup and disaster recovery copies. But by reducing the number of data moves, leveraging direct access to data instead of file transfers or unload/reload workflows, a business can dramatically reduce operational complexity.
  4. The fewer parts (servers and data) the better – there will be lower environmental costs and software license charges, and reduced complexity in security, capacity management, and business resilience management.
  5. Use stateless devices/applications for end user connections – the end user wants direct access to data and transactions, but the less data stored on the end user’s device, the better. Cookies should be the limit of the context stored on an end user device. That way, if the device is lost or stolen, no corporate data is lost; it is all stored centrally. This holds for thin client computers as well as web access to a transaction processing environment (a small sketch of this pattern follows the list).
  6. Never give developers direct access to, or copies of, production data – development systems are generally not production systems. They have no logging, limited or no security, and rarely an audit of critical data. This is the simplest target for a hacker or insider to attack in order to gain access to personally identifiable information or the corporate jewels. Developers should work with data cleansed through anonymization tools or similar techniques, to ensure that the production environment remains protected (a sketch of this idea also follows the list).
  7. Measure availability end to end. I’ve seen server shops (of all types) claim four or five nines of availability for their infrastructure. That’s a nice internal measurement. But if the end user, a consumer, is trying to access their data on a back end server and the security server, the web proxy server, a router, or some other networking infrastructure is down, then the business is down to them. Availability should be a business target, not a server-only target (the arithmetic sketch after the list shows why component-level nines overstate the end-to-end number).
  8. Identify users from end to end. When looking at transaction logs, a business should be able to see the identity of the individual who initiated a request. If the logs only identify the id of a downstream server, then additional logs, and correlation using more expensive products, are required to identify the end user. The more misdirection, the greater the risk of security exposure and data theft. Ensure that the originating user is easily identifiable at each step of the business workflow (the last sketch after the list illustrates this propagation).
  9. Industry-standard benchmarks are irrelevant to business workflows. They may help distinguish one component alternative from another. But since we’ve already determined that a hybrid environment will be required for production purposes, there are very few, if any, benchmarks that provide a true end-to-end workflow mimicking multiple business operations. Much like lawyers, benchmarks can offer guidance and may explain risks, but they are not the decision makers. The business application owner must weigh the total cost of operations and the incremental costs of maintenance for the entire business operation versus just the component parts identified by a benchmark.
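
Pointer 5 in miniature: a minimal sketch, in Python, of a stateless end user connection, where the device keeps only an opaque session token (the cookie) and all real state stays on the server. The session store, function names, and ids here are illustrative assumptions, not any product’s actual API.

```python
# Sketch only: the device holds an opaque token; every piece of real
# state lives in a server-side store that can be secured and audited.
import secrets

SESSIONS: dict[str, dict] = {}  # hypothetical server-side session store

def login(user_id: str) -> str:
    """Authenticate (elided here) and hand the device an opaque token."""
    token = secrets.token_urlsafe(32)       # unguessable and meaningless alone
    SESSIONS[token] = {"user": user_id, "cart": []}
    return token                            # returned to the device as a cookie

def handle_request(token: str, item: str) -> str:
    """Each request carries only the cookie; corporate data never leaves the server."""
    session = SESSIONS.get(token)
    if session is None:
        return "401: please sign on again"
    session["cart"].append(item)
    return f"{session['user']} now has {len(session['cart'])} item(s) in cart"

token = login("jane")
print(handle_request(token, "widget"))      # jane now has 1 item(s) in cart
SESSIONS.pop(token)                         # device lost or stolen? revoke centrally
print(handle_request(token, "gadget"))      # 401: please sign on again
```

The useful property is the last line: when a device goes missing, revoking one token centrally is the entire cleanup.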
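
For pointer 6, a minimal sketch of the kind of cleansing an anonymization tool might perform before production data reaches development. The field names and masking rules are invented for illustration; real tools (and regulators) demand far more.

```python
import hashlib

def mask_account(account_number: str) -> str:
    """Keep the last four digits for debugging; mask the rest."""
    return "*" * (len(account_number) - 4) + account_number[-4:]

def pseudonymize(name: str, salt: str) -> str:
    """Replace a real name with a stable, irreversible pseudonym.

    Hashing with a secret salt keeps records correlatable across tables
    without exposing the original identity to developers."""
    digest = hashlib.sha256((salt + name).encode("utf-8")).hexdigest()
    return "CUST-" + digest[:10]

def cleanse_record(record: dict, salt: str) -> dict:
    """Produce a developer-safe copy of one production record."""
    return {
        "customer": pseudonymize(record["customer"], salt),
        "account": mask_account(record["account"]),
        "balance": record["balance"],  # non-identifying fields pass through
    }

prod_row = {"customer": "Jane Doe", "account": "4000123412345678", "balance": 1523.07}
print(cleanse_record(prod_row, salt="a-secret-kept-out-of-development"))
```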
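
For pointer 7, the arithmetic itself makes the case. Assuming, purely for illustration, five serially dependent components each delivering four nines, the end-to-end number the consumer experiences is the product, not the minimum:

```python
# End-to-end availability of a serial chain is the PRODUCT of the
# component availabilities: if any hop is down, the user is down.
components = {
    "web proxy": 0.9999,
    "security server": 0.9999,
    "router": 0.9999,
    "application server": 0.9999,
    "back end database": 0.9999,
}

end_to_end = 1.0
for availability in components.values():
    end_to_end *= availability

minutes_per_year = 365 * 24 * 60
print(f"end-to-end availability: {end_to_end:.6f}")                   # ~0.999500
print(f"expected downtime: {(1 - end_to_end) * minutes_per_year:.0f} min/yr")
# ~263 minutes a year, versus ~53 minutes for any single four-nines component.
```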
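
And pointer 8 in miniature: each tier logs both its own server id and the originating user, so the transaction logs never dead-end at a downstream server’s id. The tier names and server ids are hypothetical.

```python
import logging

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)

def log_step(step: str, server_id: str, origin_user: str) -> None:
    """Every tier records BOTH its own id and the user who started the request."""
    logging.info("step=%s server=%s origin_user=%s", step, server_id, origin_user)

def web_tier(user: str) -> None:
    log_step("web", "proxy01", user)
    app_tier(user)                   # propagate the identity instead of dropping it

def app_tier(user: str) -> None:
    log_step("app", "app07", user)
    db_tier(user)

def db_tier(user: str) -> None:
    log_step("db", "db02", user)     # without propagation, this would show only "app07"

web_tier("jane@example.com")
```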

I first published “Porell’s Pointers” using Lotus Freelance for OS/2 circa 1990. The pointers covered then were server-agnostic and remain true today. Sure, they were mainframe-centric then and could be considered so today. But they make sense even if a UNIX system is the largest in your organization. They make sense if a PC server is the largest in your organization.

Published circa 1990 via Freelance Graphics for OS/2

If you can follow these suggestions, and also think about the scale of operations needed for your business, consider the mainframe as a core component of your end-to-end workflow. Unlike most other servers, it excels at large scale, managing service level agreements (SLAs) for read/write and read-only data, providing cross-platform security infrastructure, and managing availability and business resilience at an application level. The combination of z/OS and z/VM with Linux on System z, along with PC servers and desktop or smart devices, will be a winning combination for the majority of your business problems. But once you look at a hybrid solution, then security, availability, disaster recovery, application development, analytics, capacity management, and backup and archive should be cross-platform or hybrid solutions as well. A wealth of middleware exists that can operate across platforms. This can be the beginning of a new operations model built on business problems rather than server-specific domains. Damn the Server Fiefdoms! Full speed ahead with an organization that collaborates for shared business success. Happy programming!

Apple and IBM Mobile solution deal could be a boon to z/OS as well

IBM has announced that its strategy is built around big data analytics, cloud, and mobile computing. Apple has made an art of mobile computing through its iOS devices and App Store. It’s clear that IBM wants to capitalize on that success, and that Apple wants to further entrench itself in the enterprise.

Going back in time, this is not the first collaboration between IBM and Apple. The most famous relationship was Apple’s use of the PowerPC chip in earlier Macintosh computers.

Lesser known was a consortium, led by Oracle and including IBM, Apple, Sun, Alcatel, and Netscape, to promote the Network Computer Reference Profile, which would now be considered a thin client computer. This was circa 1996. The consortium was created to help attack the “lock” that Windows and Intel had on corporate desktops. The Network Computer was to be a less expensive alternative to PCs, and centrally managed.
I had the pleasure of demonstrating a thin client connecting to an IBM mainframe at the launch of the reference profile at the Moscone Center in San Francisco. The original IBM thin client was developed with the AS/400 organization (now System i) in Rochester, MN. They really understood the value and importance of these devices, as the 5250 terminal had just been declared a “dead technology” by IBM and the AS/400 needed a new connection device. I was enamored (and still am) of the ability of a thin client to provide the best type of access to a server, with graphics and multimedia support, while leaving the device stateless. In other words, no data is downloaded to the device, so there is less risk of loss or misuse of corporate data. I worked closely with Apple to ensure they had a 3270 emulator available for businesses to connect via Macintosh desktops.

Ultimately, this consortium failed. The chips were too slow. It was before Linux became ubiquitous for embedded devices. There were too many home grown embedded operating systems that weren’t extensible or easy to update with the latest web browsers and terminal emulators. They were probably ahead of the curve. Now, thin client computers are affordable and an attractive alternative to PCs. I’ll be writing about mainframe possibilities in a future post.

Back to IBM and Apple. It may seem odd, but there’s a link to IBM’s mainframe operating system, z/OS, as well. z/OS began in the 1970s as the MVS operating system. IBM sold a base operating system and then a collection of piece parts on top of it: storage management, communications, transaction processing, databases, compilers, job schedulers, etc. These products were individually purchased, managed, and operated, and quite a collection of support personnel was required back then. In the 1990s along came OS/390, and now its successor, z/OS. The same products are now integrated into a single system. This simplified installation, customization, ordering, and management. It reduced problem resolution time and improved uptime. IBM delivered on a predictable schedule that demonstrated business value in each release. In many cases, the cost for a release remained the same as, or less than, the prior release, to ease adoption and provide the customer with a technology dividend.

Let’s compare that to the evolution of Apple’s MacOS. There’s an awful lot of function in that single operating system image as well. At the kernel level, it can run the bash shell and look a lot like a Linux system. A tremendous amount of middleware ships at no additional charge with the system. Recently, I was trying to run FTP and some other basic, UNIX-like commands on Microsoft Windows. I found that where the code was available at all, it was not comparable, and getting a level of support comparable to the Mac would require buying additional software. At the end user interface, there is a tremendous amount of functionality. The installation of new applications is simple. Systems management, at a high level, is fairly simple, but can be very sophisticated when it needs to be for advanced users. Apple’s iOS mobile system inherits many of these characteristics as well, but on a different chip: the ARM chip rather than the 64-bit x86 Intel chip. The ARM chip is a topic for another day.

Both z/OS and MacOS get labeled as expensive, out of the box, when compared to other systems. Each suffers from a lack of understanding of the true cost of ownership for its respective hardware and software. Both have excellent security reputations. Both run at higher levels of availability than the systems with which they are being compared. Looking inside those statements, a business must recognize that there is certainly an element of technology behind those values, but also people and processes. Neither system is hacker proof, nor truly fault tolerant. However, through their wise use of technology, including close collaboration between hardware and software development and automation of processes, each of these systems can handle difficulties and risks that other “comparable” systems cannot. It’s always been hard to put a price on “down time” or on avoiding planned and unplanned outages. While it’s true that some folks complain that the skill set necessary to operate a mainframe is “different” from other platforms, fewer people are generally required to manage the aggregate amount of work running on a mainframe than the comparable amount of work running on other systems. The reality is that people can learn to work with anything, as long as they are allowed.

One of the biggest differences between the IBM mainframe and Apple systems is that IBM no longer supplies the end user device necessary to operate and work with the mainframe. There is no “face” to the mainframe. Originally, the mainframe was accessed by a 3270 “green screen” command line terminal or, worse, punch cards. Now 3270 emulators are available on just about every “smart device” there is, but the 3270 interface remains a boring command line interface out of the box. Graphics and multimedia support, as well as touch pads and speech recognition, are now state of the art for end user computing, and z/OS needs an upgrade in that arena. New technologies, such as the z/OS Management Facility, have built-in web access to perform a variety of functions. IBM and independent software vendors have taken advantage of this new capability to further improve the z/OS management experience and reduce the skills it requires.

There can be even greater synergy in the areas of transaction pricing and application development.

Back in May, IBM announced new mobile workload pricing for z/OS. Given the volume of new mobile applications, and the fact that many of them might be just a query (e.g., account balance, order tracking) and not revenue-generating for a business, IBM is offering discounted pricing and lower measured usage pricing for these types of transactions. This could be a significant benefit for existing customers and for future transactions developed to exploit mobile technologies. The measurement tool began shipping on June 30th.

The IBM Rational organization, responsible for application development tools, acquired Worklight, a product tailored to enable rapid deployment across a variety of mobile operating systems from a single code base. Now integrated with the suite of Rational products, it lets businesses build applications that support back end transaction processing and database serving on the mainframe while, at the same time, providing new end user access and experiences from a wide range of smart computing devices, including phones, tablets, and desktops.

The only thing to stop this integration of the best of both worlds might be a business itself. By isolating development, deployment, and operations teams in platform silos (e.g., mainframe vs. UNIX vs. PCs vs. BYOD), a company could have the unintended consequence of blocking integration and the overall improvement of its end user experience, both internally and for its customers.

Here’s hoping that the Apple and IBM mobile collaboration is significant enough to get businesses to open up their apertures so that true integration of systems can occur, from the end device all the way through to their business critical systems. The reality is that the end-to-end workflow is the mission critical business, not one server or device versus another. When fewer systems are involved in the overall integration of business processes, a business should see benefits in overall security and availability, and easier compliance with business initiatives. Happy programming.

Rocket Software is Making z/OS more Open

Twenty years ago, IBM introduced OpenEdition MVS, its first foray into “opening” the mainframe to a new community of developers. This release included the Shell and Utilities priced feature, whose production cost varied with the size of the mainframe. If you consider that only a handful of people might actually use this code when it originally shipped, the “cost per seat” was astronomical compared to what was free or inexpensive on desktop systems. This was corrected when IBM began shipping the feature as part of the base of the new OS/390 operating system, which dramatically reduced the cost and skills needed for new workload development on the mainframe for customers and vendors. But without the revenue associated with the previously priced feature, IBM didn’t keep up with the open source community, and these tools quickly fell behind. That was an unintended consequence.

Over the years, IBM worked to resolve this through relationships with other companies and its own developers, but the net result was that the code was still aging, until IBM met with Rocket Software. Rocket has been in the business of supporting mainframe customers for over twenty-five years. IBM found that Rocket was using open source tools within its own z/OS development team. Given the gap in true “openness” for z/OS, Rocket decided to release its source modifications and z/OS binaries into the open source community. Through the Rocket web site, any business can download the z/OS binaries at no charge, just as they might do with Linux offerings. If a business is looking for support of those binaries, a fee offering is available, just as one might find from the paid Linux distro providers.

Rocket originally provided five ported tools as a trial last year. This month, Rocket delivered over four times that number. This re-opens the Unix System Services development environment of z/OS. This latest group of ported tools can be used to bring more open source middleware and utilities to z/OS, whether by customers, other vendors, or Rocket Software itself. Rocket is working to provide a level of skills portability across platforms and to ease the knowledge base required to create, build, and operate on the mainframe, regardless of z/OS, Linux, or z/VM operating system deployment. Rocket has also developed Application Lifecycle Management for Linux on System z. This new offering is currently available as a beta. Its goal is to provide greater management of Linux applications that are natively developed and managed on, and from, the mainframe.

Now, let’s dream about how the new ported tools can be used on z/OS. Some basic items: make will help you take other open source code and get it built for z/OS. If you are considering some of your own development activities on z/OS, cvs can be deployed as a source code library management tool. In every instance, it’s all about how open source software can be integrated with existing applications and databases to create something new that’s better than a collection of software that merely runs across platforms. WebSphere developers who work on Linux or Windows systems will find that some of these new tools add value, ease deployment, and improve skills portability for building applications for z/OS. If you really want to go crazy, the Apache web server is now part of z/OS. Add in PHP and DB2 and you can have WordPress running on z/OS. Now why have WordPress? You might integrate it directly into your business applications.

Rocket’s not done adding to this list. If you ask nicely, they might be willing to give you an update to bash, a shell that’s common on all Linux systems and on MacOS. In fact, if there are other tools you are interested in, let them know via their contact site. The ported tools can be accessed here. The Application Lifecycle Management for Linux tool can be accessed by sending an email here. Happy programming.

Unintended Consequences

I’ve found that many times in my career, a decision made for one reason had unintended consequences in another area. Sometimes these were good things, and sometimes they were not. I’ve decided to write about some of these activities in this blog, so you’ll see this title as a recurring theme throughout my writings.

Here’s a list of the items I’m thinking about writing. Let me know which are most interesting to you and I’ll try to get those done earlier than the others:

  1. z/OS “stabilizes” its Shell and Utilities offerings at very old code levels; Rocket Software “fixes” that. Done.
  2. OS/390 and z/OS are a better package, but they lost their sales channel. Now Solution Editions and new workloads help to drag z/OS. TCO and High Availability remain king.
  3. The Apple and IBM mobile deal is pretty cool, but reminds me that Apple MacOS and z/OS are a lot alike: tons of value in a single package. Done.
  4. Use of z/OS Unix System Services introduces “surrogate” security, which might end up giving too much authority to an individual. What can be done to reduce that risk?
  5. MVS and z/VM might be considered the first cloud platform, but no one originally marketed them that way. Now, ASG’s Cloudfactory provides an Amazon Web Services-like front end for z/OS workloads. Done.
  6. The IBM mainframe is advertised as hacker proof, but the weakest link is not the mainframe; it’s the end user interfaces and the people using them. What can be done to help prevent problems? Use of Intellinx zWatch is one method that a wide range of customers use to prevent human errors across platforms.
  7. Application development on the mainframe wasn’t always as simple as it became once the IBM Rational products came along and the Unit Test feature, also known as the zPDT, was added. This was difficult to bring to market. For the first time, IBM separated development pricing from production pricing.
  8. Linux is ported to S/390 in December 1999. Novell is offered the opportunity to be the first vendor on Linux on S/390. They say no.
  9. Human Resource lessons learned in a 30+ year career.
  10. High availability lessons learned. It’s not always the technology, it’s the process.
  11. Multi-Level Security – probably the answer to a lot of cloud sharing problems, but no one knows what it is or does. It’s in production in some very secure locations today. Done.
  12. Thin Client Computing and usage with Mainframes