Modern Data Usage Patterns – a case for Data consolidation

Last week, I wrote about organizational fiefdoms and how they can inhibit efficiency in deployment models.

This week, I’ll describe a couple of data patterns that are common across several business models. Most importantly, they can take advantage of a hybrid deployment model and some unique System z characteristics to dramatically reduce operational and security overhead and simplify compliance with a wide variety of government, industry and business regulations. It’s all based on a shared data model and collaboration across end-to-end technologies.

To me, there are three critical business-oriented data operations: update a record, read a record and analyze a collection of records. There are also management operations: backup/archive, migration/recall and disaster recovery. I’m going to focus on the business-oriented aspects for this post.

Let’s consider three different scenarios: a national intelligence operation that processes satellite and other electronic information; a health care environment that processes medical records and data from medical devices; and a criminal justice database containing wants, warrants and criminal records.

Each of these has a data ingest process that starts with individuals or individual devices. Satellite data is beamed to earth, typically to an x86-based server, and then transmitted and loaded into a “System of Record,” which might be considered the master database.

Medical records can be updated by a medical professional or patient via an end user device or portal, and input can be received from medical devices such as EKG, MRI and X-ray machines. All of this information is then loaded into a master database.

BOLOs (Be On the Lookout), criminal records, wants and warrants are input by various police agencies and transmitted to a master record database that other police departments can query to see whether someone they’ve stopped, or who is in custody, may be wanted in other jurisdictions.

Each of these scenarios has something special about it – a need to know. A doctor or nurse can’t “troll” a medical database looking at arbitrary data; that violates HIPAA. They should only be looking at records associated with the patients they are treating.

Intelligence analysts may only be able to see certain satellite or ELINT (electronic intelligence) feeds, based on their security clearance.

Police in one jurisdiction cannot query or update records in other jurisdictions unless they are pre-approved for a particular case.

This need to know can also be called compartmentalization, or labeling of data. DB2 on z/OS has a technology known as Multi-Level Security (MLS) that allows data to be hidden from users and applications that don’t have a need to know. The best part of this technology is that no application changes are needed; the need-to-know criteria are established between security administrators and database administrators. As a result, when multiple users query the same database, users in different compartments get completely different result sets, without ever learning the full breadth of the database.
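
To make the idea concrete, here is a minimal sketch, in plain Python, of how label-based row filtering behaves. It only models the concept – the label names, the simple linear hierarchy and the sample data are my own illustrative assumptions, not the actual DB2/RACF implementation, which pairs hierarchical levels with non-hierarchical categories.

    # Conceptual model of label-based row filtering (not the actual DB2/RACF code).
    # Assumption: a simple linear hierarchy of labels; real MLS security labels
    # combine a hierarchical level with non-hierarchical categories.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOPSECRET": 3}

    # Each row carries a security label, much as rows do in a DB2 table that has
    # a security label column.
    ROWS = [
        {"subject": "feed-A", "seclabel": "CONFIDENTIAL"},
        {"subject": "feed-B", "seclabel": "SECRET"},
        {"subject": "feed-C", "seclabel": "TOPSECRET"},
    ]

    def dominates(user_label, row_label):
        """True if the user's label dominates the row's label, i.e. may read it."""
        return LEVELS[user_label] >= LEVELS[row_label]

    def query_all(user_label):
        """The same 'SELECT *' for every user; the result set differs by label."""
        return [r["subject"] for r in ROWS if dominates(user_label, r["seclabel"])]

    print(query_all("CONFIDENTIAL"))   # ['feed-A']
    print(query_all("TOPSECRET"))      # ['feed-A', 'feed-B', 'feed-C']

Neither query was changed; only the users’ labels differ, which is why no application changes are required.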

So let’s look at a medical system with multiple hospitals scattered across a broad geography. Each hospital has specialty areas: orthopedics, oncology, pediatrics, etc. There is a Primary Care Physician (PCP) for each individual patient. There is the patient. And there are a number of different medical test devices: MRI, CT scan, EKG and so on. At some point, a patient care record is created and a PCP is associated with that patient. The PCP may order tests on behalf of the patient; the test data is captured, stored and linked to the patient’s record. A cardiologist may see and annotate information associated with the EKG. Any other cardiologist at the hospital may also see that EKG and annotate it. An orthopedic surgeon may look at it, but not annotate it. A doctor at another hospital may not even know that the patient exists unless they are invited to look at the record by a peer or via the patient requesting a second opinion.
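
Those rules boil down to a small access policy. The sketch below is a hypothetical model of that policy in plain Python – the roles, names and rule details are assumptions made for illustration, not how any particular hospital system implements it.

    # Hypothetical model of the care-team policy described above.
    RECORD = {
        "patient": "A. Smith",
        "hospital": "General",
        "care_team": {"dr_lee"},     # the PCP and physicians treating the patient
        "invited": {"dr_patel"},     # outside peers invited for a second opinion
    }

    def can_view(user, record):
        # Physicians at the treating hospital, plus invited outside peers, may view.
        return (user["hospital"] == record["hospital"]
                or user["id"] in record["care_team"]
                or user["id"] in record["invited"])

    def can_annotate(user, record):
        # Only cardiologists with view access may annotate the EKG.
        return can_view(user, record) and user["specialty"] == "cardiology"

    for doc in (
        {"id": "dr_kim",  "hospital": "General",  "specialty": "cardiology"},
        {"id": "dr_ruiz", "hospital": "General",  "specialty": "orthopedics"},
        {"id": "dr_chan", "hospital": "Lakeside", "specialty": "cardiology"},
    ):
        print(doc["id"], can_view(doc, RECORD), can_annotate(doc, RECORD))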

The following diagram shows a manufacturing business that gets a variety of “parts” from different suppliers. It allowed each of its suppliers to check the on-hand inventory, to enable continuous manufacturing and improve supply chain operations. The unintended consequence of this implementation was that each supplier could see the other suppliers’ inventory and price per unit. As a result, devious suppliers could undercut the competition or, worse, collude with their competitors to raise prices.

[Diagram: Commercial MLS]

By turning on the labeled security capabilities of DB2, the manufacturer ensures that each supplier can see only its own records, while employees of the manufacturing company can see all the records in the database. No applications were changed. The manufacturing company did have to collect some additional security information for each supplier in order for this to work properly. You’ll notice the inclusion of an internet address (IP) as part of the security context: the manufacturer can “force” supplier updates to come from the supplier’s site and will not allow a supplier’s employees to log on from home, for example. This helps inhibit a rogue employee of the supplier from compromising the manufacturer’s database.
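
Here is a minimal sketch of that combination of checks – row labels plus the network the request arrives from. The supplier names, parts and address ranges are made-up examples, and the logic is a conceptual stand-in for what DB2 and the security server enforce.

    # Illustrative sketch: each supplier sees only its own rows, and updates are
    # accepted only from the supplier's registered site network.
    import ipaddress

    INVENTORY = [
        {"part": "bolt-10mm", "on_hand": 4200, "unit_price": 0.12, "owner": "ACME"},
        {"part": "gear-xl",   "on_hand":  310, "unit_price": 9.85, "owner": "BetaCo"},
    ]

    SUPPLIER_NETWORKS = {
        "ACME":   ipaddress.ip_network("203.0.113.0/24"),
        "BetaCo": ipaddress.ip_network("198.51.100.0/24"),
    }

    def visible_rows(supplier):
        """Label-style filtering: a supplier sees only the rows it owns."""
        return [r for r in INVENTORY if r["owner"] == supplier]

    def may_update(supplier, source_ip):
        """Updates must originate from the supplier's registered site network."""
        return ipaddress.ip_address(source_ip) in SUPPLIER_NETWORKS[supplier]

    print(visible_rows("ACME"))               # only ACME's inventory and pricing
    print(may_update("ACME", "203.0.113.40")) # True: request from ACME's site
    print(may_update("ACME", "192.0.2.7"))    # False: e.g. an employee's home connection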

There are other examples of production systems leveraging MLS capabilities. Lockheed Martin has operated a secure environment of this kind for many years on behalf of the National Geospatial-Intelligence Agency (NGA) and its mission partners.

But here’s another important distinction from other models. The data operations are somewhat like the Eagles’ song “Hotel California”:

“We are programmed to receive, You can check out anytime you like… but you can never leave”.

That means that when you are viewing the data, you are viewing the “System of Record”. Where you are viewing it from is called the “System of Engagement”. By definition, the System of Engagement can overlay and complement the System of Record by transforming it; it can be a stateless, read-only entity. An X-ray image stored in a database is just a collection of bytes, and the end user may not have the proper viewer installed on their desktop. The System of Engagement transforms the image into something consumable and recognizable by the end user. The hospital doesn’t make a copy of the data and transmit it to other service providers. If it did, it would have to ensure that the new owners of the copy adhered to the same stringent privacy laws for which the hospital is accountable – a logistics nightmare. Instead, the user, whether a patient or a medical professional, accesses a program (the System of Engagement), which could be a virtual desktop or a web service, which in turn accesses the database and remotely presents the requested data to the end user. What reaches the end user is essentially an image; the device is considered stateless and no local copy is saved. Because this is solely a remote presentation of the information, that device can be exempt from privacy audits, since it is understood that no local copy is made. This doesn’t address an end user taking a picture of the screen or writing down information associated with the record being processed; there are other products that can be deployed to catch those breaches of privacy policy, and I can cover that in a later post.
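
A tiny sketch of that flow, under my own simplifying assumptions (the record fetch and the rendering format are stand-ins, not any real product’s API): the System of Engagement reads the bytes from the System of Record, transforms them for the viewer and returns the result without ever writing a local copy.

    # Minimal sketch of the System of Engagement pattern: fetch the raw record,
    # transform it for the viewer, return it, persist nothing locally.
    import base64

    def fetch_from_system_of_record(record_id):
        """Stand-in for a database read; returns the stored bytes (e.g. an X-ray)."""
        return b"raw image bytes for record %d" % record_id

    def render_for_user(record_id):
        """Transform the bytes into something the end user's browser can display.
        Nothing is written to disk; the engagement tier stays stateless."""
        raw = fetch_from_system_of_record(record_id)
        encoded = base64.b64encode(raw).decode()
        # e.g. embed the image inline in a page instead of shipping a file copy
        return "<img src='data:image/png;base64,%s'/>" % encoded

    print(render_for_user(42))   # presented remotely; no copy retained on the device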

Compartments can also be created that contain only a subset of the stored records, similar to a view. Analytic processing might then be done across all database records, looking for patterns, fraud, opportunities and so on, but without including protected personally identifiable information – for example, disease outbreaks by region, trends and risks. Again, this analysis may only be done by someone with a need to know.
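
As a small sketch of that idea (the field names and sample data are my own assumptions), an analytic compartment can see every record with the identifying columns stripped away and still answer the “outbreaks by region” style of question:

    # Sketch of an analytic compartment: aggregate across all records while
    # excluding protected identifiers. Field names are illustrative.
    from collections import Counter

    RECORDS = [
        {"name": "A. Smith", "ssn": "xxx-xx-1234", "region": "Northeast", "diagnosis": "flu"},
        {"name": "B. Jones", "ssn": "xxx-xx-5678", "region": "Northeast", "diagnosis": "flu"},
        {"name": "C. Wu",    "ssn": "xxx-xx-9012", "region": "Midwest",   "diagnosis": "measles"},
    ]

    PII_FIELDS = {"name", "ssn"}

    def deidentified_view(records):
        """The analytic compartment sees every record, minus the PII columns."""
        return [{k: v for k, v in r.items() if k not in PII_FIELDS} for r in records]

    def outbreaks_by_region(records):
        """Count diagnoses per region across the de-identified view."""
        return Counter((r["region"], r["diagnosis"]) for r in deidentified_view(records))

    print(outbreaks_by_region(RECORDS))   # Counter({('Northeast', 'flu'): 2, ...})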

Look here for a video of intelligence analysts dealing with satellite transmissions and leveraging this workflow.

[Video: SatelliteDemo]

It may be hard for some of you, but imagine that the different satellites are actually medical devices. Imagine that the different compartments are associated with various hierarchies of users at the corporate, branch/region, department and individual level. Imagine that the spatial data results of this video, demonstrated using Google Earth, are instead rendered views of X-rays, CT scans and the like. Hopefully, this is a compelling view of the realm of possibilities. One thing that might appear contrary to what I described earlier is that in this video, various users know that a satellite exists even when they don’t have a need to know. Some satellites may be Top Secret, so unauthorized users have no need to know that those particular satellites even exist. To correct that situation versus what the video depicts, if the System of Engagement had required the user to sign on first, they would not have seen the entire list of satellites; the list could have been limited by an additional database query of the satellites accessible to that user. When comparing this to medical devices, however, it is no secret that there are multiple imaging devices, but the results may not be visible to a user, based on the need to know. Many, many options exist. These are just examples to get the discussion started about new possibilities.

There’s another important aspect of this. Any business with DB2 on z/OS already has the System of Record capability. A change in operations management is required, but no additional software license charges are required to implement this. Other platforms have to separate the data (i.e., copy it) to approximate the compartmentalization available on DB2 for z/OS. Analytics can be run against this system of record locally, by products such as the IBM DB2 Analytics Accelerator (IDAA) or Veristorm’s zDoop, a Hadoop solution running on the mainframe. The mainframe is capable of meeting the service level agreements of both the updates and the queries of the database at very large scale.

The Systems of Engagement may be Linux or Windows systems running on virtual desktops or PC servers, as well as within Linux for System z or z/OS application and transaction processing environments. The end user access could be from kiosks (thin client terminals), smart devices, PCs or business-specific devices, e.g., point of sale terminals, ATMs or police cruiser access points. These systems could be hosted in a public or private cloud, or they could be part of an existing system infrastructure. Authentication and access control should be centrally managed across the entire operational infrastructure.

The net of all this is a couple of examples of hybrid computing and collaboration across systems that can dramatically reduce the complexity and improve the efficiency of end-to-end business processes. If you are still compartmentalizing operations by server silos, you may have the unintended consequence of missing some dramatic cost savings or, better stated, cost avoidance. Compartmentalization on a need-to-know basis may initially lead a business toward separation of duties and separation or copying of data. But with the capabilities described here, it’s actually a form of consolidation and collaboration that enables a greater degree of sharing of the System of Record. You might not have to spend more on systems deployment to solve some very complex problems. Happy programming!

Server Fiefdoms Inhibit Optimized Business Solutions

In the 1960s, the IBM mainframe led a transition in business processing from a paper-centric transaction processing environment to an IT-centric one. The combination of the personal computer, the introduction of modems and, later, the Internet changed the IT community from internally facing to customer-centric computing. The introduction of the PC server created “commodity-centric” computing, and, typically, folks running in that environment were against IT-centric operations, as the PC server could bring department-centric computing to individual business units.

The unintended consequence of all this was that fiefdoms were created to manage server silos. Over a decade of server deployments, individual IT organizations may have been created with business-related names (e.g., a Point of Sale org, Analytics org, Web hosting org or Claims administration org). The reality is that each of these organizations might be dependent on a specific server infrastructure. As a result, the introduction of any other server infrastructure – for example, moving from mainframe to PC server, or centralizing on a mainframe from UNIX or PC servers – would be viewed as bad. The reality is that no single server is capable of meeting all the IT needs of a business unless the business is very, very small. And even then, multiple applications, which typically means multiple server instances or operating system instances, will be required.

I am mainframe centric

I have no hesitation in saying I am mainframe centric. That statement, alone and without context, will scare many people away from me as an IT consultant. One of the things I learned very early in my career is that the security of an infrastructure is about people, process and technology. While the mainframe may be considered the most secure platform technologically, my forensic experience at a variety of customers proves that poorly trained people and bad processes were the weakest links in security. More importantly, much of that “poor security” happened at the end user device – formerly a PC, but now also smart devices such as phones and tablets. If those devices aren’t secured with passwords and the enterprise data residing on them isn’t encrypted, then they become the weakest link. And if the user of the device saves their userid and password in their browser so they can reconnect quickly, well, so can the bad guy that steals the device – and now the bad guy has unfettered access to those “more secure” systems that execute transactions or provide data access on behalf of the end user who lost the device. I’ve spent over ten years looking at how back end systems can make front end devices more secure. So I guess I am security centric as well. I’m also web, mobile and application development centric.

Most Application Developers are PC Centric.

If you started out as a mainframe programmer, you probably signed on to the mainframe with a 3270 emulator and used panel-driven or command-line tools to edit files and submit jobs to compile and execute the programs you created. The IT capacity used for this type of development drove up the cost of operating the production mainframes.

The advent of the personal computer changed all that. Windows and Linux desktops provide graphical user interfaces. Fourth-generation tools help you graphically design the logic of an application and generate source code in whichever programming language best suits the operational environment where the program will run. With open system interfaces and common programming languages, one development tool might create code to run on dozens of operating systems and hardware architectures. These are the types of tools used to build most middleware that is sold to run across “your favorite” operating system.

Well, that hybrid development environment didn’t end up being as simple as that. Tiers of deployment platforms were created. If an application was developed on a PC, then the first choice for a deployment platform was typically a PC server. Other platforms, like UNIX and the mainframe, were considered primarily production platforms; they didn’t distinguish development use very well or price it differently. As a result, it became unaffordable to develop for the mainframe because a development group or a new middleware vendor couldn’t afford a mainframe or UNIX server to test their code, so again, by default, most new applications were targeted at PC servers.

Most Web Servers are PC Centric
Most Analytic Servers are PC Centric

Need I go on? A mantra for the client/server computing era was Move the Data to the Application. This led to copies of data everywhere, but also led to theft, loss, data breaches and server sprawl. Virtualization of server operating systems has helped to reduce server sprawl, but security remains complex. Business resilience, environmental needs (floor space, energy, cooling) and labor costs remain highly complex as well.

I said earlier that I am mainframe centric. But I can also say, unequivocally, that the mainframe can NEVER solve all of your business problems by itself. Why is that? Because it is blind and deaf. The 3270 terminal and the punch card are long gone as input/output devices. The modern mainframe requires a graphically enabled front end device, such as a point of sale device, ATM, PC or smart device. It still requires communications, but now it leverages TCP/IP instead of SNA. So any business leveraging a mainframe is now a multi-system business. Even the zEnterprise, with its introduction of the zEnterprise BladeCenter Extension (zBX), can’t solve all of a business’s problems, because it doesn’t handle virtual desktop infrastructure nor manage the deployment of end user devices.

So let’s go back to solving business problems. We don’t need to discuss server types, but we can make some statements that should prove true, irrespective of server deployment model.

  1. Share data – the fewer copies of data, the easier it is to manage security and resilience. Sharing data for read/write access (transaction processing) along with read-only access (query and analytics) will enable a combination of workflows that include real-time analytics (fraud detection, co-selling) in a basic transaction.
  2. Move applications to data – copying applications is far easier and less time-consuming, as well as more secure and resilient, than moving data. Virtualization technologies offer a simple way to bring applications and data together in the same infrastructure, improving latency and simplifying business resilience.
  3. Look for tortured data flows – there will never be a single copy of data, as there should be, at minimum, backup copies and disaster recovery copies. But by reducing the number of data moves and leveraging direct access to data instead of file transfers or unload/reload workflows, a business can dramatically reduce operational complexity.
  4. The fewer parts (servers and data) the better – there will be lower environmental costs and software license charges, and less complexity in security, capacity management and business resilience management.
  5. Use stateless devices/applications for end user connections – the end user wants direct access to data and transactions, but the less data stored on the end user’s device, the better. Cookies should be the limit of the context stored on an end user device. That way, if the device is lost or stolen, no corporate data is lost; it remains stored centrally. This holds for thin client computers as well as for web access to a transaction processing environment.
  6. Never give developers direct access to, or copies of, production data – development systems are generally not production systems. There is no logging, limited or no security and rarely an audit of critical data, which makes them the simplest target for a hacker or insider to attack in order to gain access to personally identifiable information or the corporate jewels. Developers should work with data cleansed through anonymization tools or similar techniques to ensure that the production environment remains protected (see the anonymization sketch after this list).
  7. Measure availability end to end. I’ve seen server shops (of all types) claim four or five nines of availability for their infrastructure. That’s a nice internal measurement. But if the end user, a consumer, is trying to access their data on a back end server and the security server, the web proxy server, a router or some other piece of networking infrastructure is down, then the business is down as far as they’re concerned. Availability should be a business target, not a server-only target (see the availability arithmetic after this list).
  8. Identify users from end to end. When looking at transaction logs, a business should be able to see the identity of the individual that initiated a request. If the logs only identify the ID of a downstream server, then additional logs and correlation, often from more expensive products, are required to identify the end user. The more misdirection, the greater the risk of security exposure and data theft. Ensure that the originating user is easily identifiable at each step of the business workflow.
  9. Industry standard benchmarks are irrelevant to business workflows. They may help distinguish one component alternative from another. But since we’ve already determined that a hybrid environment will be required for production purposes, there are very few, if any, benchmarks that provide a true end-to-end workflow that mimics multiple business operations. Much like lawyers, benchmarks can offer guidance and may explain risks, but they are not the decision makers. The business application owner must weigh the total cost of operations and the incremental costs of maintenance for the entire business operation vs. just the component parts identified by a benchmark.
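
To illustrate pointer 6, here is a minimal, hypothetical sketch of cleansing production rows before they reach a development system. The field names and the hashing scheme are my own assumptions, not a specific product’s method; real anonymization tools do considerably more.

    # Sketch of pointer 6: mask direct identifiers before data leaves production.
    import hashlib

    PII_FIELDS = ("name", "ssn", "email")

    def anonymize(row, salt="dev-refresh"):
        """Replace identifiers with salted hashes; keep non-identifying fields."""
        cleaned = dict(row)
        for field in PII_FIELDS:
            if field in cleaned:
                digest = hashlib.sha256((salt + str(cleaned[field])).encode()).hexdigest()
                cleaned[field] = digest[:12]    # a stable pseudonym, not the real value
        return cleaned

    production_row = {"name": "A. Smith", "ssn": "123-45-6789",
                      "email": "asmith@example.com", "balance": 1042.17}
    print(anonymize(production_row))            # the balance survives; identifiers do not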
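
And to put numbers behind pointer 7: end-to-end availability is roughly the product of the availability of every component in the request path. The component figures below are illustrative assumptions, not measurements.

    # Pointer 7 in arithmetic: chain the components an end user actually touches.
    COMPONENTS = {
        "end user network": 0.999,
        "web proxy":        0.9999,
        "security server":  0.9999,
        "app server":       0.9995,
        "database server":  0.99999,   # the five-nines claim lives here
    }

    end_to_end = 1.0
    for availability in COMPONENTS.values():
        end_to_end *= availability

    print("end-to-end availability: %.4f%%" % (end_to_end * 100))
    # Roughly 99.83% – about 15 hours of user-visible downtime a year, even though
    # the database alone claims five nines.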

I first published “Porell’s Pointers” using Lotus Freelance for OS/2 circa 1990. The pointers covered then were server agnostic and remain true today. Sure, they were mainframe centric then and could be considered so today. But they make sense even if a UNIX system is the largest in your organization. They make sense if a PC server is the largest in your organization.

Published circa 1990 via Freelance Graphics for OS/2

If you can follow these suggestions, and also think about the scale of operations needed for your business, consider the mainframe as a core component of your end-to-end workflow. Unlike most other servers, it excels at large scale, managing service level agreements (SLAs) for read/write and read-only data, providing a cross-platform security infrastructure and managing availability and business resilience at an application level. The combination of z/OS and z/VM with Linux for System z, along with PC servers and desktop or smart devices, will be a winning combination to satisfy the majority of your business problems. But once you look at a hybrid solution, then security, availability, disaster recovery, application development, analytics, capacity management, and backup and archive should be cross-platform or hybrid solutions as well. A wealth of middleware exists that can operate across platforms. This can be the beginning of a new operations model built on business problems rather than server-specific domains. Damn the server fiefdoms! Full speed ahead with an organization that collaborates for shared business success. Happy programming!

Apple and IBM Mobile solution deal could be a boon to z/OS as well

IBM has announced that its strategy is built around big data analytics, cloud and mobile computing. Apple has made an art of mobile computing through its iOS devices and App Store. It’s clear that IBM wants to capitalize on that success and that Apple wants to further entrench itself in the enterprise.

Going back in time, it’s not the first collaboration between IBM and Apple. The most famous relationship was Apple’s use of the PowerPC chip in earlier Macintosh computers.

Lesser known was a consortium, led by Oracle and including IBM, Apple, Sun, Alcatel and Netscape, formed to promote the Network Computer Reference Profile – what would now be considered a thin client computer. This was circa 1996. The consortium was created to help attack the “lock” that Windows and Intel had on corporate desktops. The Network Computer was to be a less expensive alternative to PCs and be centrally managed.
I had the pleasure of demonstrating a thin client connecting to an IBM mainframe at the launch of the reference profile at the Moscone Center in San Francisco. The original IBM thin client was developed with the AS/400 organization (now System i) in Rochester, MN. They really understood the value and importance of these devices, as the 5250 terminal had just been declared a “dead technology” by IBM and the AS/400 needed a new connection device. I was enamored (and still am) with the ability of a thin client to provide the best type of access to a server, with graphics and multimedia support, while leaving the device stateless. In other words, no data is downloaded to the device, so there is less risk of loss or misuse of corporate data. I worked closely with Apple to ensure they had a 3270 emulator available for businesses to connect via Macintosh desktops.

Ultimately, this consortium failed. The chips were too slow. It was before Linux became ubiquitous for embedded devices. There were too many home-grown embedded operating systems that weren’t extensible or easy to update with the latest web browsers and terminal emulators. They were probably ahead of the curve. Now, thin client computers are affordable and an attractive alternative to PCs. I’ll be writing about mainframe possibilities in a future post.

Back to IBM and Apple. It may seem odd, but there’s a link to IBM’s mainframe operating system, z/OS, as well. z/OS began in the 1970s as the MVS operating system. IBM sold a base operating system and then a collection of piece parts on top of it: storage management, communications, transaction processing, databases, compilers, job schedulers, etc. These products were individually purchased, managed and operated, and quite a collection of support personnel was required back then. In the 1990s, along came OS/390 and, later, its successor, z/OS. The same products are now integrated into a single system. This simplified installation, customization, ordering and management. It sped up problem resolution and improved uptime. IBM has kept a predictable release schedule and demonstrated business value in each release. In many cases, the cost of a release remained the same as, or less than, the prior release, to ease adoption and provide the customer with a technology dividend.

Let’s compare that to the evolution of Apple’s MacOS. There’s an awful lot of function in a single operating system image there as well. At the kernel level, it can run the bash shell and look a lot like a Linux system. There is a tremendous amount of middleware shipped at no additional charge with the system. Recently, I was trying to run FTP and some other basic, UNIX-like commands on Microsoft Windows. I found that even where the code was available, it was not comparable, and getting a level of support comparable to the Mac would require buying additional software. At the end user interface, there is a tremendous amount of functionality. The installation of new applications is simple. The systems management, at a high level, is fairly simple, but can be very complicated when it needs to be for advanced users. Apple’s iOS mobile system inherits many of these characteristics as well, but on a different chip – the ARM chip rather than the 64-bit x86 Intel chip. The ARM chip is a topic for another day.

Both z/OS and MacOS get labeled as expensive, out of the box, when compared to other systems. Each suffers from a lack of understanding of the true cost of ownership of its respective hardware and software. Both have excellent security reputations. Both run at higher levels of availability than the systems with which they are being compared. Looking inside those statements, a business must recognize that there is certainly an element of technology behind those values, but also people and processes. Neither system is hacker-proof, nor truly fault tolerant. However, through their wise use of technology, including close collaboration between hardware and software development and automation of processes, each of these systems can handle difficulties and risks that other “comparable” systems cannot. It has always been hard to put a price on downtime, or on avoiding planned and unplanned outages. While it’s true that some folks complain that the skill set necessary to operate a mainframe is “different” from other platforms, generally fewer people are required to manage the aggregate amount of work running on a mainframe than the comparable amount of work running on other systems. The reality is that people can learn to work with anything, as long as they are allowed to.

One of the biggest differences between the IBM mainframe and Apple systems is that IBM no longer supplies the end user device necessary to operate and work with the mainframe. There is no “face” to the mainframe. Originally, the mainframe was accessed by a 3270 “green screen” command-line terminal or, worse, punch cards. Now 3270 emulators are available on just about every “smart device” there is, but the 3270 interface remains a boring command-line interface out of the box. Graphics and multimedia support, as well as touch pads and speech recognition, are now state of the art for end user computing, and z/OS needs an upgrade in that arena. New technologies, such as the z/OS Management Facility, have built-in web access for a variety of functions. IBM and independent software vendors have taken advantage of this new capability to further improve the z/OS management experience and reduce the skills it requires.

There can be even greater synergy in the areas of transaction pricing and application development.

Back in May, IBM announced new mobile workload pricing for z/OS. Given the volume of new mobile applications, and the fact that many of them might be just a query (e.g., an account balance or order tracking) and not revenue-generating for a business, IBM is offering discounted, lower measured-usage pricing for these types of transactions. This could be a significant benefit for existing customers and for future transactions developed to exploit mobile technologies. The measurement tool began shipping on June 30th.

The IBM Rational organization, responsible for application development tools, acquired Worklight, a product tailored to enable rapid deployment across a variety of mobile operating systems from a single code base. Now that it is integrated with the suite of Rational products, businesses can build applications that support back end transaction processing and database serving on the mainframe while, at the same time, providing new end user access and experiences from a wide range of smart computing devices, including phones, tablets and desktops.

The only thing that could stop this integration of the best of both worlds might be a business itself. By isolating development, deployment and operational teams in platform silos, e.g., mainframe vs. UNIX vs. PCs vs. BYOD, a company could actually have the unintended consequence of blocking integration and overall improvement of the end user experience, both internally and for its customers.

Here’s hoping that the Apple–IBM mobile collaboration is significant enough to get businesses to open up their apertures so that true integration of systems can occur, from the end device all the way through to their business-critical systems. The reality is that the end-to-end workflow is their mission-critical business, not one server or device vs. another. When fewer systems are involved in the overall integration of business processes, a business should see benefits in overall security and availability, and easier compliance with business initiatives. Happy programming.

Rocket Software is Making z/OS more Open

Twenty years ago, IBM introduced OpenEdition MVS, its first foray into “opening” the mainframe to a new community of developers. This release included the Shell and Utilities priced feature, whose cost varied with the size of the mainframe. If you consider that only a handful of people might actually have used this code when it originally shipped, the “cost per seat” was astronomical compared to what was free or inexpensive on desktop systems. This was corrected when IBM began shipping the feature as part of the base of the new OS/390 operating system, which dramatically reduced the cost and skills needed for new workload development on the mainframe for customers and vendors. But without the revenue associated with the previously priced feature, IBM didn’t keep up with the open source community, and these tools quickly fell behind. This was an unintended consequence.

Over the years, IBM worked to resolve this through relationships with other companies and its own developers, but the net was that the code was still aging – until IBM met with Rocket Software. Rocket has been in the business of supporting mainframe customers for over twenty-five years. IBM found that Rocket was using open source tools within its own z/OS development team. Given the gap in true “openness” for z/OS, Rocket decided to release its source modifications and z/OS binaries into the open source community. Through the Rocket web site, any business can download the z/OS binaries at no charge, just as they might do with Linux offerings. If a business is looking for support for those binaries, a fee-based offering is available, just as one might find from the paid Linux distro providers.

Rocket originally provided five ported tools as a trial last year. This month, Rocket has delivered over four times that number of tools. This re-opens the UNIX System Services development environment of z/OS. This latest group of ported tools can be used to bring more open source middleware and utilities to z/OS, whether by customers, other vendors or Rocket Software. Rocket is working to provide a level of skills portability across platforms and to reduce the knowledge required to create, build and operate on the mainframe, regardless of whether the deployment is on z/OS, Linux or z/VM. Rocket has also developed Application Lifecycle Management for Linux on System z. This new offering is currently available as a beta. Its goal is to provide greater management of Linux applications that are natively developed and managed on, and from, the mainframe.

Now, let’s dream about how the new ported tools can be used on z/OS. Some basic items: make will help you take other open source code and get it built for z/OS. If you are considering some of your own development activities on z/OS, cvs can be deployed as a source code library management tool. In every instance, it’s all about how open source software can be integrated with existing applications and databases to create something new that’s better than a collection of software that merely runs across platforms. WebSphere developers who work on Linux or Windows systems will find that some of these new tools add value, ease deployment and improve skills portability for building applications for z/OS. If you really want to go crazy, the Apache web server is now part of z/OS. Add in PHP and DB2 and you can have WordPress running on z/OS. Now why have WordPress? You might integrate it directly into your business applications.

Rocket’s not done adding to this list. If you ask nicely, they might be willing to give you an update to bash – a shell program that’s common on all Linux systems and on MacOS. In fact, if there are other tools that you are interested in, let them know via their contact site. The ported tools can be accessed here. The Application Lifecycle Management for Linux tool can be accessed by sending an email here. Happy programming.

Unintended Consequences

I’ve found many times in my career that a decision made for one reason had unintended consequences in another area. Sometimes these were good things and sometimes they were not. I’ve decided to write about some of these situations in this blog, so you’ll see this title as a recurring theme throughout my writings.

Here’s a list of the items I’m thinking about writing about. Let me know which you find most interesting and I’ll try to get those done earlier than the others:

  1. z/OS “stabilizes” its Shell and Utilities offerings at very old code levels – Rocket Software “fixes” that. Done.
  2. OS/390 and z/OS are a better package, but they lost their sales channel. Now Solution Editions and new workloads help to drag z/OS. TCO and High Availability remain king.
  3. Apple and IBM mobile deal is pretty cool, but reminds me that Apple MacOS and z/OS are a lot alike – tons of value in a single package  – Done
  4. Use of z/OS Unix System Services introduces “surrogate” security – which might end up giving too much authority to an individual – what can be done to reduce that risk.
  5. MVS and z/VM might be considered the first cloud platforms, but no one originally marketed them that way. Now, ASG’s Cloudfactory provides an Amazon Web Services-like front end for z/OS workloads. Done.
  6. The IBM mainframe is advertised as hacker-proof, but the weakest link is not the mainframe, it’s the end user interface and the people using it. What can be done to help prevent problems? Use of Intellinx zWatch is one method that a wide range of customers use to prevent human errors across platforms.
  7. Application development on the mainframe wasn’t simple before the IBM Rational products came along and the Unit Test feature, also known as the zPDT, was added. This was difficult to bring to market. For the first time, IBM separated development pricing from production pricing.
  8. Linux is ported to S/390 in December 1999.  Novell is offered the opportunity to be the first vendor on Linux on S/390. They say no.
  9. Human Resource lessons learned in a 30+ year career.
  10. High availability lessons learned. It’s not always the technology, it’s the process.
  11. Multi-Level Security – probably the answer to a lot of cloud sharing problems, but no one knows what it is or does. It’s in production in some very secure locations today. Done.
  12. Thin Client Computing and usage with Mainframes