All posts by Jim Porell

Can you reduce your operations budget by $2 million or more?

I’ve written earlier about silos of computing and the value of hybrid computing. Typically, any business with 250 server instances or more should be able to save, or more accurately avoid, $1,000,000 in IT expenses. Businesses with over 500 servers may be able to reduce budgets by over $2,000,000! What’s the catch? A business or government agency must let a small team of qualified people analyze their current operational environment.

Reduced Complexity yields cost avoidance

The approach is simple enough. Take a look at complex or tortured data flows: Where do transactions begin and end? Where is that data copied to other systems for analysis or for sharing with other departments or organizations? Our team of experts, using a proven method, will look at how a business can share that data across multiple organizations. Instead of moving the data to the applications and users, we look at moving the applications to the data.

The result should be a reduction in complexity. The added benefits should include improved security, business resilience and end-to-end response time.

It’s not really savings if you never spend the money

The analysis of the current IT environment is a no-charge service. We look at your current deployment model and costs and suggest a different deployment model that addresses sharing. Will it be disruptive? The goal is to keep it as simple and transparent as possible to end users – a business’s consumers and its own staff. It will change operational processing, but it should be an easy transition for knowledgeable IT workers. It’s about cost avoidance, not savings. The business may need to spend some money to get to the new operational model. I say may because they might already have many of the components available today. We expect a quick return on investment that results in an annual cost avoidance in IT operations.

Take a test drive

Sometimes, it’s hard to look at a spreadsheet and say, sure, I believe these figures. As they say, figures don’t lie, liars figure. There is no subterfuge going on here. In fact, for many of the future states that get proposed, our team can create a proof of concept demo of the end state. We’ve got a lab that has a wide variety of servers, operating systems and middleware installed. Cloud, analytics and mobile computing are all in scope. Most important, there is a lot of skill and experience behind the operations of the lab.

The takeaway

Our team will create documentation of the “As Is” state and the proposal. This can be used by the business to get funding from management, to create a Request for Proposal (RFP) or for anything else that benefits the organization.

In addition, we can offer continuing services to help carry out our findings, working with the customer or a third party of their choice.

This exercise can fail. But why?

The primary reason for failure is “organizational inefficiency”. Better stated, internal politics. I’ve written about silos of computing. Well, each silo has a leader. If a new operations model reduces the silos, shares operations and facilitates collaboration across organizations, it could scare some leaders who are attempting to protect their fiefdoms. History shows that true leadership means looking at the good of the business. Leaders who return profit to the business become better leaders in the long run. That’s the kind of business that will benefit from this service.

The business also needs to make its people available to answer questions about the current state. Yes, this might interfere with their current jobs, but the interviews are focused to minimize the time required of those employees.

Get started on a simpler future

We’ve got qualified people ready to work with businesses wherever there is a need.  Feel free to contact me and we’ll begin working with you immediately. Happy programming.

 

If you started with computers in the 1990s and hated mainframes – you were right

Wow, I said it. It’s hard for some of my peers to admit, but looking back, there is a lot of truth to that sentiment. I’m not talking about traditional transaction processing used for insurance claims, point of sale, ATMs, travel reservations, etc. That’s where money was being made and continues to be made, with many businesses leveraging mainframes as their System of Record.

No, that sentiment is meant for folks who were, and are, interested in client/server deployments. Folks who wanted to use modern application development tools. Folks who wanted to incorporate multimedia and streaming into their workflows. These might be considered the System of Engagement today. Oh my goodness, the mainframe was not at all prepared to handle that work. Decisions were also being made for departmental computers. Can you imagine a mainframe of that era in a closet? A closet? Where was the chilled water, the raised floor, the humongous air movers?

[Image: IBM mainframe]

It was a traumatic change within IBM to begin the modernization of the platform: first, keep it relevant where it was delivering its “legacy” value (the prime objective), and second, make it relevant to capture new workloads. This was something tolerated, but not completely embraced, at the IBM executive levels. Ignoring the success of “desktop computing” and new business opportunities, I always felt that the mainframe sales mantra was kind of the opposite of the famous Star Trek line: to boldly go where we have already sold before. So the real goal appeared to be to keep our current mainframe customers happy.

The mainframe is a dinosaur. The mainframe is dead

There were voices, everywhere, saying the mainframe was dead, that it was a dinosaur. It chilled the hearts and minds of senior IBM executives too. There was a “new breed” of executive that wanted to grow new business lines, like the PC Server and Power Systems. Where did their funding come from? The profit of the mainframe. As a result, mainframe R&D budgets were challenged. And even within the mainframe, as growth opportunities were considered, the development budget was spread even thinner across a wider variety of efforts. Some believed it was a cash cow from which new opportunities should be funded. And then there were the mainframe believers who had to fight “the new status quo” to maintain their budgets. I could go on and on about these political battles inside IBM. There’s some TMZ-quality stuff, but not what I want to discuss here.

Now, if you don’t want to read the story about what changes were made and how they were made, you can skip to the Summary of how bad it was and how good it has become.

The long road toward making the mainframe relevant to new workloads

Instead, I’d like to tell you about the changes that were made to make the platform relevant to new business opportunities. In the process, the positioning of the mainframe changed. It’s still a terrific System of Record, but now, unlike in the 1990s, it can be a viable System of Engagement for many functions, though not all. Because if I said all functions, I’d be lying every time I mentioned hybrid computing!

Let’s start with a big mainframe app that made the decision to go “all distributed” and never looked back. That had as much to do with money as it did with technology.

Dassault never looks back. Goodbye mainframe

Dassault Systèmes’ CATIA application is a CAD/CAM engineering modeling and rendering application. They were charging $8K a user for the MVS version. Each user had a specially built 3270 Graphics Adaptor for model renderings on a “green screen”.

[Image: IBM 2250 graphics display]

There were so many instructions required to do the renderings that each terminal had its own hardware interface. At the time, there were only 4096 interfaces (called Unit Control Blocks, or UCBs) on the system. As more engineers created modern airplanes, the customers had to buy more systems, which were not cheap to acquire or operate. Ultimately, the z/OS operating system eliminated the constraint on the number of UCBs. That would have greatly helped CATIA customers, and it had the added benefit of helping all customers when Parallel Sysplex was introduced later.

Dassault grew tired of the proprietary 3270 GA box as CATIA’s graphics rendering device. They wanted color and simpler graphics.

[Image: CATIA rendering of actuators]

This is just what the UNIX community was demonstrating. So they created a new product, based on AIX. They thought the UNIX-based version should charge about $10K a seat. Some cooler graphics were worth the price, but not too far off the current customer price. Then they did a market study. They found other UNIX-based engineering programs were getting about $25K a seat, and those systems were selling well. What a profit margin! So they decided to charge $22K a seat in order to undercut their competitors. They also yelled from the rooftops “the mainframe is dead” in order to transition to the new model, but more importantly, to the new revenue and profit stream. What was surprising to me was that Dassault was partially owned by IBM. My job was to talk to the “chief yeller” and convince them of the “new mainframe strategy,” which was still vaporware from their perspective. All requests to stay on the mainframe fell on deaf ears, and rightfully so, in hindsight.

SAP tells IBM what’s wrong with the mainframe

A “big mainframe” app that transitioned to commodity systems was from SAP, which was mainly a bunch of ex-IBMers who had a really good idea. SAP R/2 was created with a mainframe back-end application and database server that also drove the green-screen terminal front ends.

SAP R/3, the next major version, was a “client/server” app that was all about the end user experience (GUI) and commodity server deployment models. It didn’t run on the mainframe when it first came out. SAP was one of the voices bragging about the “death of the mainframe” in order to transition customers to their new product. It took years of negotiation, modernization of the mainframe infrastructure and pricing changes for SAP R/3 to “return” to the mainframe, and it was only a partial return: the database came back entirely, then the data access methods and some of the applications. The presentation layer has expanded beyond the desktop into mobile and web services. The original application programming language remains the same, though it has been augmented with Java.

Let’s peel the onion and look into modernization specifically for SAP R/3. In my opinion, it was that application, or better stated, that company that forced IBM to change the mainframe. IBM wanted new workloads. Here was a big one that took business away…Let’s win it back.

Network Performance and System Integrity

First, client/server was based on the internet and local area networks. They were rapidly transitioning to TCP/IP hosted networks – an open network. IBM was still stuck on SNA, its proprietary network, which was actually a de facto industry standard, as most systems could interoperate with it. Today, SNA applications can run over TCP/IP… it’s about layering, but that’s too detailed. An implementation of TCP/IP for MVS is created. Check the box. Done. Well, wait a minute… how well does it perform, SAP asks. It takes about 186,000 instructions to send a small chunk of data out and receive the acknowledgement of successful delivery. By contrast, it takes AIX about 18,000 instructions to do the same. So MVS is about 10 times slower communicating with TCP/IP than other platforms. Well, no, it’s actually worse than that. The SAP R/3 application is very chatty. The application server makes a network connection to the database server on the mainframe. For each end user request that ends up making a database call, there are actually 26 send/receive pairs per transaction. So a single SAP transaction burns roughly 26 × 186,000 instructions of network overhead on MVS – about 260 times the cost of a single exchange on AIX. At the mainframe hardware and software price levels, that isn’t going to sell well. I also made a joke of this at the time. What’s Lassie, RinTinTin, Benji and TCP/IP on MVS?

Three movie stars and a dog. Drum roll please. This was a bad situation. Unfortunately, this joke lasted several years. A lot of effort went into looking at alternative implementations.
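To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The per-exchange instruction counts are the ones quoted above; everything else is simple multiplication, not measured data.

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
MVS_INSTRUCTIONS_PER_EXCHANGE = 186_000   # early TCP/IP for MVS, per send/receive pair
AIX_INSTRUCTIONS_PER_EXCHANGE = 18_000    # AIX, per send/receive pair
EXCHANGES_PER_SAP_TRANSACTION = 26        # SAP R/3 app server <-> database chatter

mvs_per_txn = MVS_INSTRUCTIONS_PER_EXCHANGE * EXCHANGES_PER_SAP_TRANSACTION
aix_per_txn = AIX_INSTRUCTIONS_PER_EXCHANGE * EXCHANGES_PER_SAP_TRANSACTION

print(f"MVS network overhead per SAP transaction: {mvs_per_txn:,} instructions")
print(f"AIX network overhead per SAP transaction: {aix_per_txn:,} instructions")
print(f"Per-exchange ratio: {MVS_INSTRUCTIONS_PER_EXCHANGE / AIX_INSTRUCTIONS_PER_EXCHANGE:.0f}x")
print(f"One MVS transaction vs. one AIX exchange: {mvs_per_txn / AIX_INSTRUCTIONS_PER_EXCHANGE:.0f}x")
```

That last ratio, roughly 270x, is the basis for the “about 260 times” figure above.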

Anytime there is a network, data must move from one system to another system. In a terminal environment, such as a 3270 screen update, only about 100 bytes of data might be transmitted. But a video could be millions of bytes. IBM, from its outset, has been historically great at creating architectures. Some architectures have layers of functionality. Communications or networking architectures are famous for the seven-layer diagram, where each layer has an architectural purpose: the application, transport, session and network layers being a subset.

[Image: OSI seven-layer model]

IBM has an image processing application, originally developed for medical records, called ImagePlus. Because the original code base rigidly adhered to those architectural layers, the performance of image processing (and later streaming) over the network was horrendous. The physical image was COPIED TWELVE times, once for each layer and a few times within the application, as it traveled through the operating system. Eventually, the networking software was modified so that it could copy the data from the application buffers directly to the transmission wires after a quick test of the application’s data integrity (i.e., making sure there would be no buffer overflows and no attempt by an application to send system or privileged data it didn’t have the authority or “need to know” to access. This data integrity testing, by the way, is inherent throughout the mainframe operating systems and middleware).
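The copy-at-every-layer problem is easy to illustrate outside the mainframe. Here is a minimal Python sketch, purely illustrative and nothing like the ImagePlus or z/OS code, contrasting a pipeline that copies a buffer at each of twelve hops with one that passes a zero-copy view of the same buffer.

```python
# Illustration only: copying a payload at every layer vs. passing a view of it.
image = bytearray(10_000_000)      # stand-in for a large medical image

def hop_with_copy(buf):
    return bytes(buf)              # each hop duplicates the entire payload

def hop_with_view(buf):
    return memoryview(buf)         # each hop passes a pointer-plus-length, no duplication

copied = image
for _ in range(12):                # the twelve copies mentioned above
    copied = hop_with_copy(copied)

view = image
for _ in range(12):
    view = hop_with_view(view)

print(len(copied), len(view))      # same logical data, very different memory traffic
```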

So the ImagePlus benchmark was also critical to understanding how to rewrite the TCP/IP stack. If the system knew the SAP application server was running on an AIX server that was channel-attached to the mainframe, the new performance was 5,000 instructions (it didn’t need to do some of the network error handling or routing, and this saved instructions). If the application server was attached via a router, then it was 10,000 instructions. Hallelujah… we were better than the rest of the world…

Data Character Translation

Not yet, says SAP. Those desktops, those application servers on Windows and UNIX/AIX? They like to read and present data in ASCII format. Your mainframe and DB2? They like to process data in EBCDIC format. It takes seven instructions to translate each byte. So all that network gain that was just achieved is lost to data translation. As a result, effort is put into the pathlength of data translation. Eventually, it gets down to three instructions per character. With the network savings, end-to-end performance is now roughly equivalent to a commodity system. On price/performance, it’s still not good enough. As a result, the mainframe hardware, operating system and DB2 middleware are changed to natively support Unicode characters, wide characters (for character-based languages) and many other code pages. There was, and is, no longer a penalty for code conversion. Finally, after several years of work, SAP R/3 sales of hybrid computing began.
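Python happens to ship code-page converters, so the gap is easy to see for yourself. A small illustration: ‘cp037’ is one common US EBCDIC code page, and every byte of every result set once had to make a trip like this.

```python
# EBCDIC vs. ASCII round trip using Python's built-in code pages.
text = "SAP R/3 order 4711"

ebcdic_bytes = text.encode("cp037")   # how the mainframe side traditionally stored characters
ascii_bytes = text.encode("ascii")    # how the Windows/UNIX application servers expected them

print(ebcdic_bytes.hex())
print(ascii_bytes.hex())

# The translation is cheap for one string; at seven (later three) instructions per
# byte it added up quickly across large result sets, which is what drove native
# Unicode support into the hardware, z/OS and DB2.
assert ebcdic_bytes.decode("cp037") == text
```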

New workload pricing

But the transition isn’t complete. The introduction of Linux on System z leads one to believe that the mainframe is a viable application server. Not so fast, says SAP. IBM charges software by the MIPS or capacity of the ENTIRE machine. So if a customer adds one processor to run Linux on a mainframe that has 9 existing processors, the Linux license charge would be for 10 processors’ worth of work vs. the single processor that is executing it. I’m going to rapidly jump ahead, but specialty engines were created such that new workloads, such as Linux, Java on z/OS, distributed connections to z/OS databases, some z/OS system utilities and more, are charged by the actual engine or capacity of the workload vs. the entire capacity of the machine. Finally, incremental workloads could be added to the platform at or near commodity prices for the equivalent work. But we’re still not done with this evolution in hybrid computing.
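The pricing shift is easiest to see with made-up numbers. The sketch below uses a purely hypothetical per-engine license charge; the point is only that the license base shrinks from the whole machine to the one engine actually running the new workload.

```python
# Hypothetical figures purely to show the shape of the change; real MIPS ratings
# and license charges are set by IBM and the ISVs.
total_engines = 10             # 9 general-purpose processors + 1 added to run Linux
specialty_engines = 1          # e.g. the engine dedicated to the Linux workload
price_per_engine = 100_000     # made-up annual license charge per engine of capacity

full_machine_charge = total_engines * price_per_engine          # old model: charge the whole box
specialty_engine_charge = specialty_engines * price_per_engine  # new model: charge the workload's engine

print(f"Charged on full machine capacity: ${full_machine_charge:,}")
print(f"Charged on the specialty engine:  ${specialty_engine_charge:,}")
```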

Stop copying data. Share memory instead

Avoiding data moves was critical to the success of the “new” TCP/IP within z/OS. So next, the same concept is applied WITHIN the mainframe. And this takes several iterations with continuing benefit.

In a blade server or rack-mounted system, a special backplane or top-of-rack switch may be deployed so that communications within a ‘server box’ can go point to point, a la the mainframe channel-to-channel interface, and avoid router overhead. For z/VM with hundreds of Linux images running, and for z/VM to z/OS within the same physical server, IBM created HiperSockets, which use dedicated hardware memory, instead of wires, to facilitate communications within a physical server. The most recent announcement upgraded the HiperSockets technology to leverage RDMA memory that can be shared between server images, so that only a pointer to the data is transmitted. Each system can have direct access, eliminating intra-server copying of the data.
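As a rough analogy, and nothing more, here is what “transmit a pointer, not the data” looks like in ordinary Python using shared memory between processes on one box. HiperSockets and RDMA operate at a very different level, but the idea is the same.

```python
# Conceptual sketch: share a buffer by name rather than copying it over a socket.
from multiprocessing import shared_memory

payload = b"transaction record 12345"

# "Sender" places the data in shared memory once.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# Only this small handle needs to be communicated to the other side.
handle = shm.name

# "Receiver" attaches to the same memory by name; no second copy of the payload is made.
peer = shared_memory.SharedMemory(name=handle)
print(bytes(peer.buf[:len(payload)]))

peer.close()
shm.close()
shm.unlink()
```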

Increased capacity and memory available for new workloads

All of the above-mentioned activities result in fewer data moves within the operating systems, virtualization layers and hardware servers themselves. This avoids the instructions, real memory and latency required to do data moves, which leaves processors and memory available for other workloads and allows greater scale of the environment. And because the mainframe has been doing this for over 40 years as a balanced architecture across instructions, memory, networks and data, with instruction pipelining, many levels of memory caching and more, it is capable of putting, colloquially, 10 pounds of “stuff” in a 5-pound bag without fear of breaking. Said another way, there is no fear in running the system at 100% utilization for very long periods of time.

The “Legacy” workloads get modern as well

So that’s just a snippet of the technology that went into “modernizing” the mainframe for new workloads. There is significant other infrastructure, such as the Parallel Sysplex, Geographically Dispersed Parallel Sysplex (GDPS), the evolution from bipolar to CMOS technology, reductions in cooling, electricity and floor space, the packaging of “spare” parts in the box for failover/fault avoidance and on-demand capacity upgrades, and more, that made the mainframe better for traditional workloads. Over time, those benefits have applied to the new workloads as well.

But then, how much does it cost?

Historically, the mainframe has been considered an expensive alternative to “commodity” platforms that are “just good enough”.  Much has been done to change the pricing structures.

Technology Dividends

Since the introduction of OS/390, IBM and other vendors have been providing technology dividends such that the price per MIPS has decreased regularly with most new processor introductions.

Capacity Backup

The “z” in System z stands for zero downtime. While not reality, it is a goal, both at the technical level and through software license charges. IBM introduced Capacity BackUp (CBU) pricing for backup or disaster recovery servers. A fraction of the production price is paid for the hardware server and no software license charges are paid. Many other vendors have accepted this model as well. Why? Because a business isn’t getting any productive work out of the backup servers. When production work moves over to the CBU server, the software license charges transfer to that machine. A business may do several failovers a year to test recovery operations without incurring a license charge. Compared to “commodity” servers, this can be a tremendous savings. In those environments, multiple servers may be required and each of those will be charged a production license fee.

Development pricing

If a business wants to attract more workloads, then it needs to cater to application developers. Those developers need a sandbox or low-priced system to create that code. Unfortunately, software license charges being what they were, all MIPS on the mainframe were created equal and treated as production. Rational was a brand acquired by IBM. They had created some fantastic tools that worked across platforms and now included OS/390 and later z/OS. Applications built with those tools would require CICS, IMS, DB2 and other vendor middleware as the target production environment. However, IBM was charging full production prices for those licenses, even for development. Most other vendors, including Microsoft, Sun, Oracle and HP, were giving away the runtimes when they sold their development tools. As a result, the mainframe was not targeted by most vendors as a viable deployment platform because the “cost of entry” was way too high.

IBM finally released to vendors a mainframe architecture implementation that ran on PC and Power servers – the System z Personal Development Tool (zPDT). This created a competitive cost for vendor developers to target the mainframe as a production platform. It was several years later that IBM decided to make this available to businesses or customers. Finally, there was a common tool set (Rational Developer), free runtimes and test environments that were priced competitively with “desktop” development tools. Better yet, a developer could “take a mainframe home” with them, as the zPDT could run on a ThinkPad laptop.

Parallel Sysplex pricing

Clustering computers is an important way to scale. IBM invented the Parallel Sysplex architecture, which accomplishes three things, two of which are huge differentiators from “commodity” server clustering:

  1. Load balancing work across the cluster
  2. Shared Lock management of database and file records across the cluster
  3. Shared cache of the database and file records across the cluster, allowing all cluster members to have direct access to the data.

The direct access to shared locks and the data cache allows additional servers to be added to the cluster without having to reorganize or partition the data. The performance is such that there is linear scalability as each new cluster member is added, up to a maximum of 32 systems.
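To make the idea of the shared lock and cache structures a little more tangible, here is a toy model in Python. It illustrates the concept only (one structure every member can reach directly) and bears no resemblance to the actual Parallel Sysplex protocols or coupling facility internals.

```python
# Toy model of the concept: a shared lock table and shared data cache that
# every cluster member accesses directly. Not the real coupling facility.
class ToyCouplingFacility:
    def __init__(self):
        self.locks = {}   # record id -> owning system
        self.cache = {}   # record id -> latest committed copy of the record

    def acquire(self, record_id, system):
        owner = self.locks.setdefault(record_id, system)
        return owner == system          # granted only if free or already held by this system

    def release(self, record_id, system, new_value=None):
        if self.locks.get(record_id) == system:
            if new_value is not None:
                self.cache[record_id] = new_value   # every member sees the update
            del self.locks[record_id]

    def read(self, record_id):
        return self.cache.get(record_id)            # direct access, no repartitioning of data

cf = ToyCouplingFacility()
assert cf.acquire("ACCT-42", "SYSA")
assert not cf.acquire("ACCT-42", "SYSB")            # SYSB must wait; integrity is preserved
cf.release("ACCT-42", "SYSA", new_value={"balance": 100})
print(cf.read("ACCT-42"))                            # any member can read the shared cache
```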

Software was then discounted for customers who chose to use the Parallel Sysplex as a clustered model over a multi-system model with non-shared data.

New workload pricing

This was discussed earlier. The net is that specialty engines were defined that allowed no software license charges or deeply discounted license charges for Java, Linux and other workloads.

Hybrid Computing pricing

With the zEnterprise servers, the mainframe added the ability to directly attach an IBM BladeCenter and, in the process, gained a dedicated communications channel and a dedicated management channel between the two devices. I already described, when discussing the SAP R/3 deployment between AIX and z/OS, how performance can improve with channel attachment.

The IBM DB2 Analytics Accelerator (IDAA) is a query server, running on x86 blades, that gets direct access to DB2 on z/OS. By shipping queries over to the IDAA server, read/write operations can continue on z/OS while the read-only query is parallelized across multiple x86 engines, with performance improvements that can be 300 to 1,000 times better than if run on z/OS. More importantly, the IDAA is “just an engine”. Security of the database and audit remain within DB2 for z/OS. Data copies over the network can be avoided. Some copying is inevitable, but flash copies can be made in moments, instead of long-running extract, transform and load (ETL) jobs to other platforms. This provides a significant advantage in completing near-real-time analytics and enables new decisions to be made before a transaction completes. There are too many cost savings to mention, but some are: less disk space, faster response time, improved security, better scale, less network bandwidth consumed, etc.
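The routing idea can be sketched in a few lines. To be clear, the function and the accelerator interface below are hypothetical stand-ins; in the real product the routing decision is made inside DB2 itself based on cost and eligibility rules.

```python
# Hypothetical sketch of the offload decision: read-only analytics go to the
# accelerator, read/write work stays with DB2 for z/OS (the system of record).
def route_query(sql):
    statement = sql.lstrip().upper()
    if statement.startswith("SELECT"):
        return "accelerator"   # long-running, read-only query: parallelize on the x86 blades
    return "db2-on-zos"        # inserts/updates/deletes keep running against the system of record

for q in ("SELECT region, SUM(amount) FROM claims GROUP BY region",
          "UPDATE claims SET status = 'PAID' WHERE id = 7"):
    print(route_query(q), "<-", q)
```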

Mobile pricing

In the 1960s and continuing through the early 1980s, it was agents of a business that executed transactions: travel agents, bank tellers, claims administrators, ATMs and point-of-sale terminals. The consumer watched. These transactions were typically short, easily measured and predictable. The majority of the transactions occurred during business hours, such as 9AM to 5PM. Hardware and software were priced accordingly: the size of the machine dictated the overall software license charges.

The “Internet of things” has changed that. Consumers can now do things on their own where an agent was previously required. They can start a claim, transfer money, deposit a check, buy tickets to performances and for travel. They can do this at any time of the day or night. They can query prices as often as they wish, waiting for the “sale price” to be good enough to actually purchase something. This has dramatically grown the number of transactions being executed and kept the Systems of Engagement and Systems of Record up around the clock. Any downtime means potential and real loss of business. In April of 2014, IBM introduced mobile workload pricing to provide a discount for these types of consumer-issued transactions. The goal is to make the monthly pricing more predictable and comparable with “commodity” platforms.

Summary

How bad it was (1990s)

Let’s review how bad the mainframe was back in the early 1990s.

  1. Water-cooled, very large, heavy servers that required large amounts of electricity and cooling.
  2. Internet (TCP/IP) networking that was horrendously slow at both the hardware and software levels.
  3. A green-screen, command-oriented interface (similar to the DOS shell that lived a very short while on PCs). Yes, you could “screen scrape” to make it look more graphic, but many considered that “lipstick on a pig”.
  4. A communication architecture that was inherently inefficient, as it copied data between system components many times before it went on “the wire”. In PC LAN terms, it was a Ring 4 implementation (and worse) instead of Ring 0.
  5. The wrong character set: EBCDIC, which required software changes and data conversions to and from ASCII or Unicode.
  6. Expensive software licenses for production and development.
  7. Proprietary, green-screen-oriented development tools.
  8. Proprietary programming interfaces.
  9. Lack of Commercial Off the Shelf (COTS) software for new workloads.

How much better it is today

  1. Comparably sized servers to commodity systems that use less electricity and cooling.
  2. Dramatically faster Internet (TCP/IP) connectivity for inter-system, intra-system and intra-cluster communications.
  3. While the command line interface is still available, a web-service-oriented management interface is now available.
  4. A communication interface that passes pointers to data and shares the data, rather than copying it, within a system image, across virtual system images and across a cluster of systems.
  5. Adoption of Unicode, ASCII and EBCDIC as native character sets, which simplifies data consolidation on z.
  6. Technology dividends and a wide variety of pricing options make the TCO and TCA of System z competitive with commodity servers.
  7. Single, cross-platform tooling from Rational that includes mainframes, UNIX/Linux, Windows and mobile systems (via the Worklight acquisition).
  8. Portability of applications within z/OS and Linux in a variety of open languages.
  9. A far broader range of COTS software is available for Linux on z and z/OS.

Some additional items that are better than commodity servers

  1. Shared data access, with integrity, scale and resilience, across systems.
  2. Shared analytics and transaction processing against a single database (that can be shared across a cluster), while maintaining Service Level Agreements.
  3. A hacker-resistant (not hacker-proof) architecture that inhibits data and buffer overlays. The System Integrity guarantee has been in place since 1973.
  4. Capacity Backup licensing and acquisition dramatically reduce Disaster Recovery costs and procedures.
  5. System z hardware avoids 80% of the errors that might occur in a PC server environment, with no downtime or failover required. System z software and hardware work together to drive system availability to 99.999%.
  6. Workload management of thousands of applications and hundreds of thousands of client connections enables dramatic cost savings over alternative servers.
  7. Incremental addition of software workloads without the need to install new hardware, thanks to on-board “spares” available to be turned on, on demand.

Summary

If you haven’t considered a mainframe in the last 20 years, it’s quite understandable. But if you don’t start reconsidering it today, you are making a fundamental mistake.

The modern mainframe is greatly simplified from where I began 40 years ago. I’m happy to say I  may have had a little bit to do with that 😉  Happy programming.

Modern Data Usage Patterns – a case for Data consolidation

Last week, I wrote about organizational fiefdoms and how they can inhibit efficiency in deployment models.

This week, I’ll describe a couple of data patterns that are common across several business models. Most important, they can take advantage of a hybrid deployment model and some unique System z characteristics that can result in a dramatic reduction in operational and security overhead and simplify compliance with a wide variety of government, industry and business regulations. It’s all based on a shared data model and collaboration across end-to-end technologies.

To me, there are three critical business oriented data operations: update a record, read a record and analyze a collection of records. There are also management operations: backup/archive, migration/recall and disaster recovery. I’m going to focus on the business oriented aspects for this post.

Let’s consider three different scenarios. A national intelligence operation that is processing satellite and other electronic information. A health care environment that processes medical records and data from medical devices. And a Criminal database containing wants, warrants and criminal records.

Each of these has a data ingest process that comes from individuals or individual devices. Satellite data is beamed to earth, typically to an x86 based server and then transmitted and loaded into a “System of Record” which might be considered the master database.

Medical records can be updated by a medical professional and patient via an end user device or portal and input can be received from medical devices e.g. EKG, MRI, XRay, etc. All of this information is then loaded into a master database.

BOLOs (Be On the Look Out), criminal records, wants and warrants are input by various police agencies and transmitted to a master record database that can be accessed by other police departments to see if someone they’ve stopped, or who is in custody, may be wanted in other jurisdictions.

Each of these scenarios has something special about it – a need to know. A doctor or nurse can’t “troll” a medical database looking for any data; that’s against HIPAA policy. They should only be looking at records associated with patients they are working with.

Intelligence analysts may only be able to see certain satellite or ELINT (electronic intelligence feeds) based on their security clearance.

Police in one jurisdiction cannot query or update records in other jurisdictions unless they are pre-approved for a particular case.

This need to know can also be called compartmentalization, or labeling of data. DB2 on z/OS has technology known as Multi-Level Security that allows data to be hidden from users and applications that don’t have a need to know. The best part of this technology is that no application changes are needed. The need-to-know criteria are established between security administrators and database administrators. As a result, when multiple users attempt to query an entire database, if they are in different compartments they’ll get completely different result sets, without knowing the full breadth of the database.
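A toy illustration of that effect is below: the same full-table query, issued on behalf of users in different compartments, returns different result sets. Real DB2 multi-level security is enforced by the database and RACF security labels; this little filter only models the behaviour, not the implementation.

```python
# Same query, different compartments, different answers. Illustration only.
ROWS = [
    {"patient": "A. Jones", "compartment": "CARDIOLOGY", "data": "EKG 2014-06-01"},
    {"patient": "B. Smith", "compartment": "ONCOLOGY",   "data": "CT scan 2014-05-20"},
    {"patient": "C. Brown", "compartment": "CARDIOLOGY", "data": "EKG 2014-06-03"},
]

def query_all(user_label):
    # The "application" always issues the same full-table query; the filtering
    # happens beneath it, invisibly to the application.
    return [row for row in ROWS if row["compartment"] == user_label]

print(query_all("CARDIOLOGY"))   # two rows
print(query_all("ONCOLOGY"))     # one row, with no hint that the others exist
```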

So let’s look at a medical system that has multiple hospitals scattered across a broad geography. Each hospital has specialty areas: orthopedics, oncology, pediatrics, etc. There is a Primary Care Physician (PCP) for each individual patient. There is the patient. There are a bunch of different medical test devices: MRI, CT scan, EKG, etc. At some point, a Patient Care Record is created. A PCP is identified for that patient. They may order tests on behalf of the patient. Test data is captured, stored and linked to the patient’s record. A cardiologist may see and annotate information associated with the EKG. Any other cardiologist at the hospital may also see that EKG and annotate it. An orthopedic surgeon may look at it, but not annotate it. A doctor at another hospital may not even know that the patient exists unless they are invited to look at the record by a peer or via the patient requesting a second opinion.

The following diagram shows a manufacturing business that gets a variety of “parts” from different suppliers. They allowed each of their suppliers to check the on-hand inventory to allow for continuous manufacturing and improve supply chain operations. The unintended consequence of this implementation was that each supplier could see every other supplier’s inventory and price per unit. As a result, devious suppliers could undercut the competition or, worse, collude with their competitors to raise prices.

[Diagram: Commercial MLS]

With the labeled security capabilities of DB2 turned on, suppliers can see only their own records. Employees of the manufacturing company can see all the records in the database. No applications were changed. The manufacturing company had to collect some additional security information for each supplier in order for this to work properly. You’ll notice the inclusion of internet address (IP @) as a security context. The manufacturer can “force” supplier updates to come from the supplier’s site. It will not allow a supplier’s employees to log on from home, for example. This could help inhibit a rogue employee of the supplier from compromising the manufacturer’s database.

There are other examples of production systems leveraging MLS capabilities. Lockheed Martin has been operating a secure environment for multiple agencies for many years. This has been for the National Geospatial Intelligence Agency (NGA) and its mission partners.

But here’s another important distinction from other models. The data operations are somewhat like the Eagles’ song Hotel California:

“We are programmed to receive, You can check out anytime you like… but you can never leave”.

That means if you are viewing the data, you are viewing the “System of Record”. Where you are viewing it from is called the “System of Engagement”. By definition, the System of Engagement can overlay and complement the System of Record by transforming it. It can be a stateless, read-only entity. The X-ray image, stored in a database, is just a collection of bytes. The end user may not have the proper viewer installed on their desktop. The System of Engagement will transform the image into something consumable and recognizable by the end user. The hospital doesn’t make a copy of the data and transmit it to other service providers. If it did, it would have to ensure that the new owners of the copy adhered to the same stringent privacy laws for which the hospital is accountable. That becomes a logistics nightmare. Instead, the “user”, whether a patient or a medical professional, accesses a program (the System of Engagement), which could be a virtual desktop or web service, which in turn accesses the database and remotely presents the requested data to the end user. What the end user sees is essentially a remote image; the end-user device is considered stateless and no local copy is saved. Because this is solely a remote presentation of the information, that device can be exempt from privacy audits, since it is understood that no local copy is made. This doesn’t address an end user taking a picture or writing down information associated with the record being processed. There are other products that can be deployed to capture these breaches of privacy policy. I can cover that in a later post.
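As a sketch of what a stateless System of Engagement looks like, consider the outline below. The helper functions are hypothetical stand-ins, not a real hospital or DB2 API; the point is simply that the record is fetched, transformed in memory and returned, with nothing written locally.

```python
# Stateless presentation sketch: fetch, transform, return; keep no local copy.
def fetch_from_system_of_record(record_id, user):
    # Placeholder: in practice an authenticated call to the database, which
    # applies the need-to-know rules before returning anything.
    return b"raw X-ray bytes for " + record_id.encode()

def render_for_device(raw, device):
    # Placeholder transformation into something the end-user device can display.
    return b"[rendered for " + device.encode() + b"] " + raw

def present_record(record_id, user, device):
    raw = fetch_from_system_of_record(record_id, user)
    response = render_for_device(raw, device)
    # Note what is *not* here: no write to local disk, no cached copy.
    return response

print(present_record("XR-2014-0042", "dr.jones", "tablet"))
```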

Compartments can be created that contain only a subset of the stored records, similar to a view. So analytic processing might be done across all database records, looking for patterns, fraud, opportunities, etc, but without including protected personally identifiable information. For example, disease outbreaks by region, trends, risks, etc, but again, this analysis may only be done by someone with a need to know.

Look here for a video associated with Intelligence Analysts dealing with Satellite transmissions and leveraging this workflow.

[Video: satellite intelligence analysis demo]

It may be hard for some of you, but imagine the different satellites are actually medical devices. Imagine the different compartments are associated with various hierarchies of users at a corporate, branch/region, department and individual level. Imagine the spatial data results of this video, demonstrated using Google Earth, are instead rendered views of X-rays, CT scans, etc. Hopefully, this is a compelling view of the realm of possibilities. One thing that might appear contrary to what I described earlier is the fact that, in this video, various users know that a satellite exists when they don’t have a need to know. Some satellites may be Top Secret, so unauthorized users have no need to know that a particular satellite even exists. To correct that situation vs. what the video depicts, if the System of Engagement had required the user to sign on first, they would not have seen the entire list of satellites, as some may have been excluded by an additional database query of accessible satellites for that user. When comparing to medical devices, however, it is no secret that there are multiple imaging devices, but the results may not be visible to a user, based on the need to know. Many, many options exist. These are just examples to get the discussion started about new possibilities.

There’s another important aspect of this. Any business with DB2 on z/OS already has the System of Record capability. A change in operations management is required, but no additional software license charges are required to implement this. Other platforms typically have to separate the data (i.e., copy it) to achieve the compartmentalization that is available in DB2 for z/OS. Analytics can be run against this System of Record locally, by products such as the IBM DB2 Analytics Accelerator (IDAA) or Veristorm’s zDoop, which is a Hadoop solution running on the mainframe. The mainframe is capable of meeting the service level agreements of both the updates and the queries of the database at very large scale.

The Systems of Engagement may be Linux or Windows systems running on virtual desktops or PC servers, as well as within Linux for System z or z/OS application and transaction processing environments. The end user access could be from kiosks (thin client terminals), smart devices, PCs or business-specific devices, e.g. point of sale, ATM, police cruiser access points, etc. These systems could be hosted in a public or private cloud. They could be part of an existing system infrastructure. Authentication and access control should be centrally managed across the entire operational infrastructure.

The net of all this is a couple of examples of hybrid computing and collaboration across systems that can dramatically reduce the complexity and improve the efficiency of end-to-end business processes. If you are still compartmentalizing operations by server silos, you may have the unintended consequence of missing some dramatic cost savings – or, better stated, cost avoidance. Compartmentalization on a need-to-know basis may initially lead a business toward separation of duties and separation/copying of data. But with the capabilities described, it’s actually a form of consolidation and collaboration that enables a greater degree of sharing of the System of Record. You might not have to spend more on systems deployment to solve some very complex problems. Happy programming!

Server Fiefdoms Inhibit Optimized Business Solutions

In the 1960s, the IBM mainframe led a transition in business processing from paper-centric transaction processing to IT-centric processing. The combination of the Personal Computer, the introduction of modems and, later, the Internet changed the IT community from being internally facing to customer-centric computing. The introduction of the PC server created “commodity-centric” computing and, typically, folks running in that environment were against IT-centric operations, as the PC server could bring department-centric computing to individual business units.

The unintended consequence of all this was that fiefdoms were created to manage server silos. Over a decade of server deployments, individual IT organizations may have been created with business-related names (e.g. a point-of-sale org, an analytics org, a web hosting org, a claims administration org). The reality is that each of these organizations might be dependent on a specific server infrastructure. As a result, the introduction of any other server infrastructure, for example moving from mainframe to PC server or centralizing onto a mainframe from UNIX or PC servers, would be viewed as bad. The reality is that no single server is capable of meeting all the IT needs of a business unless it is very, very small. And even then, multiple applications, which typically means multiple server instances or operating system instances, will be required.

I am mainframe centric

I have no hesitation in saying I am mainframe centric. That statement, alone and without context, will scare many people away from me as an IT consultant. One of the things I learned very early in my career is that security of infrastructure is about people, process and technology. While the mainframe may be considered the most secure platform technologically, my forensic experience at a variety of customers proves that poorly trained people and bad processes were the weakest links in security. More important, much of that “poor security” happened at the end user device – formerly a PC, but now including smart devices such as phones and tablets. If those devices aren’t secured with passwords and the enterprise data residing on them isn’t encrypted, then they become the weakest link. And if the user of the device saves their userid and password in their browser so they can reconnect quickly, well, so can the bad guy who steals the device. Now the bad guy has unfettered access to those “more secure” systems that execute transactions or provide data access on behalf of the end user who lost the device. I’ve spent over ten years looking at how back end systems can make the front end devices more secure. So I guess I am security centric, as well. I’m also web, mobile and application development centric.

Most Application Developers are PC Centric

If you started out as a mainframe programmer, you probably signed on to the mainframe with a 3270 emulator and used panel driven or command line driven tools to edit files and submit jobs to compile and execute the programs you created. The IT capacity that was used to do this type of development drove up the cost of operating the production mainframes.

The advent of the Personal Computer changed all that. Windows and Linux desktops provide graphical user interfaces. Fourth-generation tools will help you graphically design the logic of an application and generate the source code in a variety of programming languages that best suit the operational environment in which the program might run. With open system interfaces and common programming languages, one development tool might create code to run on dozens of operating systems and hardware architectures. These are the types of tools used to build most middleware that is sold to run across “your favorite” operating system.

Well, that hybrid development environment didn’t end up being that simple. Tiers of deployment platforms were created. If it was developed on a PC, then the first choice for a deployment platform was typically a PC server. Other platforms, like UNIX and the mainframe, were considered primarily as production platforms. They didn’t distinguish development from production very well or price differently for developers. As a result, it became unaffordable to develop for a mainframe: the development group, or a new middleware vendor, couldn’t afford a mainframe or UNIX server to test their code, so again, by default, most new applications were targeted to PC servers.

Most Web Servers are PC Centric
Most Analytic Servers are PC Centric

Need I go on? A mantra for the client/server computing era was Move the Data to the Application. This led to copies of data everywhere, but also led to theft, loss, data breaches and server sprawl. Virtualization of server operating systems has helped to reduce server sprawl, but security remains complex. Business resilience, environmental needs (floor space, energy, cooling) and labor costs remain highly complex as well.

I said earlier that I am mainframe centric. But I can also say, unequivocally, that the mainframe can NEVER solve all of your business problems by itself. Why is that? Because it is blind and deaf. The 3270 terminal and the punch card are long gone as input/output devices. The modern mainframe requires a graphically enabled front end device, such as a point-of-sale device, ATM, PC or smart device. It still requires communications, but now it leverages TCP/IP instead of SNA. So any business leveraging a mainframe is now a multi-system business. Even the zEnterprise, with its introduction of the zEnterprise BladeCenter Extension, can’t solve all of a business’s problems, because it doesn’t handle virtual desktop infrastructure or manage the deployment of end user devices.

So let’s go back to solving business problems. We don’t need to discuss server types, but we can make some statements that should prove true, irrespective of server deployment model.

  1. Share data – the fewer copies of data, the easier to manage security and resilience. Sharing data for read/write access (transaction processing) along with read only access (Query and Analytics) will enable a combination of workflows that include real time analytics (fraud detection, co-selling) in a basic transaction.
  2. Move applications to data – copying applications is far easier and less time-consuming, as well as more secure and resilient, than moving data. Virtualization technologies enable a simple way to bring applications and data together in the same infrastructure, improve latency and simplify business resilience.
  3. Look for tortured data flows – there never will be a single copy of data, as there should be, at minimum, backup copies and disaster recovery copies. But if you can reduce the number of data moves, leveraging direct access to data, instead of file transfers or unload/reload workflows, a business can dramatically reduce operational complexity.
  4. The fewer parts (servers and data) the better – there will be less environmental costs, software license charges and reduction in complexity for security, capacity management and business resilience management.
  5. Use stateless devices/applications for end user connections – the end user wants direct access to data and transactions, but the less data stored on the end users’ device, the better. Cookies should be the limit of context stored on an end user device. That way, if the device is lost or stolen, no corporate data is lost. It will be stored centrally. This can be true of thin client computers as well as web access to a transaction processing environment.
  6. Never give developers direct access to or copies of production data – development systems are generally not production systems. There is no logging, limited or no security and rarely an audit of critical data. This is the simplest target for a hacker or insider to attack in order to gain access to personally identifiable information or the corporate jewels. Developers should get cleansed data, through anonymization tools or similar techniques, to ensure that the production environment remains protected (a minimal sketch of this kind of cleansing follows this list).
  7. Measure availability end to end. I’ve seen server shops (of all types) claim four or five nines of availability for their infrastructure. That’s a nice internal measurement. But if the end user, a consumer, is trying to access their data on a back end server and the security server, the web proxy server, a router or some other networking infrastructure is down, then the business is down to them. Availability should be a business target, not a server-only target.
  8. Identify users from end to end. When looking at transaction logs, a business should be able to see the identity of the individual that initiated a request. If the logs only identify the id of a downstream server, then additional logs and correlation from more expensive products are required to identify the end user. The more misdirection, the greater the risk of security exposure and data theft. Ensure that the originating user is easily identifiable at each step of the business workflow.
  9. Industry standard benchmarks are irrelevant to business workflows. They may help distinguish one component alternative from another. But since we’ve already determined that a hybrid environment will be required for production purposes, there are very few, if any, benchmarks that provide a true end to end workflow that mimics multiple business operations. Much like lawyers, benchmarks can offer guidance and may explain risks, but they are not the decision makers. The business application owner must weigh the total cost of operations and the incremental costs of maintenance for the entire business operations vs. just the component parts identified by a benchmark.
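As promised in point 6, here is a minimal sketch of the kind of cleansing meant there: developers get data with the same shape as production, but the identifying fields are masked or consistently pseudonymized. The field names and the salt are illustrative only.

```python
# Minimal sketch of field-level cleansing for a development refresh.
import hashlib

def pseudonymize(value, salt="dev-refresh-2014"):
    # Deterministic pseudonym so joins still line up across cleansed tables.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def cleanse(record):
    cleansed = dict(record)
    cleansed["ssn"] = "***-**-" + record["ssn"][-4:]    # mask all but the last four digits
    cleansed["name"] = pseudonymize(record["name"])      # replace the name with a pseudonym
    return cleansed

production_row = {"name": "Jane Q. Public", "ssn": "123-45-6789", "balance": 1042.17}
print(cleanse(production_row))
```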

I first published “Porell’s Pointers” using Lotus Freelance for OS/2 circa 1990. The pointers covered then were server agnostic and remain true today. Sure, they were mainframe centric then and could be considered so today. But they make sense even if a UNIX system is the largest in your organization. They make sense if a PC server is the largest in your organization.

[Image: “Porell’s Pointers” slide, published circa 1990 via Freelance Graphics for OS/2]

If you can follow these suggestions, but also think about the scale of operations needed for your business, consider the mainframe as a core component of your end to end workflow. Unlike most other servers, it excels at large scale, managing service level agreements (SLAs) for read/write and read only data, providing cross platform security infrastructure and managing availability and business resilience at an application level. The combination of z/OS, z/VM with Linux for System z, along with PC servers and desktop or Smart Devices will be a winning combination to satisfy the majority of your business problems. But once you look at a hybrid solution, then security, availability, disaster recovery, application development, analytics, capacity management and backup and archive should be cross platform or hybrid solutions as well. A wealth of middleware exists that can operate across platforms. This can be the beginning of a new operations model built on business problems rather than server specific domains. Damn the Server Fiefdoms! Full speed ahead with an organization that collaborates for shared business success.  Happy programming!

Apple and IBM Mobile solution deal could be a boon to z/OS as well

IBM has announced that its strategy is built around big data analytics, cloud and mobile computing. Apple has made an art of mobile computing through its iOS devices and App Store. It’s clear that IBM wants to capitalize on that success and Apple wants to further entrench itself in the enterprise.

Going back in time, it’s not the first collaboration between IBM and Apple. The most famous relationship was Apple’s use of the PowerPC chip in earlier Macintosh computers.

Lesser known was a consortium, led by Oracle and including IBM, Apple, Sun, Alcatel and Netscape, to promote the Network Computer Reference Profile, which would now be considered a thin client computer. This was circa 1996. This consortium was created to help attack the “lock” that Windows and Intel had on corporate desktops. The Network Computer was to be a less expensive alternative to PCs and be centrally managed.
I had the pleasure of demonstrating a thin client connecting to an IBM mainframe at the launch of the reference profile at the Moscone Center in San Francisco. The original IBM thin client was developed with the AS/400 organization (now System i) in Rochester, MN. They really understood the value and importance of these devices, as the 5250 had just been declared a “dead technology” by IBM and the AS/400 needed a new connection device. I was enamored (and still am) of the ability of a thin client to provide the best type of access to a server, with graphics and multimedia support, while leaving the device stateless. In other words, no data is downloaded to the device, so there is less risk of loss or misuse of corporate data. I worked closely with Apple to ensure they had a 3270 emulator available for businesses to connect via Macintosh desktops.

Ultimately, this consortium failed. The chips were too slow. It was before Linux became ubiquitous for embedded devices. There were too many embedded, home-grown operating systems that weren’t extensible or easy to update with the latest web browsers and terminal emulators. They were probably ahead of the curve. Now, thin client computers are affordable and an attractive alternative to PCs. I’ll be writing about mainframe possibilities in a future post.

Back to IBM and Apple. It may seem odd, but there’s a link to IBM’s mainframe operating system, z/OS, as well. z/OS began in the 1970s as the MVS operating system. IBM sold a base operating system and then a collection of piece parts on top of it: storage management, communications, transaction processing, databases, compilers, job schedulers, etc. These products were individually purchased, managed and operated. There was quite a collection of support personnel required back then. In the 1990s, along came OS/390, and now its successor, z/OS. The same products are now integrated into a single system. This simplified installation, customization, ordering and management. It reduced problem resolution time and improved uptime. IBM has delivered on a predictable schedule, demonstrating business value in each release. In many cases, the cost of a release remained the same or less than the prior release, to ease adoption and provide the customer with a technology dividend.

Let’s compare that to the evolution of Apple’s MacOS. There’s an awful lot of function in a single operating system image there as well. At the kernel level, it can run the bash shell and look a lot like a Linux system. There is a tremendous amount of middleware shipped at no additional charge with the system. Recently, I was trying to run FTP and some other basic, UNIX-like commands on Microsoft Windows. I found that where the code was available at all, it was not comparable, and it would require buying additional software to get a level of support comparable to the Mac. At the end user interface, there is a tremendous amount of functionality. The installation of new applications is simple. Systems management, at a high level, is fairly simple, but can be very complicated when it needs to be for advanced users. Apple’s iOS mobile system inherits many of these characteristics as well, but on a different chip: the ARM chip rather than the 64-bit x86 Intel chip. The ARM chip is a topic for another day.

Both z/OS and MacOS get labeled as expensive, out of the box, when compared to other systems. Each suffers from a lack of understanding of the true cost of ownership for their respective hardware and software systems. Both have excellent security reputations. Both run at higher levels of availability than the systems with which they are being compared. Looking inside those statements, a business must recognize that there is certainly an element of technology behind those values, but also people and processes. Neither system is hacker-proof, nor truly fault-tolerant. However, through their wise use of technology, including close collaboration between hardware and software development and automation of processes, each of these systems can handle difficulties or risks that other “comparable” systems cannot handle. It’s always been hard to put a price on “downtime” or on avoiding planned and unplanned outages. While it’s true that some folks complain that the skill set necessary to operate a mainframe is “different” from other platforms, there are generally fewer people required to manage the aggregate amount of work running on a mainframe than the comparable amount of work running on other systems. The reality is people can learn to work with anything, as long as they are allowed.

One of the biggest differences between the IBM mainframe and Apple systems is that IBM no longer supplies the end user device necessary to operate and work with the mainframe. There is no “face” to the mainframe. Originally, the mainframe was accessed by a 3270 “green screen” command line terminal or, worse, punch cards. Now 3270 emulators are available on just about every “smart device” there is. The 3270 interface remains a boring command line interface out of the box. Graphics and multimedia support, as well as touch pads and speech recognition, are now state of the art for end user computing. z/OS needs an upgrade in that arena. New technologies, such as the z/OS Management Facility, have built-in web access to perform a variety of functions. IBM and independent software vendors have taken advantage of this new capability to further improve the management experience of z/OS and reduce the skills it requires.

There can be even greater synergy, in the areas of transaction pricing and application development.

Back in May, IBM announced new mobile workload pricing for z/OS. Given the volume of new mobile applications and the fact that many of them might be just a query (e.g. account balance, order tracking), and not revenue-generating for a business, IBM is offering discounted pricing and lower measured-usage pricing for these types of transactions. This could be a significant benefit for existing customers and for future transactions developed to exploit mobile technologies. The measurement tool began shipping on June 30th.

The IBM Rational organization, responsible for application development tools, acquired Worklight, a product tailored to enable rapid deployment across a variety of mobile operating systems from a single code base. Now integrated with the suite of Rational products, it lets businesses build applications that support back end transaction processing and database serving on the mainframe while, at the same time, providing new end user access and experiences from a wide range of smart computing devices, including phones, tablets and desktops.

The only thing to stop this integration of the best of both worlds might be a business itself. By isolating development, deployment and operational teams in platform silos (e.g. mainframe vs. UNIX vs. PCs vs. BYOD), a company could actually have the unintended consequence of blocking integration and overall improvement of its business’s end user experience, both internally and for its customers.

Here’s hoping that the Apple-IBM mobile collaboration is significant enough to get businesses to open up their apertures so that true integration of systems can occur, from the end device all the way through to their business-critical systems. The reality is that the end-to-end workflow is the mission-critical business, not one server or device vs. another. When fewer systems are involved in the overall integration of business processes, a business should see benefits in overall security and availability, and easier compliance with business initiatives. Happy programming.