
Miraculous cure for IT system bottlenecks!

What's a bottleneck? From Dictionary.com, it's "a narrow entrance, spot where traffic becomes congested". In IT terms, it's something that slows operations or inhibits a Service Level Agreement (SLA) from being met. The worst-case scenario is the many IT shops that are absolutely confident they don't have bottlenecks because they are meeting or exceeding their SLAs. They couldn't be more wrong!

There are a wide variety of traditional methods for identifying bottlenecks. On an IBM mainframe, a business might use IBM's Omegamon, BMC's Mainview or CA's SYSVIEW. On a desktop, it could be as simple as Microsoft Task Manager or Apple's Activity Monitor. On networks, there are many tools. At home, you might wonder if your ISP or internal network is running well, so you'd try Ookla's speedtest.net. In the cloud, there are monitors for Amazon Web Services, IBM Bluemix, Microsoft Azure and Google Cloud.

Yet, none of these will find the modern IT system bottleneck. When you have an IT system bottleneck, there's always someone to blame. But who is it? Is it the System Programmer's fault? Is it the Application Developer's fault? Is it the asphalt? Oops, wrong punchline. No, it's the System Architecture's fault. It's a 1990's mentality that looks at IT in operational silos and manages each system independently. But hang in there for another moment. There is a cure.

The 1990's methodology bases IT operations on server silos. The mainframe is independently managed from the Unix servers, which are independent of x86 servers, which are separate from cloud and mobile and desktop and network. Security is done for each domain. Business resilience is done for each domain. Budgets are created and departments compete for more spend in their particular area. Some areas might claim they have a bottleneck and warrant more spending to resolve it. Next budget cycle, they'll still have issues and want more.

Another type of siloed operation is looking at separate systems for Record, Insight and Engagement. Systems of Record are the master databases and the transactional systems that update those databases (e.g. credit/debit, stock sales, claims, inventory, payments). Systems of Insight are the analytic systems (e.g. fraud detection, sales opportunity, continuous flow delivery, tracking). Systems of Engagement are the human computer or Internet of Things (IoT) interfaces (e.g. mobile, IoT device, tablet, browser). Many businesses create silos to manage each of these areas independently, because anyone who tried to combine them in the 1990's hit a bottleneck or drove IT costs too high. Funny how the systems of the 1990's actually created the hidden bottleneck of today! But it can be fixed.

Where can you buy the "fix" for this? Is it via a software product? No. Hardware product? No. Cloud? No. Consulting services? Maybe. But the reality is every business can solve this pretty easily within their own environment. I guarantee that your business can far exceed current SLAs and establish new business goals. In the process, your business can save tremendously in IT expense, while improving security and business resilience. The solution is pretty simple.

Stop copying data between systems! In the new API economy, all of the systems have been modified to allow direct access to applications and data from other systems. The change required is philosophical, organizational, or both for most enterprises. It's all about managing the IT systems together instead of as separate silos. That starts at an architectural level, with hybrid development systems, and extends to hybrid operational systems that address end to end security, business resilience and performance.

If you've moved data to another server to keep the Systems of Record separate from the Systems of Insight, stop the move. Keep the data together. Systems like IBM's mainframe are now capable of hosting both databases and analytics in a single system, improving analytic performance many times over separate Systems of Insight without impacting the SLAs of the transactional systems. The applications that access the Systems of Insight can be easily modified to point to the Systems of Record instead, via updated device drivers, without changing any code logic. This turns things like batch analytics, which might be used for fraud detection, into real-time analytics that can be used for fraud prevention. In the process, businesses save on the storage, network bandwidth, system utilization, cost and time associated with copying the data. Products such as Rocket's Data Virtualization Studio can provide the device drivers and mappings necessary for applications to share data from a variety of Systems of Record, across platforms. And new apps can be developed to join the data from different sources, including partner organizations or "the cloud", to solve business problems in new and creative ways. These applications wouldn't be possible without sharing data. Apache Spark technology is one means for collaboration across data sources.
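To make the "repoint instead of copy" idea concrete, here is a minimal sketch of an analytics application whose JDBC driver and connection URL live in a properties file. Switching the app from a copied warehouse to the System of Record (through a native driver or a data virtualization product) then becomes a configuration change, not a code change. The driver class, URL and file name below are placeholders, not a specific product's values.

```java
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class FraudScan {
    public static void main(String[] args) throws Exception {
        // Connection details live in datasource.properties (hypothetical), e.g.:
        //   jdbc.url=jdbc:example://mainframe-host:1234/CARDDB   (placeholder URL)
        //   jdbc.user=...  jdbc.password=...
        // Pointing at the System of Record instead of a copy is an edit to this file;
        // the driver jar simply needs to be on the classpath.
        Properties cfg = new Properties();
        try (FileInputStream in = new FileInputStream("datasource.properties")) {
            cfg.load(in);
        }

        // The query logic is unchanged no matter which data source the URL points at.
        try (Connection conn = DriverManager.getConnection(
                 cfg.getProperty("jdbc.url"),
                 cfg.getProperty("jdbc.user"),
                 cfg.getProperty("jdbc.password"));
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT account_id, SUM(amount) AS total " +
                 "FROM transactions GROUP BY account_id HAVING SUM(amount) > 10000")) {
            while (rs.next()) {
                System.out.println(rs.getString("account_id") + " " + rs.getBigDecimal("total"));
            }
        }
    }
}
```

The same pattern applies whether the driver comes from a data virtualization layer or a native database driver; the point is that only configuration changes, not application logic.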

There is no reason to copy data to move it closer to, or tailor it for, a specific System of Engagement. The API economy allows applications to directly access the data or transactions on other systems. New pricing options are available that allow for increased transaction rates, driven by direct mobile access, at a lower cost than traditional access methods. z/OS Connect is one of the tools for making the API connection between mobile and transactional systems.
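As a hedged illustration, here is what the mobile or web side of that API connection might look like: a plain REST call to an endpoint that a gateway such as z/OS Connect could expose in front of a transactional system. The host name, path, token and JSON shape are all hypothetical; the real API definition comes from whatever you configure in your gateway.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BalanceClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint exposed by an API gateway in front of a back-end transaction.
        String url = "https://api.example-bank.com/accounts/12345/balance";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/json")
                .header("Authorization", "Bearer <token>") // token obtained elsewhere
                .GET()
                .build();

        // The caller neither knows nor cares that the System of Record is a mainframe;
        // it sees a plain REST/JSON interface.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```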

Regardless of how you might transform your business, the unintended consequence of standing still on current siloed IT operations is bottlenecks and slowdowns in business systems that depend heavily on copying data and on batch windows to facilitate that copying. Direct access to data and devices is the future. The future is now. Begin the migration to hybrid operations management. If you need help in deciding how to look at your architecture differently, don't hesitate to ask me.

 

 


Closing the gap on technology evolution

I recently saw a blog post by one of the Federal CIOs. I can't argue with their observations, though I think we may disagree on how to tackle the problem. That CIO is going to lay out their direction in future posts. I'm going to take a shot at my own direction in this post.

The following graph demonstrates that US Government IT is falling behind Fortune 500 firms and way behind internet startups.
[Graph from the Federal CIO study: IT curve acceptance]

I remember having this debate with an IBM General Manager years ago when he was considering outsourcing some operating system components, thinking that all programmers are created equal. There is a huge difference between maintaining a legacy of millions of lines of code and starting from scratch with something new. Just as important, starting over while preserving all the rules and regulations embedded in the legacy is a very difficult proposition. It takes pre-existing knowledge to succeed.

This CIO faces a problem that is similar to many other businesses. It's true for mainframes, as it will be for Microsoft Windows and Linux systems in the future. There are millions of lines of "legacy code" in languages that are less popular today than they once were. The inference is to move away from the legacy code toward a modern language where more skills are available. As a factoid, there are more ARM chips in the market today than Intel chips. There are more applications being developed for iOS and Android than for Microsoft Windows, and that's way more than are being developed for mainframes. So that might lead someone to believe that's the programming model of this generation. And as I've said in an earlier post, if your IT career began in the 1990's and you hated mainframes, you were right…at that time…

But like everything, time changes things. IBM and vendor partners have dramatically changed the mainframe into a more modern computing environment. IBM spends over $1B in R&D for each generation of the mainframe, which now comes out about every two years. I'm going to park that for a moment and go to another topic that is more relevant to the skills discussion.

Patterns

Programming is about patterns. Patterns occur at a process level, in languages and in behaviors. There are three broader patterns at work here: Systems of Record, Engagement and Insight. I've written about that before, but Record deals with transaction processing, Engagement deals with the end user interface and Insight is about analytics. Most programming being done today is around Systems of Engagement – taking advantage of enhancements in smartphones, wearable tech (e.g. watches and fitness devices) and other devices that make up the Internet of Things. GPS, accelerometers, touch, voice and biometrics are just a few of the advances that improve the human computer interface. The mainframe has avoided this programming area completely as a native interface. That makes complete sense. Ignored by many, though, is the fact that the mainframe has fully embraced those capabilities through interoperability and standard formats and protocols. They enable hybrid programming to reach out to those interfaces to simplify the deployment of Systems of Record. In addition, they've integrated with Systems of Insight to enable real-time analytics to be applied to traditional Systems of Record to reduce risk.

This link will take you to a tremendous video about the z13 server and its ability to satisfy these new capabilities. Warning – it’s 30+ minutes long.

Where will the skills come from?

Another fear raised is that schools no longer teach "mainframe". Perish the thought. While fewer schools teach mainframe skills than commodity system programming, there are a wealth of schools across the world that are part of IBM's System z Academic Initiative. Checking their website, there are three in Maryland, close to the Federal government and very close to the agency head writing the blog. But you know, "you can't trust the marketing" materials put out by a vendor. So I went to the Loyola College of Maryland, University of Maryland Eastern Shore (UMES) and Prince George County Community College web sites to see what they said about the IBM Academic Initiative. Honestly, the info I found was from 2011-13, other than Prince George, which was up to date. So I reached out to the schools. UMES responded quickly:
“First and foremost, I would like to inform you that we are actively involved in the IBM Academic Initiative. Dr. Robert Johnson is the Chair of the Department of Mathematics and Computer Science is the lead person in the initiative. Further, they are currently in the process in moving into our new $100 million Engineering and Aviation Science Building which will significantly enhance our capabilities to support the initiative.”
Here’s a brochure for their program.

Most importantly, success is not a two-way street between IBM and the schools. It's a four-way street that also includes businesses/agencies and the students. The best schools will work with businesses to provide internships to students PRIOR to graduation. There is generally a very high success rate (close to 50%) of those students choosing full-time employment at the business where they did their internship. I strongly encourage any business or agency concerned about future skills to reach out to these schools and work directly with them. Experience shows that you'll be very pleased with the results. UMES gave me their contact numbers; reach out to me if you'd like a direct introduction.

Adopt New Technologies and dump the old?

The collective wisdom of the Federal CIOs seems to point to new technologies as the "future" of programming. The referenced blog points to Uber, Siri and Facebook as examples of such applications and suggests they may be irrelevant in five years (see Myspace as an example). New technologies grow up in a vacuum; there is no maintenance legacy. That doesn't mean the legacy can't work with them, though. A prior blog entry looks at 22 emerging technologies, their relationship to the mainframe and how hybrid computing can solve new business problems.

Let's consider one of the new, cool technologies referenced: Uber. I happen to have a chauffeur's license (a story for another time) and am very familiar and active with livery legislation. The Uber mobile application is actually very simple and easy to recreate. What makes them successful is their business model and practices. They hire drivers as contractors, so there are no tax consequences for Uber. They avoid the bureaucracy of livery laws.

There is a state law that enables the New York City Taxi and Limousine Commission (T&LC) to regulate who and what can be operated within the boroughs. This is for the "safety and comfort of passengers". However, it's big money. Medallions have cost up to $750,000 just to put a cab on the street, and the T&LC limits the number of medallions. Cars from outside the T&LC are not allowed to make more than one stop in the city. They cannot pick up a passenger at an airport if they dropped that passenger off more than 24 hours earlier. The T&LC has 250+ officers in unmarked vehicles that follow and intimidate non-T&LC livery vehicles in the city. I witnessed a stretch limo being impounded by the T&LC when an upstate livery firm dropped off passengers returning from a NYC funeral at a NYC restaurant before traveling north. The second stop was illegal. In any event, other states (CT and NJ) got upset with this bureaucracy. They lobbied, and a Federal law resulted that allows reciprocal rights for other states to operate without joining the T&LC. But upstate livery can't participate. The NY Assembly and Senate have had to modify laws to create T&LCs in neighboring jurisdictions to allow reciprocal rights in NYC locations. Rockland, Nassau and Westchester counties have T&LCs now. This is the third year that Dutchess and Ulster have had legislation for reciprocal rights up for a vote. The NY Assembly has passed its legislation, but the NY Senate hasn't. Last year, they decided to wait on Dutchess and Ulster until they figured out how to allow Uber and Lyft to operate in NYC exempt from the T&LC bureaucracy. That legislation has now been created and will be voted on soon.

The T&LC makes revenue by selling taxi medallions and collecting tax on fares. Uber and Lyft disrupt those economics. The livery vehicles pay $3,000 per year for insurance. Uber and Lyft cut deals with insurance companies to lower that to $600/year to make their drivers more competitive. The drivers must also have personal insurance on the cars when a fare isn't present. Laws are now being enacted to allow "Transportation Network Companies" (TNC, as they generically refer to Uber and Lyft) to get "fair access" to markets in NY without this bureaucracy. I've developed an app which will qualify the "local" livery company to operate as a TNC to reduce their costs and, in turn, reduce the cost to consumers…will the government allow that? Will the Dutchess and Ulster laws pass? This is more about big money, venture capital and paid lobbyists getting to the legislative leaders than about the small livery companies trying to stay in business. We'll see if the legislation and the bureaucracy will enable the small livery services to morph into a mini-Uber. The legislation enables the Commissioners of Insurance and Motor Vehicles to regulate the "TNC" businesses. The legislation doesn't prescribe how that will be managed nor how much it will cost. By the way, did you notice that the legislation for Uber includes a lighted icon in the front and rear of the car to identify it? That's as much for passenger safety as it is to make it easier for the T&LC police to pull over the cars if the legislation doesn't pass. Not much likelihood of that, though, given the amount of money changing hands in Albany.

Long story short – Uber is more about business processes than it is about new applications.

Past Technology Evolution Examples

Going back to the graph, there is much to learn from prior experiences of the Fortune 500 and government agencies introducing new technology.

Learn from the Fortune 500 – the good:

Benefits processing: Hewitt Associates and Fidelity continuously advance their capabilities. They provide integration with employer payroll systems. They have up-to-the-minute accuracy of consumer records. They provide immediate access to accruals and eligibility. They've adopted web and mobile technologies as Systems of Engagement, including biometric security authentication.

Claims processing: Travelers Insurance has historically reduced IT and people expense 10% annually while improving response times. Claims agents leverage mobile technology for accidents and disasters as input to “legacy” systems.

Learn from the government – the good:

The FBI and VA leverage mainframe virtualization to avoid IT costs of millions of dollars over commodity systems, while improving security, resilience and service level agreements. They run the same code in a different container with a superior operations model and lower costs.

All of the above use Hybrid technology which includes the mainframe.

Learn from the government – the bad:

Marine Corps – hosted by an IT supplier that gouges them on mainframe costs – three times what it would cost if they hosted it themselves. The IT supplier takes the floor space, energy and cooling costs for an entire data center and bills them only to the mainframe users. The IT group claims commodity systems wouldn't be affordable if they were "taxed" with those costs. That's why understanding the Total Cost of Ownership is a critical success factor when comparing mainframe and commodity system costs. Unfortunately, regulations are in place that mandate that the Marine Corps use that particular IT supplier. Other groups have bucked that policy to save money.

The US Postal Service was not competitive in package tracking versus UPS and FedEx. They realized they needed to add new applications and wanted modern programming to do it. That included new engagement systems at the delivery vehicles via mobile technology…that's the good. The bad: they spent hundreds of millions of dollars on redundant "commodity" IT infrastructure and copied key data and applications from the mainframe in order to host the new applications, while leaving the mainframe running. Testing and benchmarking have demonstrated that adding the new applications to the existing mainframes would have avoided millions in costs and operational complexity, while simplifying the architecture and improving SLAs. With package shipping volumes increasing annually, they've continued to upgrade the mainframe each year. They are just spending too much overall. While they collaborate between the systems by moving data, they could save more if they shared the data in real time.

Prescription for change

While a prescription for change is forthcoming in the CIO's future blog posts, let's hypothesize some changes for their benefit.

Modernization of the development environment

Rational tools move mainframe application development to commodity systems. This moves 80% of the development work off the mainframe to reduce IT costs. The tools help modernize and document the "legacy" applications and simplify their maintenance. They provide seamless testing against the mainframe and other platforms of deployment choice. One large business has 1,000 Java developers for commodity systems, 400 Cobol programmers for the mainframe and 50 developers familiar with both Java and Cobol to enable hybrid programming and integration. All use the same Rational development front end. From a skills perspective, mainframe development can now look and feel exactly the same as development on commodity systems. This eases the skills and knowledge requirements to get started.

Language modernization:

Cobol copybooks – the means of defining data structures – are now sharable with web services, and those services can be launched from Cobol. More on that in a moment.
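Because a copybook is just a fixed-layout record description, it maps cleanly onto the JSON payloads that web services exchange. The sketch below illustrates the idea with a made-up three-field layout parsed in Java; a real integration would generate this mapping from the actual copybook rather than hand-code offsets.

```java
// Hypothetical copybook, for illustration only:
//   01 CUSTOMER-REC.
//      05 CUST-ID      PIC X(8).
//      05 CUST-NAME    PIC X(20).
//      05 CUST-BALANCE PIC 9(7)V99.
public class CustomerRecord {
    final String id;
    final String name;
    final java.math.BigDecimal balance;

    CustomerRecord(String fixedWidthRecord) {
        // Offsets follow the made-up copybook above: 8 + 20 + 9 characters.
        this.id = fixedWidthRecord.substring(0, 8).trim();
        this.name = fixedWidthRecord.substring(8, 28).trim();
        // PIC 9(7)V99 has an implied decimal point: the last two digits are cents.
        String digits = fixedWidthRecord.substring(28, 37);
        this.balance = new java.math.BigDecimal(digits).movePointLeft(2);
    }

    String toJson() {
        return String.format("{\"id\":\"%s\",\"name\":\"%s\",\"balance\":%s}", id, name, balance);
    }

    public static void main(String[] args) {
        String raw = "00000042John Q Public       000123456";
        System.out.println(new CustomerRecord(raw).toJson());
    }
}
```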

Chip Speed

The System z13 server runs its processor cores at 5GHz. Benchmarks show that Java runs faster there than on any other platform. The video referenced earlier provides specifics. With direct access to databases and files, business applications can have better performance than on other architectures. With fault tolerance and an improved hardware and software security architecture, the result is a very price-competitive hosting environment for new workloads.

Risk and Fraud analytics

Financial services businesses are doing real-time analytics in the middle of their System of Record transaction programs to assess risk and avoid fraud. Leveraging the copybook capability, they can call out to the 1,000+ processors in the IBM DB2 Analytics Accelerator (IDAA, based on Netezza technology) that have been tied into the mainframe to speed time to resolution.

Callsign – a biometric authentication and fraud prevention technology – can leverage a modern smartphone to identify the owner/user of the device before they actually answer a challenge, which could be a fingerprint, facial recognition or voice. Using the accelerometer in the phone, the GPS and pressure points on the touch pad, along with historic behavior patterns, Callsign can tell by the way a person is holding a phone whether it's the original user or someone else before offering them the authentication challenge. This type of technology can be used at kiosks in regional/branch offices to enroll users and make sure they are the real person requesting later service. No need for a card; a unique user id is sufficient to provide authentication. True, many low-income users/beneficiaries may not have a smartphone. Alternative mechanisms can be deployed for challenge/response authentication. But providing a low-cost device to beneficiaries for this purpose, a more modern version of the "RSA token devices", might reduce overall costs for low-income users. Watch this space. One of the Callsign customers, a large credit card processing bank, is calling out to Callsign from a "legacy" mainframe transaction program to authenticate that the real customer is at the point of sale or ATM device requesting service. Compare that to an experience I had recently. Visiting 500 miles from home, I went to a big box department store and paid with a valid credit card. Everything was good, but the transaction was denied. I then used a debit card, same bank, same credit card service, but used my PIN code. The transaction was approved. As I walked out of the store, I got a call from the credit card provider asking me if I had just attempted to use the card. They restored my card to service immediately. Use of the Callsign capability eliminates the human intervention, spares my embarrassment and speeds transaction processing.
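A hedged sketch of that callout pattern: the transaction path asks an external risk-scoring service whether the device and behavior look like the genuine customer, and only then approves. Callsign is one example of such a service, but the endpoint, payload and score format below are invented for illustration; in a real deployment the call would carry a tight timeout so the transaction SLA is preserved.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RiskCheck {
    // Hypothetical risk-scoring endpoint; not an actual vendor API.
    private static final String RISK_URL = "https://risk.example.com/score";

    static boolean approve(String accountId, String deviceId, double amount) {
        try {
            String payload = String.format(
                "{\"account\":\"%s\",\"device\":\"%s\",\"amount\":%.2f}",
                accountId, deviceId, amount);

            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofMillis(200))   // keep the transaction SLA
                    .build();
            HttpRequest request = HttpRequest.newBuilder(URI.create(RISK_URL))
                    .timeout(Duration.ofMillis(300))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // Assume the service returns a plain numeric risk score; approve low risk.
            double score = Double.parseDouble(response.body().trim());
            return score < 0.5;
        } catch (Exception timeoutOrParseError) {
            // If the risk service can't answer in time, fall back to existing fraud rules
            // rather than blocking the customer outright.
            return fallbackRules(accountId, amount);
        }
    }

    private static boolean fallbackRules(String accountId, double amount) {
        return amount < 500.00; // placeholder rule
    }
}
```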

Going a step further, Callsign runs on Amazon Web Services (AWS) or a private cloud today, so the transaction systems call out to it over a distributed connection. There are about 15 "risk tests" that can be done, but typically just three can be run and the results fed back in the time allowed for a transaction to complete. We've hypothesized that if Callsign were running on a mainframe, with a memory connection to the transaction programs, 10 risk tests could be done while maintaining the service level agreement of the "legacy" transaction programs. Stay tuned for future updates in this space.

The NSA has proven that leveraging a Google-like search capability can help stop attacks. Why not use web crawling software to look for fraud and overpayments? Leveraging online obituary information, an insurance company or benefits provider could determine if a person has died and is no longer eligible for services. In addition, it can predict the services that may be available to the survivors of that person. This can speed up payments to those survivors. These web crawlers can feed a data warehouse searching for fraud, but also feed real-time systems to prevent fraud on new transactions.

Collaboration is necessary to move forward:

Education: partnerships between vendors, businesses/agencies and schools are necessary to create the next generation of IT professionals (programmers and operations) as well as to update the skills of existing personnel.

Operations: Today, fiefdoms around individual architectures or administrative domains exist that create/foster conflict and drive up IT costs. Not everyone is going to get along. Organizational politics and budgets have as much to do with fiefdoms as anything. Leveraging the Rational developer example, where a small group of people have some hybrid responsibility, can lead to breakthroughs in processing schemes.

Legislation: Where necessary, this can be valuable to enable a leap toward something new that will provide value and reduce costs.

Summary

There is no right or perfect answer to any IT decision. But as the saying goes, "throwing the baby out with the bathwater" isn't a good approach; it leads to unintended consequences. Leveraging a hybrid computing, operational and development environment can make a large shift toward "modern" application models. Happy programming!

What happens after a breach? The vultures descend

There have been so many breaches. In every case, the business or agency affected realizes that it must spend money to fix the breach. That's when the vendor sales teams come out of the woodwork. Everyone has something to sell. New analytics, new detection mechanisms and new management offerings are just some of the products. However, in almost every case, a quick decision on a new product would be like putting lipstick on a pig. At the heart of a breach is a fundamental problem with the people, process and technology associated with security. While a witch hunt for the root cause may be happening, it's important to take a step back and take stock of what's good and bad about what is already in place. Re-examine processes and find the gaps that need to be considered. Most important: what is the scope of those processes?

Too many systems to manage securely

Too often, a business will have multiple domains that are independently managed. For example, there may be separate domains for management of desktops, web servers, application servers, data warehouses, transaction servers and database servers. My experience has shown that when a breach is found in one area, the other areas breathe a sigh of relief because it is not their problem. That's a bad attitude. Business solutions are end to end and cross several of these domains. As such, a business should be looking to coordinate security and harden processes across domains rather than manage them individually.

Create an Enterprise Security Hub

The IBM mainframe is an ideal hub for centralizing security focus. For the same reasons that IBM calls the mainframe System z – z for zero downtime – it could just as well have been System s for fail-safe security. IBM has spent years of hardware and software R&D hardening the mainframe for business resilience and security and includes that level of functionality in the basic hardware and software systems. The bulk of the built-in security services meet industry standards for interoperability and programming interfaces. As a result, these services can be executed on behalf of any other system or server that is interconnected with them. This includes usage as an authentication server, managing logs, providing real-time analytics to prevent loss and acting as a central site for audit management. Unfortunately, no sales person is going to run to a business to brag about these capabilities. The unintended consequence, for IBM and for its customers, is that with all this capability "inside the box" there is no commissioned sales force pushing these functions. IBM has a wide variety of software solutions that it sells for distributed domains. It has software to manage the mainframe better. However, there is no end to end play that focuses on the mainframe as the central hub for enterprise security.

Wealth of Documentation

All is not lost, however. IBM and their Business Partners have a wealth of documentation and capabilities to demonstrate the strength of the mainframe for enterprise security. European customers can attend an excellent security conference in Montpellier, France from September 29 to October 2. The IBM Design Centers provide briefing centers and proof of concept capability tailored to an organization’s needs. There are IBM Redbooks describing the security functionality, including cryptography, analytics and Digital Certificate management for global authentication.

Shared Credentials to sign on via Biometrics and Multi-Factor authentication

There is also a wealth of up-and-coming vendors that can contribute to end to end security. Two that I've been working with, Callsign and Cyberfy, can leverage a mobile device for multi-factor biometric authentication in a consistent way across platforms. Throw away your userids and passwords that could be key-logged and stolen and move to something that is truly unique to an individual. With these tools, a common authentication is used and managed across a wide range of servers and applications. Common authentication is the center of cross domain security management. Without a consistent authentication mechanism, it becomes extremely difficult to correlate security activities across domains.

Operational Collaboration

I started this post talking about breaches. A mainframe can provide and collect a wealth of forensic information across systems. Because the mainframe hosts a tremendous amount of financial and personnel transaction processing, this information can be used in real time to prevent fraud, thanks to the mainframe's ability to run multiple transaction and database servers simultaneously, with integrity, while satisfying a service level agreement. This combination of functionality can work well with network attached applications and user devices.

These are the tenets that provide the foundation for hardening an environment. If a business or agency looks at what it already has and finds a mainframe, it will find a wealth of capabilities to lock down its end to end systems. The most important element is collaboration across organizations. Through collaboration, organizations can find weaknesses and inconsistencies. Once these efforts are undertaken, the gaps can be identified and the acquisition of new products can be done intelligently.

Start Locking down systems before it’s too late

If anyone needs assistance getting started locking down their systems, give me a call. Don't wait until you've been breached; it will only cost more to solve the problem then. As has been said, an ounce of prevention is better than a pound of cure.

Webinar April 15th: Mainframe Security – How good is it? Unfortunately – only as good as the End User device accessing it

Vicom hosts a Lunch 'n' Learn Webinar presented by Raytheon

April 15, 2015 12-1PM EDT

Call in: 888-245-8770 passcode 206580

Presentation Slides will be posted here prior to the call

Presentation Abstract:

For years, the IBM mainframe has been the benchmark for secure transaction and database processing. It's considered hacker resistant via a hardware and software architecture that inhibits buffer overflows, the favorite entry point of Trojan horses, viruses and worms.

The modern PC, smartphones and tablets are rife with malware and identity spoofing. As long as an end user is the systems programmer for these devices, there will continue to be problems. If a userid can be spoofed on the end user device, there isn't much to prevent an attacker from accessing the back end servers of all types to which these devices may be connected. Businesses spend enormous sums looking to detect problems and attempting to better manage these devices.

Raytheon Cyber products take a different approach. They compartmentalize infrastructure to create a more secure computing environment, e.g. separating Internet traffic from internal business systems. They've simplified operations so that end user behaviors and server access barely change. The result is an environment that prevents malware intrusions and data theft. Detection products are nice, but how much will a business spend on unplanned forensic efforts and brand-loss marketing should a theft occur? Raytheon's approach simplifies the hybrid deployment model, reduces the risk at back end servers, such as the mainframe, and can help to lower overall security deployment costs.

This session will introduce the "battle tested" Raytheon Cyber products to commercial customers. It will demonstrate how compartmentalization of networks, data and applications can simplify end-to-end operations while preventing attacks. It will show how their technology is complementary to existing hybrid infrastructure. They'll also introduce some of the future deployment models they are considering to further prevent attacks on electronic business.

Presenters’ Bios:

Jim Porell is a retired IBM Distinguished Engineer. His IBM roles included: Chief Architect of Mainframe Software (10 years), led Business Development for the mainframe (3 years), Security and Application Development marketing lead (3 years), Chief Business Architect for IBM Federal Sales (2 years). He’s presently a partner at Empennage, developing its marketing and investment possibilities. Jim is also on the Advisory Board of startups: Callsign and Malcovery. He’s a sales consultant to Vicom Infinity. In each of these roles, Jim is focused on the secure and resilient deployment of Hybrid Computing solutions across server architectures and end user devices (e.g. smart phones, tablets, PC’s).

Jeremy A. Wilson is a member of Raytheon's CTO Council and the Director of Customer Advocacy. Mr. Wilson works closely with Raytheon's Executive Leadership Team, focused on solving information sharing challenges for their extensive portfolio of customers, including the Department of Defense, the Intelligence Community, and Civilian and Commercial agencies. Mr. Wilson has over 15 years' experience in Multi-Level Security and Cross-Domain Solutions. Prior to joining Raytheon in 2005, he served as the Chief Technology Advisor and Architect for both SAIC and General Dynamics. In these roles, Mr. Wilson held a vast number of responsibilities such as System Design, Technical Assessments, Security & Policy Auditing, Strategic Planning, Proposal Generation, and Certification & Accreditation. Mr. Wilson has spoken at a number of technical events and sessions and is a member of the Armed Forces Communications and Electronics Association (AFCEA), National Defense Industrial Association (NDIA), Association of Information Technology Professionals (AITP), and the Information Systems Security Association (ISSA).

Emerging Technology and the role of the mainframe

I recently was given a list of emerging technologies and asked how the mainframe, and in particular z/OS, is relevant to those technologies. It's a great question. It's important, though, to understand the source and motivation of the question. Sometimes the questioner is looking for an excuse to bury the mainframe, with the unintended consequence of never looking for, or finding, synergy with it. In other cases, there is genuine curiosity. I'm going to go with curiosity in this case and give my best effort to respond.

Systems of Record are where data resides. Systems of Engagement are where information is accessible to end users. In many cases, PC systems are both a System of Record and a System of Engagement, so there is a thought that these "commodity" systems are "good enough" to handle all workloads. That couldn't be further from the truth. Complexity, scale, security and business resilience are some of the problems that occur when commodity devices become the sole "solution" to problems. However, there is another major problem: operational silos. These occur when one organization solves "one problem" while another organization solves and manages another business problem. Complexity and risk occur when multiple organizations depend on each other to replicate or share data for different purposes. This is where system security and resilience are at risk. It also requires duplication of data and duplication of effort to manage that data. Any duplication adds to costs.

I like a different approach. It’s based on leveraging the best of all technologies: commodity front end devices, such as PC’s, Smart Devices and the Internet of Things (definition later); Commodity servers for data transformation and hosting applications; large-scale servers for hosting and managing access to data, including a large amount of data manipulation and processing. In this hybrid environment, the goal is to bring the applications to the data. There will never be a single copy of data (e.g. backups, disaster recovery, cloning), nor will there ever be a single server to process that data. However, by sharing data and reducing copies, a simpler deployment model is possible. In addition, cross-platform security and resilience should be a part of the solution so that data is processed on a “need to know” basis and applications are highly available, end to end. It doesn’t make sense to have a back-end server (System of Record) that is 99.999% available if the front end infrastructure (System of Engagement) is full of availability and security issues. Our goal is to provide an end to end deployment infrastructure that provides efficient integration across the components and technologies necessary to meet a business’ workload needs. In the process, the business or government agency should dramatically reduce operational cost and complexity, while improving security and business resilience. This infrastructure can meet or exceed service level agreements and provide investment protection for the future.

Given the above context, here are a number of emerging technologies and my view of where a hybrid approach can help them or not. I say my view, but the reality is that my good friend George Thompson of IBM provided the first pass at this list. We've been collaborating for many years. In my role as a consultant to Vicom Infinity, George is the principal IBMer I've been working with toward challenging customers to save $2 million on IT expenses. So here we go:

1.     Digital Security

There are many ways to look at this. But most importantly, collaboration is king. There will never be a "Single Sign On". However, there are multiple sign-ons with shared credentials. A good example is that Apple has the same sign-on credentials for its Apple Store, iMessage and iCloud, among other applications. Taking it a step further, they've introduced biometrics through fingerprint readers on their smart devices. Other forms of multi-factor authentication can be deployed. There needs to be a source for "the truth". Several large banks and governments have leveraged z/OS RACF for hosting digital certificates as the basis for authentication across applications and devices. One bank has stated that it avoids $16 million in annual license fees from third parties by hosting and managing its own digital certificate infrastructure on its existing mainframes.
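Whatever system hosts and issues the certificates (the post mentions RACF on z/OS as one option), the client side of certificate-based authentication looks roughly the same everywhere. Below is a minimal, hedged sketch of a Java client presenting its certificate for mutual TLS; the keystore file, password and URL are placeholders.

```java
import java.io.FileInputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;

public class MutualTlsClient {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();          // placeholder

        // Client certificate and private key, e.g. issued by the enterprise CA.
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client-cert.p12")) { // placeholder file
            keyStore.load(in, password);
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), null, null);    // default trust store for the server

        // Any server that trusts the issuing certificate authority can authenticate this
        // client, which is what makes a central certificate infrastructure so useful.
        HttpClient client = HttpClient.newBuilder().sslContext(sslContext).build();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://secure.example.com/api/whoami")).GET().build(); // placeholder URL
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```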

Beyond authentication are the digital footprints necessary for forensics in Cyber security: tracing fraud, theft and rogue insider activities. There are many products that can collect logs and monitor those footprints. z/OS and Linux on z have been leveraged as collectors of these logs to allow processing across workloads and to look for anomalies that might otherwise go undetected if each organizational unit processed its own logs. The New York Police Department (NYPD) deployed a product from Intellinx that captured end-user activities across their agency. Up until that deployment, each of their 30+ applications had a unique home-grown audit capability. Intellinx enabled them to eliminate the siloed audits and combine them into a single commercial off the shelf (COTS) product. It also allowed them to find anomalies across the entire application suite that may not have been easily detected, manually, by their siloed offerings.
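The value of pulling logs from many silos into one place is that correlation becomes a simple query. As a toy illustration (the event format is invented, not any product's), the sketch below flags users who fail authentication on more than one system within the same analysis window.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class CrossDomainCorrelation {
    // Invented, minimal event shape: which user failed to authenticate on which system.
    record AuthFailure(String user, String system) {}

    // Flag users with failures on two or more different systems in the analyzed window.
    static Set<String> suspiciousUsers(List<AuthFailure> events) {
        Map<String, Set<String>> systemsPerUser = events.stream()
                .collect(Collectors.groupingBy(AuthFailure::user,
                        Collectors.mapping(AuthFailure::system, Collectors.toSet())));
        return systemsPerUser.entrySet().stream()
                .filter(e -> e.getValue().size() >= 2)
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<AuthFailure> window = List.of(
                new AuthFailure("alice", "web-portal"),
                new AuthFailure("alice", "mainframe-tso"),
                new AuthFailure("bob", "web-portal"));
        System.out.println(suspiciousUsers(window)); // prints [alice]
    }
}
```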

2.       Virtual Personal Assistant

Cognitive computing is evolving rapidly. Speak into your phone or tablet.  Ask a question or request a task be executed and your “wish” becomes their “command”.  Simple requests are executed on the device itself (e.g. call a person).  But many requests go to a central “cloud based” service.  The request gets parsed for context, a knowledge base is queried and the result is provided to the requestor.

I don't see the mainframe "operating" the Virtual Personal Assistant at this time. However, I do see the mainframe as a source for the knowledge base for a wide variety of applications. If you ask to query your account balance, does the bank make a copy of that "up to date" business record and send it to the VPA server? No. The VPA is the System of Engagement. It translates the request into a query. The query is sent to the relevant server, which processes the request and sends the results back. The VPA then translates that into the spoken word or some form of viewer by launching a device app to display the results. These are not mutually exclusive processes.
Going back to Digital Security, the back end server that processes the query could use the Digital Security provided by the previous authentication of that device. It could also send a challenge request directly to the device, as a form of multi factor authentication, to ensure it wasn’t a fraudulent request, such as a phishing attempt. Collaboration is critical.

3.       Smart Workspace

I found an interesting definition at a Johnson Controls website. What was most interesting to me is that I know Johnson Controls from their utility infrastructure monitoring devices, also known as SCADA, or supervisory control and data acquisition, devices (e.g. thermostats, system monitors). And that leads me to think of the security of those devices. But I digress. Their vision is around Social Computing, Mobile and Collaboration, so that the workplace of the future is a virtual office where you feel a sense of community with your peers rather than feeling isolated. This includes document collaboration, video and screen sharing, and smart walls/boards/screens that are touch sensitive. These can all be considered the System of Engagement. What's the mainframe and z/OS role? The source of the files/documents. The authentication server that coordinates the integration of each of the pieces of this "virtual community". The fault tolerant backup of the critical data elements. The workflow scheduler that "turns on" and coordinates the myriad parts of a very large virtual community.

4.       Software-Defined Anything (SDx)

The idea here is that instead of physical servers and physical devices or appliances, the IT world evolves to virtual appliances and virtual application images. The mainframe offers three major virtualization capabilities:

1. Logical partitioning – PR/SM LPAR – enables the mainframe to be carved up into separate entities. This really is physical partitioning, though, and probably doesn't apply.
2. z/OS – originally known as MVS, for Multiple Virtual Storage – for its ability to run multiple applications and data types within the same operating system image.
3. z/VM – with its ability to run multiple operating systems, and therefore multiple application servers, simultaneously and on demand.

Within z/OS and z/VM, there are software-defined networks, memory and storage that enable direct sharing between workloads. In some cases, with new hardware definitions, only pointers to data are shared between applications to dramatically improve latency, reduce virtual and real memory use, and improve the security, resilience and scale of the end to end workload.
That's not to say that z/OS and z/VM are the answer to Software-Defined Anything. The "Anything" is workload and solution dependent. These systems can participate effectively as part of a bigger solution to reduce costs and improve the solution qualities.

5.       Affective Computing

This is an area that probably doesn't have direct ties to a mainframe. As defined on Wikipedia, it spans computer science, psychology and cognitive science. Think about robots that attempt to mimic human activity.

I don’t see arms and legs protruding from an IBM mainframe nor a mainframe chip within a robot, yet. I still see mainframe connections. One is through security. These robots need to be smart. They need to get their “smarts” from some source. That source needs to be secure. Authentication must occur. The robot will then “do things”. Are they transactional? These are things that can be done with a mainframe.

6.       Neurobusiness

The Gartner Group definition is the "capability of applying neuroscience insights to improve outcomes in customer and other business decision situations." To me, insight equals analytics. Analytics on the mainframe and z/OS is fantastic. Why? It's got the data. There are many businesses that look at trends, anomalies and other analytic insights to improve sales, identify fraud, forecast future trends and leverage existing data and analytic capabilities on the mainframe. There are also businesses that run analytics on commodity servers against commodity-hosted data and join those results with analytics run on mainframes against mainframe data.

The definition, looking at other sources, is also applied to training and decision making. I don't necessarily see the mainframe participating in that aspect.

7.       Prescriptive Analytics

Described by Wikipedia as the third and final phase of business analytics (BA), which includes descriptive, predictive and prescriptive analytics. Vendors such as SAS and IBM's SPSS have been on z/OS for many years. For many years, SAS has described the limited value of "looking in the rear-view mirror" rather than looking ahead to where you can get value. The mainframe has plenty of the business data. These vendors have brought the applications to the data in order to gain the insight and provide the business value.

8.       Data Science

Another name for analytics. There are multiple parts to analytics:

  • The data.
  • The application that analyzes the data.
  • The application that presents the results.

The mainframe has long been excellent at hosting and processing the data. The System of Record.  Data visualization is the role of the System of Engagement and commodity hosted devices. If a business wants to “copy the data” in batch, host it on a commodity server and then process it and display it on a commodity device, that’s their prerogative. But what does that cost them?

  • Time – necessary to make the copies.
  • Network – bandwidth to make the copies.
  • Storage – to host temporary and production copies of the data.
  • Compute Capacity/Scale – that’s used to move the data, instead of processing it.
  • Environmental – energy, floor space and cooling for the copies of data.
  • Money – for all this “excess” capacity.
  • And lest we forget – Security – to make sure that the “need to know” aspects of the particular data copied is handled the same, regardless of where it resides and
  • Resilience – back up capabilities for all the extra servers and data.

Needless to say, many businesses are leveraging near real-time analytics against the System of Record hosted on a mainframe and leveraging the best capabilities available on mobile devices and PC’s for the visualization of the results.

9.       Smart Advisors

This is a System of Engagement. It could be a human or it could be the dissemination of data to a human. This is not the role of the mainframe. That Advisor needs to get its “smarts” someplace. As demonstrated earlier, the mainframe, z/OS, their data and their analytic processing can contribute to those “smarts”.

10.   Speech-to-Speech Translation

This is a System of Engagement. Not necessarily the role of a mainframe. Once it’s translated to actions, the mainframe is happy to oblige and process the request. It can also work to ensure the authentication of the user/device requesting the translation, when necessary.

11.   Internet of Things (IoT)

This typically refers to the myriad of new devices that are being made accessible via the Internet, e.g. home appliances, light bulbs, cars and cameras. As a result, the Internet addressing scheme, IPv4, was running out of addresses, so a new addressing scheme, IPv6, was created to handle the demand. Most of these devices are Systems of Engagement. They need to securely access Systems of Record.

The mainframe and z/OS have introduced IPv6 to enable direct connection to those devices. As defined earlier, the security of those devices and the monitoring of them can be handled on a mainframe. These devices are not islands, nor is the mainframe. They can easily collaborate to bring the best of all worlds into new solutions and other emerging technologies. IBM recently launched a foundation for the Internet of Things.

12.   Natural-Language Question Answering

At the front end, this is a System of Engagement. To get the answers, Systems of Record and knowledge bases are required. I don't envision the mainframe doing the natural language parsing, but I certainly envision its role in preparing the answer and securing the connection from end to end.

13.   Complex Event Processing (CEP)

I love math. Patterns, fractals, recursion. Macro and micro views. Computers are outstanding at repeating patterns. Many of the problems that I see in contemporary society arise because we are too close to a problem. If you step back a little and look at "the bigger picture", you can see patterns repeated.
Event processing is like a Dispatcher. Every operating system has a dispatcher at a kernel level. Many operating systems cannot dispatch disparate work simultaneously because those systems don’t know how to balance the needs of the many and prioritize them against the needs of a few. Deadlocks, overcommitment and race conditions occur for unsuccessful systems. z/OS and the mainframe have demonstrated excellent capabilities for avoiding these issues and provide granularity, at a business level, for balancing the processing needs. More importantly, many years ago, they created workflow processing that deals with the success or failure of prior tasks to determine the next task. At a micro level, z/OS has been executing Complex Event Processing for decades.
Now, the term applies to end to end business solutions that include multiple Systems of Engagement and multiple Systems of Record. There are several middleware solutions that enable the mainframe and z/OS to participate and to manage CEP and be managed by CEP across a disparate group of systems and devices.

14.   Big Data

This is primarily dealing with the System of Record and the analysis and processing of the data within Systems of Record. z/OS and the mainframe have demonstrated the ability to process their own data, the data of other systems and to have their data processed by other systems.
Security, storage management and resilience of the data on both z/OS and other servers can be managed from z/OS as well.

15.   In-Memory Database Management Systems

As a database, this is a System of Record. For many years, middleware, such as CICS for z/OS, has provided an in-memory database management system. New applications are looking for SQL and other industry standard interfaces to these in memory databases. z/OS and the mainframe are capable of meeting these needs and working in collaboration with other systems that provide these capabilities. In addition, z/OS and the mainframe can provide authentication and resilience services for these databases.
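For readers who haven't worked with an in-memory database through standard interfaces, here is a tiny sketch using the H2 database (chosen purely for illustration; it is not a z/OS product) to show that the SQL and JDBC surface is identical to that of a disk-backed System of Record. It assumes the H2 jar is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemorySqlDemo {
    public static void main(String[] args) throws Exception {
        // "mem:" keeps the whole database in memory for the life of the connection.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement stmt = conn.createStatement()) {

            stmt.execute("CREATE TABLE accounts (id INT PRIMARY KEY, balance DECIMAL(12,2))");
            stmt.execute("INSERT INTO accounts VALUES (1, 100.00), (2, 250.50)");

            // Standard SQL, standard JDBC: nothing in the application code reveals
            // whether the store is in memory, on disk, or behind a remote server.
            try (ResultSet rs = stmt.executeQuery(
                     "SELECT id, balance FROM accounts ORDER BY id")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getBigDecimal("balance"));
                }
            }
        }
    }
}
```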

16.   Content Analytics

Content analytics is the act of applying business intelligence (BI) and business analytics (BA) practices to digital content. Companies use content analytics software to provide visibility into the amount of content that is being created, the nature of that content and how it is used.
This is another area of System of Record. z/OS and the mainframe have a variety of middleware associated with content management, including archive functions for media streams, documents, and other non-relational data types. In turn, there are analytic solutions available on the mainframe and off it to process that data.

17.    Hybrid Cloud Computing

By definition, this includes internally managed clouds in concert with externally accessed clouds by a business or agency. zEnterprise is the premier Hybrid Cloud platform supporting Public, Private and Community clouds across heterogeneous architecture and supporting many cloud infrastructures including OpenStack. Enough said.

18.   Machine-to-Machine (M2M) Communication Services

This is related to the communication between the devices associated with the Internet of Things and the success of IPv6. As stated earlier, the mainframe and z/OS are already enabled for this form of communication.

19.    Cloud Computing

Somewhat redundant with the Hybrid cloud computing item above, both z/OS and the mainframe provide cloud hosting capabilities. Allen Systems Group’s Cloudfactory/Mainframe has enabled much of the functionality of z/OS to be accessed and provisioned via an interface that is similar to Amazon Web Services.

20.   Gesture Control

By definition, this is a human computer interface and therefore part of the System of Engagement. This is not a role that I envision the mainframe undertaking. The gestures will be interpreted and actions taken as a result. Some of those actions may be directed toward transactions or applications hosted on the mainframe and z/OS.

21.   In-Memory Analytics

There are several approaches to solve this problem. Architecturally, there are some valuable differences.

  1. For the x86 and Power architectures, IBM has delivered DB2 10.5 with BLU Acceleration, where typical queries in an analytics workload have been shown to be more than 1,000 times faster than other leading databases. Innovations in BLU Acceleration include:
    • "Dynamic in-memory" columnar processing, providing not only dramatic analytics performance – up to 25 times faster – but also the ability to scale for expanding Big Data needs without the limitations imposed by traditional in-memory systems.
    • "Load and go" simplicity, which allows clients access to blazing-fast analytics transparently to their applications, without the need to develop a separate layer of data modeling.
    • "Parallel vector processing" for high-performance data analysis in parallel across different processors.
    • "Actionable compression," providing as much as 10 times storage space savings, where data no longer has to be decompressed to be analyzed.
    This DB2 is typically called the LUW – Linux, Unix and Windows – version. The BLU acceleration is not available in the Linux on System z implementation of DB2. However, through database connection middleware, z/OS data can be accessed by this product.
  2. Hadoop is an open source implementation of a large-scale analytic server that has been deployed on the mainframe by IBM and by Veristorm's zDoop offering. This can leverage most of the z/OS System of Record databases and make them accessible to the Hadoop File System running on the mainframe or on other Hadoop servers.
  3. Another hybrid implementation is the IBM DB2 Analytics Accelerator (IDAA). This is a co-processor that works with flash (memory) copies of data on z/OS and can process a query 1,000x faster than z/OS might on its own. There are several operational benefits to this approach:
    • Security – the authentication, access control and logging are all done in the context of the z/OS user that initiated the query request. This simplifies audit and analysis of user behaviors.
    • Latency from OLTP to Analytics – it provides near real-time access to transactional data, where other systems might use a copy that takes a lengthy time to unload, transfer and reload before it is available for analytic processing.
    • Cost – it provides commodity analytic server costs without the extra management of an independent server instance and the security and resilience costs associated with that.

22.   Activity Streams

These are generated at a System of Engagement. Activity streams are generally associated with Social Media applications. There are a number of products that will take these feeds and coordinate them across Social Media platforms. I am not aware of any of them running natively on z/OS, though I do believe IBM Connections can run on the mainframe. Some z/OS management activities can be processed as activity streams and posted to social media. This is valuable where an internal wiki might be used to manage or display mainframe system status.

23.   Speech Recognition

Speech recognition research and development has been going strong at IBM for almost 50 years. Throughout that time, more than 200 IBMers have contributed to significant advancements in this field. That being said, it is a System of Engagement.
Middleware is available, as part of multi-factor authentication, to leverage speech recognition and voice patterns as a login method that can be passed to the mainframe. Speech recognition middleware can also be leveraged to start applications or tasks on z/OS and the mainframe. Customers have used these techniques to simplify the human interface to z/OS.

That’s the end of the list I received, but not the end of my thoughts on Emerging Technologies.

The Evolution of the Mainframe

Database Processing

By nature, database processing deals with the System of Record. IBM's DB2 was originally deployed in the early 1980's, after a successful research project. Later, on a separate code base, DB2 for Linux, UNIX and Windows (LUW) was created with a programming interface similar to the mainframe version. Mainframe customers demand consistency and a legacy they can count on. Once code is implemented by IBM or another vendor and successfully used in production, the customer expects it to run "forever". I'm working with a customer now whose original code was written in 1969; it is many generations of hardware and software old and out of service, but they are still working with integrators to keep it running.
With those requirements in mind, IBM developed a philosophy that many new technologies are deployed on the DB2 LUW version first. If successful, that functionality is later integrated into DB2 for z/OS. If unsuccessful, there is no harm in not adding it to the mainframe version, as it may be just a niche offering.

Evolution of new technology

Many of the System of Record emerging technologies listed in this blog will share a similar fate. Where they have not been implemented on the mainframe yet, they will be considered for the future based on their customer exploitation merits. You'll notice I used the phrase "not on the mainframe, yet" a couple of times above. That's because I believe it. Some of these emerging technologies will become ubiquitous and demand their place on the mainframe in much the same way as the DB2 technologies have evolved. Think about TCP/IP, Linux, Java and XML: once emerging technologies, now ubiquitously deployed on the mainframe and within z/OS. Even Linux inside z/OS…at least some of it.

Same code. Different Container. Different Operations Model

With the adoption of open source and open programming interfaces, there are few programs that can't be deployed on z/OS or the mainframe. But just because it can be done doesn't mean it should be done. For example, z/OS is branded as a UNIX system because of its UNIX System Services component, which means it can support a VT100 character-based terminal as an input device. So, using the vi editor, if you type a character, it immediately gets sent to the mainframe for processing. Type the next character: same thing. Move the mouse and you burn mainframe MIPS chasing it. Yes, that works, but it is a complete waste of mainframe processing. Instead, capture the edit on a PC or smart device using local processing. When the user is done, or hits Enter, send the bulk of the input to the server to be saved and processed. Data processing is what the mainframe is about. The punch card and the 3270 terminal are "old school" systems of engagement, and the mainframe has adapted well beyond them. The z/OS Management Facility is a new web-services-based implementation that can augment or replace the "old school" 3270 command line. New products, such as IBM zSecure and IBM Wave, put a graphic front end on the mainframe. It is collaborative, hybrid deployments such as these, rather than wholesale replacement of a legacy, that are the point.
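To make the point about local processing concrete, here is a toy sketch. The function names are mine and send_to_host() is a stand-in for one network round trip; the only thing being illustrated is how many times the host gets involved.

```python
# Toy illustration of character-at-a-time vs. batched input.
# send_to_host() stands in for one network round trip to the server; in the
# VT100/vi case every keystroke pays that cost, while the "capture locally,
# submit on Enter" model pays it once.

def edit_remote_per_keystroke(text, send_to_host):
    """Old model: every character typed is shipped to the host."""
    round_trips = 0
    for ch in text:
        send_to_host(ch)
        round_trips += 1
    return round_trips

def edit_local_then_submit(text, send_to_host):
    """Hybrid model: edit locally, send the finished buffer once."""
    buffer = []                 # local processing on the PC or smart device
    for ch in text:
        buffer.append(ch)       # no host involvement while typing
    send_to_host("".join(buffer))
    return 1                    # a single round trip when the user hits Enter

if __name__ == "__main__":
    noop = lambda payload: None
    text = "UPDATE CLAIM SET STATUS = 'PAID'"
    print("per keystroke:", edit_remote_per_keystroke(text, noop), "round trips")
    print("local buffer :", edit_local_then_submit(text, noop), "round trip")
```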

What’s your cluster look like?

The mainframe Parallel Sysplex solved clustered computing in a dramatically different fashion than commodity servers. Rather than separating data into smaller consumable chunks, spreading those chunks across database servers, attaching the database servers to clusters of application servers, and relying on round-robin workload balancing for some semblance of security, the mainframe did it differently: it shares all data across the "cluster". Each system has direct access to the data, much like a SAN. This access is now Fibre Channel based, via the FICON protocol, which runs on the same fibre-optic cabling as the FCP protocol but has better latency, security, error correction and redundancy. Shared by these clustered servers is the Coupling Facility, a separate mainframe server or logical partition with three functions (sketched in toy form after the list):

  1. High speed communications (peer-to-peer) between the “cluster members”.
  2. Lock manager for read/write access to the shared data across the cluster.
  3. Data cache for the most recently used data (in memory) to avoid disk access for high volume transaction processing.
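To make those last two functions a bit more tangible, here is a purely illustrative toy in Python. The class and method names are mine and everything runs in one process; the real Coupling Facility is dedicated hardware and microcode shared by up to 32 systems. This only sketches the lock-manager and shared-cache roles.

```python
# Purely illustrative sketch of two Coupling Facility roles: a lock manager
# serializing read/write access to shared records, and a shared cache holding
# recently used data so cluster members avoid a trip to disk.
import threading

class ToyCouplingFacility:
    def __init__(self):
        self._locks = {}            # record id -> lock (lock-manager role)
        self._cache = {}            # record id -> data (shared-cache role)
        self._guard = threading.Lock()

    def _lock_for(self, record_id):
        with self._guard:
            return self._locks.setdefault(record_id, threading.Lock())

    def update(self, member, record_id, value, write_to_disk):
        """A cluster member updates a shared record with integrity."""
        with self._lock_for(record_id):        # one lock across the "sysplex"
            self._cache[record_id] = value     # most recent copy stays in memory
            write_to_disk(record_id, value)

    def read(self, member, record_id, read_from_disk):
        """Reads are served from the shared cache when possible."""
        with self._lock_for(record_id):
            if record_id in self._cache:
                return self._cache[record_id]  # no disk access needed
            value = read_from_disk(record_id)
            self._cache[record_id] = value
            return value
```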

Commodity database server developers are beginning to make their own “coupling facilities” with a fraction of the functionality, performance, scale and reliability built into the mainframe Parallel Sysplex capabilities.

System Integrity

In 1973, IBM issued a system integrity statement for the mainframe. Paraphrasing: for any problem found in IBM code that could give an unauthorized person undeserved authority, promote a program from user space to kernel space without authority, or manipulate or destroy data without authority, IBM would provide a fix at no charge, as quickly as possible. Recently, the guarantee was updated. There is a reason the guarantee can be made, and it is baked into the hardware and software architecture of the mainframe. The hardware creates a boundary and a set of instructions to manage the transition between user and kernel (system) processing. It also enforces boundaries on memory within the operating systems. As a result, a "buffer overflow" between user space and kernel space will be detected by the hardware. So even if there were poor programming on the part of the operating system or middleware, the hardware is smart enough to detect the error and abnormally terminate the offending user program. That's not to say the mainframe is hacker proof; better stated, it is hacker resistant. The hardware architecture will detect and inhibit the majority of problems that stem from the kind of poor programming found on commodity systems. In fact, execution of "portable" code on the mainframe has found a number of integrity problems that might have gone unnoticed on commodity systems.

Multi-tenancy

Baked into the mainframe hardware and software is the goal that multiple applications, databases and, for that matter, operating systems can run simultaneously without negatively impacting each other, either in integrity or in performance. The mainframe workload management capabilities for managing service levels and scale are legendary. Many more eggs can be put in a single basket, and far fewer servers need to be deployed. As PC servers became rampant in data centers, VMware came along and began to offer up to an 80:1 reduction in the number of physical servers required to deploy workloads. Simultaneously, z/VM and Linux might offer an 800:1 reduction for the same workloads. Operational fiefdoms being what they are, an organization might be satisfied with the savings of an 80:1 reduction. Working collaboratively, there might be a 400:1 reduction in servers, with some on virtual blades and some on the mainframe.
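As a back-of-the-envelope illustration of those ratios (the 4,000-image starting point is hypothetical; the ratios are the ones above):

```python
# Back-of-the-envelope consolidation arithmetic using the ratios above.
# The image count is hypothetical; the ratios are the ones cited in the text.
import math

images = 4000                              # hypothetical number of workload images

x86_hosts     = math.ceil(images / 80)     # 80:1 consolidation  -> 50 physical servers
zvm_hosts     = math.ceil(images / 800)    # 800:1 consolidation ->  5 mainframe footprints
blended_hosts = math.ceil(images / 400)    # collaborative 400:1 -> 10 servers overall

print(f"{images} images, x86 virtualization alone: {x86_hosts} physical servers")
print(f"{images} images, z/VM + Linux:             {zvm_hosts} physical servers")
print(f"{images} images, collaborative blend:      {blended_hosts} physical servers")
```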

Summary

This is not an either-or proposition, though organizationally it might feel that way. Collaboration and sharing are at the foundation of the mainframe architecture. Now, through networks and multiple servers, that collaboration extends to Systems of Engagement.

A truly modern IT organization can realize the benefits of collaboration in application development, business resilience, time to deployment, operational risk and simply put, cost savings.

All of the above examples prove the point: there is real value in hybrid and collaborative computing. There are many offerings that provide similar value to those listed in this post.

Secure Hybrid Computing – White paper

I’ve written several times about Hybrid Computing, data patterns, reducing costs, etc. Well, it’s all wrapped up in a White paper now, available at this site. HybridComputing.CaseStudy.Whitepaper.V1

Readers of this blog know that it is mainframe centric. They might be surprised that there is no mainframe nor, for that matter, any other platform mentioned in the paper. Why? It's not about technology. It's about business: saving money and growing profits. If that doesn't get attention, I don't know what will. When systems are built collaboratively, instead of in fiefdoms, any business can save on a wide variety of server platforms.

In the paper, you’ll see that IBM and Vicom Infinity are working together with infrastructure that can be used to demo or develop proof of concepts to put these ideas to work. Don’t hesitate to reach out to me for additional information.

So get started thinking differently.  You can start with this blog and white paper. Happy programming. HybridComputing.CaseStudy.Whitepaper.V1

If you started with computers in the 1990’s and hated mainframes – you were right

Wow, I said it. It’s hard to admit for some of my peers, but looking back, there is a lot of truth to that sentiment. I’m not talking about traditional transaction processing used for insurance claims, point of sale, ATM’s, travel reservations, etc. That’s where money is being made and continues to be made, with many businesses leveraging mainframes as their System of Record.

No, that sentiment is meant for the folks who were, and are, interested in client/server deployments. Folks who wanted to use modern application development tools. Folks who wanted to incorporate multi-media and streaming into their workflows. These might be considered the System of Engagement today. Oh my goodness, the mainframe was not at all prepared to handle that work. Decisions were also being made about departmental computers. Can you imagine a mainframe of that era in a closet? A closet? Where was the chilled water, the raised floor, the humongous air movers?


It was a traumatic change within IBM to begin modernizing the platform: first, keep it relevant where it already had its "legacy" value (the prime objective), and second, make it relevant enough to capture new workloads. This was something tolerated, but not completely embraced, at the IBM executive levels. Ignoring the success of "desktop computing" and new business opportunities, I always felt that the mainframe sales mantra was kind of the opposite of the famous Star Trek line: to boldly go where we have already sold before. So the real goal appeared to be keeping current mainframe customers happy.

The mainframe is a dinosaur. The mainframe is dead

There were voices everywhere saying the mainframe was dead, that it was a dinosaur. It chilled the hearts and minds of senior IBM executives too. There was a "new breed" of executive that wanted to grow new business lines, like the PC Server and Power Systems. Where did their funding come from? The profit of the mainframe. As a result, mainframe R&D budgets were challenged. And even within the mainframe, as growth opportunities were considered, the development budget was spread ever thinner across a wider variety of efforts. Some believed it was a cash cow from which new opportunities should be funded. And then there were the mainframe believers who had to fight "the new status quo" to maintain their budgets. I could go on and on about these political battles inside IBM. There's some TMZ-quality stuff, but that's not what I want to discuss here.

Now, if you don’t want to read the story about what changes were made and how they were made, you can skip to the Summary of how bad it was and how good it has become.

The long road toward making the mainframe relevant to new workloads

Instead, I’d like to tell you about the changes that were made to make the platform relevant to new business opportunities. In the process, the positioning of the mainframe changed. It’s still a terrific System of Record, but now, unlike the 1990’s, it can be a viable System of Engagement for many functions, but not all. Because if I said all functions, I’d be lying every time I mentioned hybrid computing!

Let's start with a big mainframe app that made the decision to go "all distributed" and never looked back. That had as much to do with money as it did with technology.

Dassault never looks back. Goodbye mainframe

Dassault Systèmes' CATIA application is a CAD/CAM engineering modeling and rendering application. They were charging $8K a user for the MVS version. Each user had a specially built 3270 Graphics Adaptor for model renderings on a "green screen".

There were so many instructions required to do the renderings that each terminal had its own hardware interface. At the time, there were only 4,096 interfaces (called Unit Control Blocks, or UCBs) on the system. As more engineers designed modern airplanes, customers had to buy more systems, which were not cheap to acquire or operate. Ultimately, the z/OS operating system eliminated the constraint on the number of UCBs, which would have greatly helped CATIA customers and had the added benefit of helping all customers when Parallel Sysplex was introduced later.

Dassault grew bored with the proprietary 3270 GA box as CATIA's graphics rendering device. They wanted color and simpler graphics.

That is just what the UNIX community was demonstrating. So they created a new product, based on AIX. They thought the UNIX-based version should be priced at about $10K a seat; some cooler graphics were worth the price, but not too far off the current customer price. Then they did a market study. They found other UNIX-based engineering programs were getting about $25K a seat, and those systems were selling well. What a profit margin! So they decided to charge $22K a seat in order to undercut their competitors. They also yelled from the rooftops that "the mainframe is dead" in order to transition customers to the new model and, more importantly, the new revenue and profit stream. What was surprising to me was that Dassault was partially owned by IBM. My job was to talk to the "chief yeller" and convince them of the "new mainframe strategy", which was still vaporware from their perspective. All requests to stay on the mainframe fell on deaf ears, and rightfully so, in hindsight.

SAP tells IBM what’s wrong with the mainframe

A “big mainframe” app that transitioned to commodity systems was from SAP, which was mainly a bunch of ex-IBMers that had a really good idea. SAP R/2  was created with a mainframe back end application and database server which also drove the green screen terminal front ends.
SAP R/3, the next major version, was a "client/server" app that was all about the end user experience (GUI) and commodity server deployment models. It didn't run on the mainframe when it first came out. SAP was one of the voices bragging about the "death of the mainframe" in order to transition customers to their new product. It took years of negotiations, modernization of the mainframe infrastructure and pricing changes for SAP R/3 to "return" to the mainframe, and it was only a partial return: the database entirely, then the data access methods and some of the applications. The presentation layer has expanded beyond the desktop into mobile and web services. The original application programming language remains the same, though it has been augmented with Java.

Let's peel the onion and look at the modernization done specifically for SAP R/3. In my opinion, it was that application, or better stated, that company, that forced IBM to change the mainframe. IBM wanted new workloads. Here was a big one that took business away. Let's win it back.

Network Performance and System Integrity

First, client/server was based on the internet and local area networks, which were rapidly transitioning to TCP/IP hosted networks, an open standard. IBM was still stuck on SNA, its proprietary network, which was actually a de facto industry standard since most systems could interoperate with it. (Today, SNA applications can run over TCP/IP; it's about layering, but that's too detailed for here.) An implementation of TCP/IP for MVS is created. Check the box. Done. Well, wait a minute, asks SAP: how well does it perform? It takes about 186,000 instructions to send a small chunk of data out and receive the acknowledgement of successful delivery. By contrast, it takes AIX about 18,000 instructions to do the same. So MVS is about 10 times slower communicating over TCP/IP than other platforms. Well, no, it's worse than that. The SAP R/3 application is very chatty. The application server makes a network connection to the database server on the mainframe, and for each end user request that results in a database call, there are actually 26 send/receive pairs per transaction. So the overall pathlength is 260 times worse than AIX for a single SAP transaction. At mainframe hardware and software price levels, that isn't going to sell well. I also made a joke of this at the time. What's Lassie, Rin Tin Tin, Benji and TCP/IP on MVS?

Three movie stars and a dog. Drum roll please. This was a bad situation. Unfortunately, this joke lasted several years. A lot of effort went into looking at alternative implementations.
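To put the chattiness in perspective, here is the arithmetic using the figures above. This is a worked example only; the instruction counts and the 26 send/receive pairs are the ones cited in the text, and the final ratio is one way to arrive at the roughly 260x figure.

```python
# Worked arithmetic for the chatty SAP R/3 flow, using the figures cited above.
mvs_per_exchange = 186_000    # instructions per send/receive pair, early TCP/IP for MVS
aix_per_exchange = 18_000     # instructions per send/receive pair on AIX
exchanges_per_txn = 26        # send/receive pairs per SAP transaction

mvs_per_txn = mvs_per_exchange * exchanges_per_txn    # ~4.8 million instructions
aix_per_txn = aix_per_exchange * exchanges_per_txn    # ~0.47 million instructions

print(f"MVS network pathlength per transaction: {mvs_per_txn:,} instructions")
print(f"AIX network pathlength per transaction: {aix_per_txn:,} instructions")

# The roughly 10x per-exchange penalty is paid 26 times per transaction;
# measured against a single AIX exchange, that works out to roughly 260x.
print(f"Penalty vs. one AIX exchange: {mvs_per_txn / aix_per_exchange:.0f}x")
```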

Anytime there is a network, data must move from one system to another. In a terminal environment, such as a 3270 screen update, it might be that only 100 bytes of data get transmitted. A video, however, could be millions of bytes. IBM, from its outset, has been great at creating architectures. Some architectures have layers of functionality; communications and networking architectures are famous for the seven-layer diagram,

where each layer has an architectural purpose: application layer, transport layer, session and network layers being a subset. IBM had an image processing application, originally developed for medical records, called ImagePlus. The performance of image processing (and later streaming) over the network was horrendous because of the original code base's rigid adherence to the layers of the architecture. The physical image was COPIED TWELVE times, once for each layer and a few times within the application, as it traveled through the operating system. Eventually, the networking software was modified so that it could copy the data from the application buffers directly to the transmission wires after a quick test of the application's data integrity (i.e., making sure there would be no buffer overflow and no attempt by an application to send system or privileged data it didn't have the authority or "need to know" to access. This data integrity testing, by the way, is inherent throughout the mainframe operating systems and middleware).
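The fix is, in spirit, what zero-copy techniques do everywhere. Here is a small Python illustration of the difference, standing in for the idea rather than the actual MVS networking code; the layer count and payload are made up.

```python
# Small illustration of copying a payload at every layer vs. passing a
# zero-copy view of the same buffer. Python standing in for the concept,
# not the mainframe implementation.
image = bytearray(10 * 1024 * 1024)       # a 10 MB "medical image"

def layered_send_with_copies(payload, layers=7):
    """Old style: each architectural layer makes its own copy."""
    copies = 0
    for _ in range(layers):
        payload = bytearray(payload)       # a full copy of the payload at this layer
        copies += 1
    return copies

def layered_send_zero_copy(payload, layers=7):
    """New style: each layer passes a view; the bytes are never duplicated."""
    view = memoryview(payload)
    for _ in range(layers):
        view = view[:]                     # a new view object, same underlying buffer
    return 0                               # zero payload copies

print("payload copies, layered style :", layered_send_with_copies(image))
print("payload copies, zero-copy style:", layered_send_zero_copy(image))
```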

So the ImagePlus benchmark was also critical to understanding how to rewrite the TCP/IP stack. If the system knew the SAP application server was running on an AIX server channel-attached to the mainframe, the new pathlength was 5,000 instructions (certain network error handling and routing could be skipped, which saved instructions). If the application server was attached via a router, it was 10,000 instructions. Hallelujah, we were better than the rest of the world.

Data Character Translation

Not yet, says SAP. Those desktops and application servers on Windows and UNIX/AIX? They like to read and present data in ASCII format. Your mainframe and DB2? They like to process data in EBCDIC format. It took seven instructions to translate each byte, so all the network gain that had just been achieved was lost to data translation. As a result, effort was put into the pathlength of data translation, and eventually it got down to three instructions per character. With the network savings, this was roughly equivalent, end to end, to commodity system performance. On price/performance, though, it still wasn't good enough. So the mainframe hardware, operating system and DB2 middleware were changed to natively support Unicode, wide characters (for character-based languages) and many other code pages. There was, and is, no longer a penalty for code conversion. Finally, after several years of work, SAP R/3 sales of hybrid computing began.
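For a feel of what that translation involves, here is a tiny example using Python's built-in code pages. cp037 is one common EBCDIC code page; the text being converted is made up, and the point of the mainframe changes described above was to make this translation cheap or unnecessary.

```python
# Tiny illustration of EBCDIC <-> ASCII/Unicode translation using Python's
# built-in code pages. cp037 is one common EBCDIC code page.
text = "SAP R/3 ORDER 12345"

ebcdic_bytes = text.encode("cp037")       # how the mainframe traditionally stored it
ascii_bytes  = text.encode("ascii")       # how the UNIX/Windows tiers expect it

print(ebcdic_bytes.hex())                 # different byte values for the same characters
print(ascii_bytes.hex())

# Round trip: every byte of every row crossing the tier boundary used to pay
# this translation cost in software (7 instructions per byte, later 3).
assert ebcdic_bytes.decode("cp037") == text
```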

New workload pricing

But the transition wasn't complete. The introduction of Linux on System z leads one to believe that the mainframe is a viable application server. Not so fast, says SAP. IBM charged software by the MIPS, or capacity, of the ENTIRE machine. So if a customer added one processor to run Linux on a mainframe that had 9 existing processors, the Linux license charge would be for 10 processors' worth of work versus the single processor actually executing it. I'm going to jump ahead rapidly, but specialty engines were created so that new workloads, such as Linux, Java on z/OS, distributed connections to z/OS databases, some z/OS system utilities and more, are charged by the actual engine or capacity of the workload rather than the entire capacity of the machine. Finally, incremental workloads could be added to the platform at or near commodity prices for the equivalent work. But we're still not done with this evolution in hybrid computing.
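A simplified before-and-after of that licensing math, using the 9-plus-1 processor example above; the per-processor price is hypothetical and the model is deliberately stripped down.

```python
# Simplified before/after of the specialty-engine licensing math, using the
# 9 + 1 processor example above. The per-processor price is hypothetical.
price_per_processor = 100_000          # hypothetical annual Linux license cost per engine
existing_gp_engines = 9                # general-purpose processors already installed
new_linux_engines   = 1                # the one engine actually running Linux

# Old model: software charged on the capacity of the ENTIRE machine.
old_charge = price_per_processor * (existing_gp_engines + new_linux_engines)

# Specialty-engine model: charged only on the engine running the new workload.
new_charge = price_per_processor * new_linux_engines

print(f"Whole-machine pricing : ${old_charge:,}")
print(f"Specialty-engine model: ${new_charge:,}")
```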

Stop copying data. Share memory instead

Avoiding data moves was critical to the success of the “new” TCP/IP within z/OS. So next, the same concept is applied WITHIN the mainframe. And this takes several iterations with continuing benefit.

In a blade server or rack-mounted system, a special backplane or top-of-rack switch may be deployed so that communications within a 'server box' can go point to point, a la the mainframe channel-to-channel interface, and avoid router overhead. For z/VM with hundreds of Linux images running, and for z/VM to z/OS within the same physical server, IBM created HiperSockets, which use dedicated hardware memory, instead of wires, to facilitate intra-physical-server communications. The most recent announcement upgraded this technology to leverage RDMA-style memory that can be shared between server images, so that only a reference to the data is transmitted. Each system has direct access, eliminating intra-server copying of the data.
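The same "send a reference, not the data" idea can be sketched on any platform. In the toy below, ordinary shared memory stands in for HiperSockets and RDMA; only the segment name travels between the two processes, and the names used are mine.

```python
# Rough analogue of "send a reference, not the data": two processes share a
# memory segment, so only the segment's name is passed between them. This is
# ordinary shared memory standing in for HiperSockets/RDMA, not the mainframe
# implementation itself.
from multiprocessing import Process, shared_memory

def consumer(segment_name, length):
    # The "receiving system" attaches to the same memory; the transfer itself
    # copied nothing. bytes() here just materializes the data for printing.
    shm = shared_memory.SharedMemory(name=segment_name)
    payload = bytes(shm.buf[:length])
    print("consumer saw:", payload.decode())
    shm.close()

if __name__ == "__main__":
    data = b"transaction record 42"
    shm = shared_memory.SharedMemory(create=True, size=len(data))
    shm.buf[:len(data)] = data                 # producer writes once

    # Only the tiny reference (the segment name) is "transmitted".
    p = Process(target=consumer, args=(shm.name, len(data)))
    p.start()
    p.join()

    shm.close()
    shm.unlink()
```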

Increased capacity and memory available for new workloads

All of the activities mentioned above result in fewer data moves within the operating systems, virtualization layers and hardware servers themselves. That avoids the instructions, real memory and latency required to do data moves, leaving processors and memory available for other workloads and allowing greater scale of the environment. And because the mainframe has been doing this for over 40 years as a balanced architecture across instructions, memory, networks and data, with instruction pipelining, many levels of memory caching and more, it is capable of putting, colloquially, 10 pounds of "stuff" in a 5 pound bag without fear of breaking. Said another way, there is no fear in running the system at 100% utilization for very long periods of time.

The “Legacy” workloads get modern as well

So that's just a snippet of the technology that went into "modernizing" the mainframe for new workloads. There is significant other infrastructure, such as the Parallel Sysplex, the Geographically Dispersed Parallel Sysplex (GDPS), the evolution from bipolar to CMOS technology, reductions in cooling, electricity and floor space, the packaging of "spare" parts in the box for failover/fault avoidance and on-demand capacity upgrades, and more that made the mainframe better for traditional workloads. Over time, those benefits have applied to the new workloads as well.

But then, how much does it cost?

Historically, the mainframe has been considered an expensive alternative to “commodity” platforms that are “just good enough”.  Much has been done to change the pricing structures.

Technology Dividends

Since the introduction of OS/390, IBM and other vendors have been providing technology dividends such that the price per MIPS has decreased regularly, with most new processor introductions.

Capacity Backup

The "z" in System z stands for zero downtime. While not reality, it is a goal, both at the technical level and through software license charges. IBM introduced Capacity BackUp (CBU) pricing for backup or disaster recovery servers: a fraction of the production price is paid for the hardware server, and no software license charges are paid. Many other vendors have adopted this model as well. Why? Because a business isn't getting any productive work out of the backup servers. When production work moves over to the CBU server, the software license charges transfer to that machine. A business may do several failovers a year to test recovery operations without incurring a license charge. Compared to "commodity" servers, this can be a tremendous savings; in those environments, multiple servers may be required and each will be charged a production license fee.

Development pricing

If a business wants to attract more workloads, it needs to cater to application developers. Those developers need a sandbox or low-priced system on which to create code. Unfortunately, software license charges being what they were, all MIPS on the mainframe were created equal and treated as production. Rational was a brand acquired by IBM. They had created some fantastic tools that worked across platforms and now included OS/390 and, later, z/OS. Applications built with those tools would require CICS, IMS, DB2 and other vendor middleware as the target production environment; however, IBM was charging production prices for those licenses even in development. Most other vendors, including Microsoft, Sun, Oracle and HP, were giving away the runtimes when they sold their development tools. As a result, the mainframe was not targeted by most vendors as a viable deployment platform because the "cost of entry" was way too high.

IBM finally released a mainframe architecture that ran on PC and Power servers, the z Personal Development Tool (zPDT), to vendors. This created a competitive cost for vendor developers to target the mainframe as a production platform. It was several years later that IBM decided to make this available to businesses and customers. Finally, there was a common tool set, Rational Developer, plus free runtimes and test environments priced competitively with "desktop" development tools. Better yet, a developer could "take a mainframe home", as the zPDT could run on a ThinkPad laptop.

Parallel Sysplex pricing

Clustering computers is an important way to scale. IBM invented the Parallel Sysplex architecture, which accomplished three things, two of them huge differentiators from "commodity" server clustering.

  1. Load balancing work across the cluster
  2. Shared Lock management of database and file records across the cluster
  3. Shared cache of the database and file records across the cluster, allowing all cluster members to have direct access to the data.

The direct access to shared locks and the data cache allows additional servers to be added to the cluster without having to reorganize or partition the data. The performance is such that there is linear scalability as each new cluster member is added, up to a maximum of 32 systems.

Software was then discounted for customers that chose to use the Parallel Sysplex as their clustered model, over a multi-system model with non-shared data.

New workload pricing

This was discussed earlier. The net is that specialty engines were defined that allowed no software license charges or deeply discounted license charges for Java, Linux and other workloads.

Hybrid Computing pricing

With the zEnterprise servers, the mainframe added the ability to directly attach an IBM BladeCenter and, in the process, gained a dedicated communications channel and a dedicated management channel between the two devices. I already described how performance can improve when channel-attached, in discussing the SAP R/3 deployment between AIX and z/OS.

The IBM DB2 Analytics Accelerator (IDAA) is a query server, running on x86 blades, that gets direct access to DB2 on z/OS. By shipping queries over to the IDAA server, read/write operations can continue on z/OS while the read-only query is parallelized across multiple x86 engines, with performance improvements that can be 300 to 1,000 times better than if run on z/OS. More importantly, the IDAA is "just an engine": security of the database and audit remain within DB2 for z/OS. Data copies over the network can be avoided. Some copying is inevitable, but flash copies can be made in moments, instead of the long-running extract, transform and load (ETL) jobs used to feed other platforms. This provides a significant advantage for near real-time analytics and enables new decisions to be made before a transaction completes. There are too many cost savings to mention, but some are: less disk space, faster response time, improved security, better scale, less network bandwidth consumed, etc.

Mobile pricing

In the 1960's and continuing through the early 1980's, it was agents of a business that executed transactions: travel agents, bank tellers, claims administrators, ATMs and point-of-sale terminals. The consumer watched. These transactions were typically short, easily measured and predictable, and the majority occurred during business hours, such as 9AM to 5PM. Hardware was priced by the size of the machine, and the size of the machine dictated the overall software license charges.

The "Internet of Things" has changed that. Consumers can now do things on their own where an agent was previously required. They can start a claim, transfer money, deposit a check, buy tickets to performances and for travel. They can do this at any time of the day or night. They can query prices as often as they wish, waiting for the "sale price" to be good enough to actually purchase something. This has dramatically grown the number of transactions being executed and has kept the Systems of Engagement and Systems of Record up around the clock. Any downtime means potential and real loss of business. In April of 2014, IBM introduced mobile workload pricing to provide a discount for these types of consumer-issued transactions. The goal is to make the monthly pricing more predictable and comparable with "commodity" platforms.

Summary

How bad it was (1990’s)

Let’s review how bad the mainframe was back in the early 1990’s.

  1. Water cooled, very large, heavy servers with large amounts of electricity and cooling required.
  2. A network that was horrendously slow at both the hardware and software levels.
  3. A green-screen, command-oriented interface (similar to the DOS shell that lived a very short while on PCs). Yes, you could "screen scrape" to make it look more graphic, but many considered that "lipstick on a pig".
  4. A communication architecture that was inherently inefficient, as it copied data between system components many times before it went on "the wire". In PC LAN terms, it was a Ring 4 implementation (and worse) instead of Ring 0.
  5. The wrong character set was used: EBCDIC, which required software changes and data conversions to and from ASCII or Unicode.
  6. Expensive software licenses for production and development.
  7. Proprietary, green-screen-oriented development tools
  8. Proprietary programming interfaces
  9. Lack of Commercial Off the Shelf (COTS) new workloads

How much better it is today

  1. Comparably sized servers to commodity systems that use less electricity and cooling.
  2. Dramatically faster network connections for inter-system, intra-system and intra-cluster communications.
  3. While the command line interface is still available, a web service oriented management interface is now available.
  4. A communication interface that passes pointers to data and shares the data rather than copying it within a system image, across virtual system images and across a cluster of systems.
  5. Adoption of Unicode, ASCII and EBCDIC as base characters simplifies data consolidation on z.
  6. Technology dividends and a wide variety of pricing options make the TCO and TCA of System z competitive with commodity servers
  7. Single, cross-platform tooling from Rational that includes mainframe, UNIX/Linux, Windows and mobile systems (via the Worklight acquisition)
  8. Portability of applications within z/OS and Linux in a variety of open languages
  9. A far broader range of COTS software is available for Linux on z and z/OS

Some additional items that are better than commodity servers

  1. Shared data access, with integrity, scale and resilience, across systems
  2. Shared Analytics and Transaction Processing to a single database (that can be shared across a cluster), while maintaining Service Level Agreements
  3. Hacker resistant (not Hacker proof) architecture that inhibits data and buffer overlays. The System Integrity guarantee has been in place since 1973.
  4. Capacity Backup licensing and acquisition dramatically reduce Disaster Recovery costs and procedures.
  5. System z hardware avoids 80% of the errors that might occur in a PC server environment, so no downtime or failover occurs for those cases. System z software and hardware work together to drive system availability to 99.999%.
  6. Workload management of thousands of applications and hundreds of thousands of client connections enables dramatic cost savings over alternative servers.
  7. Incremental addition of software workloads without installing new hardware, thanks to on-board "spares" that can be turned on, on demand.

Summary

If you haven’t considered a mainframe in the last 20 years, it’s quite understandable. But if you don’t start reconsidering it today, you are making a fundamental mistake.

The modern mainframe is greatly simplified from where I began 40 years ago. I’m happy to say I  may have had a little bit to do with that 😉  Happy programming.