
Porting an Enterprise App to System z – my experience. Part 1 of 4: The Basics

At the end of 2016 and lasting a few months into 2017, I completed a proof of concept port of a large Enterprise Application that had been running on the Amazon Web Services cloud to Linux for System z. This was a Docker-based application written in Java…so of course, it would be trivial to port. WRONG. While the application is in Java, it calls many pieces of open source code. Much of that code hadn’t been ported to System z yet or wasn’t widely adopted there. What I thought was a very simple exercise turned into a six-month effort.

What I’d like to do, via a series of blog entries, is share my experience in the hope this might help some other organization decide to do a similar porting task. While I’ve been working with mainframes for decades, this was my first Linux porting experience. So I’ll be describing how this experience helped me to Master the Mainframe, though that title seems reserved for university students.

This could be a book, but by breaking it up, it might be easier to understand.

  1. The Basics: a high-level overview of the application, the development environment, the system setup required to begin the porting exercise and the scope of the port.
  2. The Good: the people who assisted and taught me, the things that ported easily and the simplicity of getting started via the Linux Community Development System.
  3. The Bad: the new open source for System z, the modifications necessary for open source to run on z, the debug experience and the time necessary to complete the porting process.
  4. The Future and Value: regardless of the bad experience, there is great business value in getting these types of Enterprise Apps on System z.

The Basics

This entry covers the basic desktop development environment and the targeted production deployment on x86-based cloud servers. This was the vendor’s traditional development environment and the primary target of the application. I needed to fit in and work with this environment before I could ever consider doing the unique activities necessary for success on Linux for System z.

Application Overview

Because of the proprietary nature of the application and its intellectual property, I’m not going to name the vendor or application, and this overview of the workflow is simplistic, at best, so as not to give away any trade secrets. The vendor is an early-stage startup with an application that handles biometric authentication in a marvelous way. The application has a callable interface to start a request. Using cloud-based services, it communicates with the end user, does some analytics based on a number of system-defined characteristics, logs a number of things for diagnostics, audits and future analytics, and provides a go/no-go decision back to the original caller. It also has a number of applications and user interfaces to manage the cloud deployment. Finally, the vendor has an enormous test suite to emulate and automate the entire end-to-end workflow.

Development Environment

This vendor was doing all of their development for the x86 platform, originally targeting any Linux version supported by Amazon Web Services, including CentOS/Red Hat. Their first development environment used Maven tooling and pom.xml scripts that targeted deployment into Docker containers. They used GitHub to clone and manage the source code libraries within their business.

The first major effort was for me to establish a development environment on my computer and prove that I could build a workable x86 version of the code. My computer of choice was a MacBook Pro 2010 model running the latest MacOS at the time. The first thing to do was turn my Mac into a real developer’s machine. I installed Xcode, Atom, SourceTree, FileZilla and Docker, which let me look like a Linux system, edit source files intelligently, manage access to the source files, facilitate cloning of source and execute the code. There was other local software I needed to install, using a script that was provided to me. I love the Mac, as did the vendor, whose entire team used it, so that was really helpful. I then needed a VPN into their system and I was off and running. I used this setup for about two months. One thing I learned, painfully, was that the 2010 Mac was SLOOOOOWWWW. What would take them 15 minutes might take me over an hour. So I decided to upgrade to the MacBook Pro Touch Bar quad-core 16GB memory laptop. Now my work completed faster than their 15 minutes, which was a blessing. I can’t stress enough the value of a good starting point on the desktop or laptop for this type of development! It was life changing for me.
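For readers who want to script a similar setup today, here is a minimal sketch using Homebrew. The cask names and commands are my assumptions based on current tooling, not the vendor-provided script mentioned above.

    # Hypothetical macOS developer setup, roughly matching the tools named above.
    # Cask names are assumptions and may vary by Homebrew version.
    xcode-select --install                              # Apple command line tools
    brew install --cask atom sourcetree filezilla docker
    # The vendor-specific local software was installed by a script they provided (not shown).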

Open Source and Operating System Dependencies

The first version of the vendor code used CentOS/Red Hat as the target deployment environment. This code runs as over 50 Docker containers. Each container is intended to be as small, memory-wise, as possible, so that it scales well in a heavily virtualized environment. As mentioned earlier, they also used Maven and pom.xml scripts to do their container builds. Each container had a script that would gather the necessary prerequisite open source parts and their Java code and then do the build, producing an executable container. Naming conventions, versioning and more were part of these Maven scripts. About 90% of the open source code used was available in binary form as an RPM, ZIP or TAR file. Those binaries were either copied into the vendor’s library system or accessed via a URL and dynamically downloaded from the internet during the build process. I’ll get into the System z ramifications of this in the Good and Bad blog entries.
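To illustrate that pattern, here is a minimal, hypothetical sketch of a container build that downloads a prerequisite binary from a URL during the build and layers the vendor’s Java code on top. The base image, URL and file names are placeholders, not the vendor’s actual build.

    # Hypothetical sketch of the build pattern; names and URLs are placeholders.
    FROM centos:7

    # Java runtime for the vendor's code
    RUN yum -y install java-1.8.0-openjdk-headless && yum clean all

    # Prerequisite open source pulled in as a pre-built x86 binary at build time
    RUN curl -fsSL -o /tmp/somelib.tar.gz https://example.com/somelib-x86_64.tar.gz \
     && tar -xzf /tmp/somelib.tar.gz -C /opt \
     && rm /tmp/somelib.tar.gz

    # The vendor's Java service, built by Maven and copied into the image
    COPY target/service.jar /opt/service/service.jar
    CMD ["java", "-jar", "/opt/service/service.jar"]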

This is the development environment in which I began the first phase of the port. The prototype I was building was only for a functional test to prove the code could work. We intended to accomplish our test goal with only 40 of the 50 containers ported. After a few weeks of porting, we completed what we thought was a good test level of code. But then we discovered that some critical test containers were missing. Unfortunately, the vendor didn’t use the same library management rigor for their test suite, and I was going to have to re-base my code.

Rebasing the code and changing development environments

Unfortunately, that was just the tip of the iceberg in changes. I mentioned this was a startup vendor. They had two very large customers testing the code when I started. Early on, they realized they had a scaling problem. They also realized they had some development inefficiencies. When you get a Red Hat, SUSE or Ubuntu distribution, there is a lot of software in the package, much like getting the z/OS operating system, MacOS or Windows. As such, the base image of a large Linux distribution can start at 250 MB and easily be over 750 MB. When you add hundreds of virtualized containers, each having that size as its basic footprint, the overall system runs out of memory pretty quickly. However, if the base image can start at 18 MB and run at about 50 MB, then greater scale is possible. Around the time development of this application began, the Alpine Linux distribution emerged, and it met the small-size requirement. The vendor began to rebase all of their test code, and as much of the open source code as they could, on Alpine to take advantage of this reduced memory footprint. That was and is an excellent business decision on their part.

Maven is a fairly complex environment for building Docker containers. It works. Both the vendor and I proved that it could work. However, in addition to open source code, there are now open Docker containers that can be leveraged as-is in place of an open source binary. To do that with Maven, though, the Docker definition files of these open containers must be cut, pasted and modified into the Maven script syntax. And each time the container definition changes in the open source world, the Maven scripts need to be hand-modified. So the vendor dropped Maven as the base for their container build environment and switched to using Docker build definitions directly. Again, I applaud the vendor for doing this. It simplified the development environment, gave them access to additional open source code repositories and made everything easier to manage.
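The simplification is easy to see in a sketch. With plain Docker builds, an open community image can be reused directly as the base, rather than copying its definition into the Maven script syntax. The image tag below is illustrative, not the vendor’s actual choice.

    # Illustrative only: an open container image reused as-is as the base.
    FROM openjdk:8-jre-alpine

    COPY target/service.jar /opt/service/service.jar
    CMD ["java", "-jar", "/opt/service/service.jar"]

A single docker build command then replaces the Maven plugin configuration, and updates to the open base image are picked up on the next build rather than by hand-editing scripts.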

The unintended consequence of the vendor’s change from Maven to pure Docker, and from CentOS/Red Hat to Alpine, was that I had to start the port all over. I’m going to save the details of that for the Good and Bad entries, as they are directly applicable to System z.

As far as Linux for x86 cloud environments go, this vendor has a world-class development environment, working to create the most reliable, secure and efficient application possible. Ultimately, those attributes must apply to System z deployment as well. I’ll be covering that status in the other blog entries.


Porting an Enterprise App to System z – my experience. Part 2 of 4: The Good

I provided a simplistic overview of what I intended to port to Linux for System z in Part 1. The original application was built for x86 systems. As such, all binaries are built to run on x86 systems. The Docker containers that these applications run in are x86 binaries as well. So my job was to create the Linux for System z (aka S390X) binaries, with as little change as possible.

I also mentioned that this was a startup vendor I was working with. I had done some business work to show them the value of porting the application to System z, but they were neither skilled in System z nor able to afford their own. So I challenged them to let me prove this could be successful, and they took me up on it and agreed to work with me.

Vendor Development Team

While a small development organization, they still had over 25 very proficient programmers and testers. I was extremely fortunate to have their lead developer as my mentor. He and I would meet at the same time for an hour every day to check on progress, educate me or diagnose any problems I might have so that I could make progress the next day. Most important, he was learning about the mainframe and was as intrigued by the possibility of business success as I was, so it was a great experience for both of us. I greatly appreciate the time and effort he put in to make this a success.

Linux Community Development System for z

Where do you find a mainframe? You ask the Community Development team. Eva Yan at IBM was instrumental in approving the vendor and me for access to Docker containers on the mainframe. Cindy Lee at IBM and her team were fantastic in showing me where all the open source for z was available in the community, and Martha McConaghy at Marist College, the host for vendor access to the LCDS, was terrific in helping me keep the system running.

Docker is a great place to work with portable code. My development environment was an x86 Docker environment that pointed to the S390X Docker on the LCDS system as the target deployment environment. I’m not going to spend time giving you the details of the setup, but suffice it to say it all works well.
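The general shape, though, is simple: the local Docker client can point at a remote engine via the DOCKER_HOST environment variable, so the build runs on the S390X engine while the build context comes from the laptop. The host name below is hypothetical, and at the time this typically meant a TLS-protected TCP socket rather than the ssh:// transport newer Docker clients also support.

    # Hypothetical example: point the local Docker client at the remote S390X engine.
    export DOCKER_HOST=tcp://lcds.example.edu:2376
    export DOCKER_TLS_VERIFY=1          # client certificates provisioned separately

    # The build context is sent from the laptop, but the build and the resulting
    # image live on the S390X engine, so base images and binaries must be s390x.
    docker build -t vendor/service:s390x .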

Scalable Virtualization

I didn’t mention before that the vendor is on a different continent. So imagine, from my laptop: a VPN to the vendor’s libraries where some code is downloaded and merged with code on my desktop; Docker on my desktop puts all the parts together, ships it securely to the Docker image on the mainframe, does the build and sends the results back to me. If this process took 10-15 minutes on my laptop, suffice it to say that when you add up the networks, the bulk distribution of code between systems and the build itself, it’s going to take more time than on a single system. Doing a single container build for the first time was never correct. My mantra, for years, has been “Next time for sure!”. I’d fix what needed fixing, get a little farther the next time, repeat the mantra and try again until, finally, I’d get a successful build. The time or performance isn’t a problem when building a single container. It’s when you build 40-50 containers at once, or as I liked to call it, “The Big Bang”. Then it was hours to do the build on the mainframe, instead of an hour on x86. You’d think that was the bad, right? It was actually good, because after a call to Eva requesting some more memory and processors, I moved to a very competitive deployment environment. Just like my MacBook 2010, which was under-configured for this scale of development, the initial Linux system I was given was an under-configured virtual machine. With a simple configuration change, within moments of my request and with literally no downtime, I was up on a larger Linux image, thanks to the magic and wonders of the underlying scalable z/VM server.

Open Source Access

The LCDS virtual images came with a Red Hat base and some optional software included, but that was all. I needed several dozen pieces of open source software to add to my environment to build my S390X binaries. Again, I didn’t want to spend the money on a supported Linux distro for this proof of concept. I was directed to Sine Nomine Associates, and in particular to Neale Ferguson. He could not have been a better ally in this effort. First and foremost, he pointed me to libraries on their servers where I could retrieve many of the binaries that were necessary. It was such a relief to find many of the RPMs I needed on their website. As mentioned earlier, I was a newbie to this kind of porting. He spent considerable time mentoring me on both basic Linux and System z specifics to keep me moving along. Just as important, Neale was on the Docker bandwagon. He’d begun building Docker containers with specific functionality. I was able to take several of his containers and embed them into the containers I was building to simplify my deployment.

The Linux community also has GitHub repositories of System z-ready open source code. I bookmarked those pages and visited them often. I’ve included the links in a bibliography in Part 4.

The real dilemma came when the vendor switched from CentOS to Alpine as the base Linux image. Alpine was very new in late 2016 and early 2017. While both are Linux derivatives, the syntax for packaging applications is different, so Docker builds for CentOS are different from those for Alpine. Because I was doing a proof of concept, it really didn’t matter whether I used CentOS or Alpine. However, the longer my porting took, the more of their code the vendor converted to Alpine, so any changes I made to support CentOS would be throw-away work.
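The difference shows up directly in the Docker build files. Here are two illustrative fragments for the same placeholder dependency; the package manager, package names and even the underlying C library (glibc vs. musl) differ between the two bases.

    # CentOS/RHEL-style step ("libfoo" is a placeholder package name)
    FROM centos:7
    RUN yum -y install libfoo && yum clean all

    # Alpine-style step: apk packages, built against musl rather than glibc
    FROM alpine:3.6
    RUN apk add --no-cache libfoo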

Worse than that, there was only one person even trying Alpine on the mainframe, and that was “some college kid” doing a research project. How could I build an enterprise application on a system that one unpaid person was supporting? That person was Tuan Hoang, and I am indebted to him. He was a Marist College student. I began contacting him late in 2016. While he had the kernel ported, very few Alpine packages had been ported to S390X. He was quickly up to the task. I gave him a list of high-priority packages. Each night, I’d get an update on what he had completed. Each day, I’d build some more containers off his evening’s work. It got to the point where the only packages he hadn’t handled were third-party open source packages. This really got my development effort going. But the best news of all came at the end of my project. Tuan had worked so hard to get his “prototype” of Alpine for System z going that the Alpine community accepted S390X as a primary target platform. All Alpine packages would be available on S390X simultaneously with their availability on other hardware architectures. It was painful, but it was wonderful at the same time.

Good people make life easier

What I found throughout this porting effort is there is a wonderful community of people dedicated to the support and value of System z. They were very accommodating and helped reduce my efforts greatly.

Porting an Enterprise App to System z – my experience. Part 3 of 4: The Bad

As I explained in Part 1 (The Basics) and Part 2 (The Good), I did a proof of concept port of an Enterprise Application from Amazon Web Services on x86 to Linux on System z in 2017. The good news was that I got to the point I needed to; the bad news was that it was more than difficult to get there.

Linux is not Linux

Open source is open source…available to anyone. The story goes that Linux is Linux. Close, but not quite. Unfortunately, chip architecture differences (big endian vs. little endian, for one) are among the many differences, and some code needs to change to handle them. There are also supported platforms, “tolerated platforms” and unsupported platforms. This is the problem with Linux on System z. The marketing hype is that all of Linux is supported on z. The reality is somewhat different. Not necessarily insurmountable, but you had better know what you are getting into.

Supported Platforms

When Linux on z is a supported platform, the packages for System z are supported in binary format, such as an RPM file for CentOS/Red Hat or an APK file for Alpine. This is the best case and makes development on S390X on par with other platforms like x86 and ARM.

Tolerated Platform

In this case, the code may work on S390X, but it’s a source code build. You can find instructions on GitHub for S390X on how to modify the code to get it to work on the platform. But if you want to use that code, it can take a long time to:

  1. Do everything necessary to manually modify the code, and
  2. Run the build to create a binary.

Let me use an example. Couchbase is the NoSQL database preferred by the vendor I worked with. Someone within IBM maintains a script on GitHub to help others leverage a particular release of Couchbase. Since Couchbase is constantly coming out with new versions, those edits need to be constantly updated. I would have preferred a binary version of the code, but IBM doesn’t do binaries…they only do source. And in order to make Couchbase work, there are prerequisite source modifications necessary to Go, Python, cmake, Erlang, flatbuffers, ICU, jemalloc and V8 JavaScript. Manually doing everything that’s necessary takes a few hours. I took all of these changes and built a Docker build file, several hundred lines long, to automate the Couchbase build by doing all this work. When I ran this container build, it took over an hour to complete. I had to do this many times before I got the automation to work properly. And that automation is only good until the next release comes out. In comparison, with an x86 RPM, this takes a couple of minutes and the Docker build file is about 15 lines long. In the end, I got what I needed, but the level of effort to get there was tremendous. I also mentioned container memory size in Part 1. This Couchbase container on z was over 1 GB. That put a tremendous strain on Docker and we found a few bugs as a result. The size was a combination of Couchbase and all the prerequisite code needed to build it. So I had to modify the Docker build to delete all the prerequisite code, which included source, binaries and documentation. This got the container down to a more reasonable execution size.
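For comparison, a multi-stage Docker build is one common way today to get the effect I achieved by deleting the prerequisites at the end of the build: compile Couchbase and its toolchain in one stage, then copy only the runtime artifacts into a clean base. This is a hedged sketch, not the actual build file; the image names, packages and paths are placeholders, the real build steps come from the S390X instructions on GitHub, and multi-stage builds require a newer Docker than was available on z at the time.

    # Hedged sketch only: placeholder image names, packages and paths.

    # Stage 1: build Couchbase and its prerequisites from source
    FROM example/s390x-base AS builder
    RUN yum -y install gcc gcc-c++ make cmake git golang python erlang
    # ...clone and build steps from the S390X instructions go here...

    # Stage 2: copy only the runtime artifacts into a clean base image
    FROM example/s390x-base
    COPY --from=builder /opt/couchbase /opt/couchbase
    CMD ["/opt/couchbase/bin/couchbase-server"]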

BTW, when I complained to IBM leadership about the lack of support for Couchbase, they suggested I use a different, easier product that was available on z. Since I was porting and not a true developer, this was not a possibility for me. I had begun negotiations with Couchbase toward this goal, but stopped working on it when the prototype ended.

Unsupported Platforms

There were two cases where neither the open source community nor the Linux on z community had guidance on how to get a particular open source program onto the mainframe. In those two cases, I was able to work through the code successfully and get a binary for System z. The good news was that it was pretty simple to do. I was quite fortunate. If it hadn’t been easy, this could have ended the project earlier than I had hoped.

Docker containers are not portable across hardware architectures

I’ve seen some hype that once you get an application into Docker, it’s portable to any Docker. I’ve heard a few mainframe customers believe any Docker container can run on System z. I’ve also seen articles in IBM-sponsored magazines that purport this to be true. This is a combination of marketing hype and misunderstanding. It all depends on the container’s architecture, binaries and source code. Typically, a container image built for a particular architecture, such as x86, should run under Docker on any x86 platform, even if the host is a different operating system running Docker. For example, Docker running on an x86 version of Red Hat 7.3 could be running containers based on Red Hat, SUSE, Alpine, Ubuntu, etc., as long as they were built for x86. Similarly, I ran Docker on a Red Hat 7.3 image for Linux on System z and had CentOS and Alpine containers running with binaries for S390X.
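A quick way to see this for yourself is to run the same image tag on both hosts and ask for the machine architecture. Today most official images are multi-architecture manifest lists that resolve automatically; at the time of the port, s390x images often lived under separate namespaces such as s390x/alpine.

    # The same command, run on two different Docker hosts
    docker run --rm alpine uname -m
    #   on an x86 host:      x86_64
    #   on a System z host:  s390x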

The only containers with source code that were portable were those built exclusively with interpreted languages, such as Java or Python. Those could be portable across hardware architectures. Many of the test cases used by this vendor fit into that category. However, as soon as one of those interpreted languages makes a call to open source middleware (e.g. Couchbase), the container is no longer portable across architectures, because the middleware is not supported across architectures.

Docker Stability

When I started this project, Docker on z was pretty new. Once in a while, it would have issues. Only a couple of times did it require Marist College to restart my z/VM guest. The other times, it would automatically recycle itself and get running again. I believe it has improved since we began the port effort, but it’s been a few months since I tried it. I’ve heard from others, though, that the experience is better. During our Big Bang builds, we would peg each of our System z processors at 100% busy for a few hours. The fact that the system would stay up and continue processing through those large builds is a testament to its reliability.

Ultimately, I have a wishlist for the Open Source Community on z:

  1. Where source code changes are necessary, such as with Couchbase described earlier, supply a Docker build file to automate them for anyone who wants to do the build. It would be so much faster.
  2. Continue to lobby third-party open source middleware providers to support System z. In many cases, it takes a vendor, such as the one I was working with, to create that business case jointly to get it done, but doing so will lead to more usage on the platform. If you build it, they will come.
  3. Create more binary packages instead of source code update files. It greatly reduces the development time necessary for z-unique porting. The more extra work necessary to support z, the less likely the x86 people will move there.

The net of all this bad is that the initial effort to support the mainframe takes longer than it would on x86. However, if you have the patience to get to Part 4: The Future and Value, you’ll find that you should be rewarded for the effort.

Porting an Enterprise App to System z – my experience. Part 4 of 4: The Value and Future

In Part 1 (The Basics), Part 2 (The Good) and Part 3 (The Bad), I explained that I did a proof of concept port of an Enterprise Application from Amazon Web Services on x86 to Linux on System z in 2017. The good news was that I got to the point I needed to; the bad news was that it was more than difficult to get there. But why did I go there in the first place?

The vendor of the Enterprise application was targeting the financial services industry for their initial deployments. That industry is the primary customer base for IBM System z. Their beta customer runs z/OS transaction processing via CICS but wants to authenticate customers using this vendor’s product running on Amazon Web Services. In order for CICS to call the AWS cloud, it has to launch WebSphere on z/OS to call the vendor’s service on AWS. The vendor’s application has to do its task of authenticating users and get all the way back to CICS in less than 18 seconds so the transaction doesn’t time out. It’s a really powerful use of the vendor’s application and valuable to both the consumer and the financial institution in avoiding potential fraud or cybersecurity scams.

Java and Analytics run better on z/OS

I was told this vendor wrote all their code in Java, so I immediately began a plan to get it running within z/OS, since Java runs so well there, especially on the z14 systems. I also knew that in the 18 seconds allotted for the round trip to AWS, only three biometric/analytic tests could be completed on behalf of the consumer. I hypothesized that if the vendor app ran within z/OS, perhaps up to ten analytic tests could be completed, given the outstanding analytics and Java performance there. However, once I learned how many open source middleware programs were required and the complexity of porting them to z/OS, I chose Linux on System z as the target of the port.

Linux on z as a private cloud has more value than a public cloud

Using RDMA as the memory-based communication between z/OS and Linux LPARs, I knew it would take a bit more time than running inside z/OS, but much less time than going to a public cloud, so I hypothesized that eight analytic tests could be done instead of the three on AWS. And regardless of a z/OS or Linux on z implementation, the vendor agreed that the software price would be the same as on AWS. The net is that z would have additional analytic value and, given its hardware and software integrity and reliability, it would offer better security and business resilience than any public cloud provider.

So that’s what I set out to prove. Sadly, I got very close and then the vendor changed their business strategy. They received a significant new round of venture capital investment, signed up several new financial firms to try their code and decided to stick with their current cloud plan and stay off the mainframe, for now.

I still believe that my hypotheses as to the performance and value were correct. But the activity ended just before I was able to prove that. However, the exercise did confirm the possibility of getting the product on the mainframe successfully.

Docker inside z/OS? That would simplify things!

But what else is possible? I said in Part 3 that Docker containers are not portable across architectures. However, they are portable within the same architecture. There are some prototypes underway for Docker to run within z/OS. Given the way Docker works on other platforms, that would imply that any Linux on z containers could run unmodified within z/OS. If Docker for z/OS were to run on a zIIP processor, there would be no software license hit for z/OS. If that all comes to pass, it could lead to significant transaction and analytic value within z/OS and greatly simplify the system management requirements for these types of hybrid workloads, while improving the overall security, resilience and performance and reducing operational costs. I would hope that a public announcement of this capability is not too far in the future.

Savings and Operational Strengths

That, my IT friends, is a win for everyone. Any of the bad associated with a slightly more complex development environment can quickly be offset by a greatly reduced operational expense and operational benefits greater than any alternative architecture might try to demonstrate. This type of workload makes for a very compelling end-to-end benchmark comparison as well. So while I didn’t succeed in getting the enterprise application to market, that was because of a business decision rather than a technological impediment. And the business decision was tactical, based on their new financials.

I learned a lot and documented many of the shortcuts I took and the setup required to make this development effort possible. I’m happy to share the experience if you’d like to undertake your own development effort. While I thought the end of the project was a failure, its unintended consequence, thanks to the efforts of the great Linux for z community identified in Part 2, is that porting will be easier for everyone who follows.

Bibliography

LinuxONE and Linux on z Systems Open-source Team

LinuxONE Developers Works

Neale Ferguson’s pre-built Docker containers for z

GitHub repository of S390X open source scripts. From this page, search for the package you are interested in.


Miraculous cure for IT system bottlenecks!

What’s a bottleneck? From Dictionary.com, it’s “a narrow entrance, spot where traffic becomes congested”. In IT terms, it’s something that causes slower operations or inhibits a Service Level Agreement (SLA) from being met. The worst-case scenario is that a lot of IT shops are absolutely confident they don’t have bottlenecks because they are meeting or exceeding their SLAs. They couldn’t be more wrong!!!

There are a wide variety of traditional methods for identifying bottlenecks. On an IBM mainframe, a business might use IBM’s Omegamon, BMC’s MainView or CA’s SYSVIEW. On a desktop, it could be as simple as Microsoft Task Manager or Apple’s Activity Monitor. On networks, there are many tools. At home, you might wonder if your ISP or internal network is running well, so you’d try Ookla’s speedtest.net. In the cloud, there are monitors for Amazon Web Services, IBM Bluemix, Microsoft Azure and Google Cloud.

Yet none of these will find the modern IT system bottleneck. When you have an IT system bottleneck, there’s always someone to blame. But who is it? Is it the system programmer’s fault? Is it the application developer’s fault? Is it the asphalt? Oops, wrong punchline. No, it’s the system architecture’s fault. It’s a 1990s mentality that looks at IT in operational silos and manages the systems independently. But hang in there for another moment. There is a cure.

The 1990s methodology bases IT operations on server silos. The mainframe is managed independently from the Unix servers, which are independent of the x86 servers, which are separate from cloud and mobile and desktop and network. Security is done for each domain. Business resilience is done for each domain. Budgets are created and departments compete for more spend in their particular area. Some areas might claim they have a bottleneck and warrant more spending to resolve it. Next budget cycle, they’ll still have issues and want more.

Another type of siloed operation is looking at separate systems for Record, Insight and Engagement. Systems of Record are the master databases and the transactional systems that update those databases (e.g. credit/debit, stock sales, claims, inventory, payments). Systems of Insight are the analytic systems (e.g. fraud detection, sales opportunity, continuous flow delivery, tracking). Systems of Engagement are the human-computer or Internet of Things (IoT) interfaces (e.g. mobile, IoT device, tablet, browser). Many businesses create silos to manage each of these areas independently, because if you had tried to combine them in the 1990s, you’d have hit a bottleneck or driven IT costs too high. Funny how the systems of the 1990s actually created the hidden bottleneck of today! But it can be fixed.

Where can you buy the “fix” for this? Is it via a software product? No. A hardware product? No. Cloud? No. Consulting services? Maybe. But the reality is that every business can solve this pretty easily within their own environment. I guarantee that your business can far exceed current SLAs and establish new business goals. In the process, your business can save tremendously on IT expense, while improving security and business resilience. The solution is pretty simple.

Stop copying data between systems! In the new API economy, all of the systems have been modified to allow direct access to applications and data from other systems. The change is philosophical and/or organizational for most enterprises. It’s all about managing the IT systems together instead of as separate silos. That starts at an architectural level with hybrid development systems and extends to hybrid operational systems that address end-to-end security, business resilience and performance.

If you’ve moved data to another server to keep the Systems of Record separate from the Systems of Insight, stop the move. Keep the data together. Systems like IBM’s mainframe are now capable of hosting both databases and analytics in a single system, improving analytic performance many times over separate Systems of Insight without impacting the SLAs of the transactional systems. The applications that access the Systems of Insight can easily be modified, via updated device drivers, to point to the Systems of Record instead, without changing any code logic. This changes things like batch analytics, which might be used for fraud detection, into real-time analytics that can be used for fraud prevention. And in the process, businesses will save on the storage, network bandwidth, system utilization, costs and time associated with copying the data. Products such as Rocket’s Data Virtualization Studio can provide the device drivers and mappings necessary for applications to share data from a variety of Systems of Record, across platforms. And new apps can be developed to join data from different sources, including partner organizations or “the cloud”, to solve business problems in new and creative ways. These applications wouldn’t be possible without sharing data. Apache Spark technology is one means of collaboration across data sources.

There is no reason to copy data to move it closer to, or tailor it for, a specific System of Engagement. The API economy allows applications to directly access the data or transactions on other systems. New pricing options are available that allow for increased transaction rates, due to direct access from mobile, at a lower cost than traditional access methods. z/OS Connect is one of the tools for making the API connection between mobile and transactional systems.

Regardless of how you might transform your business, the unintended consequence of standing still on current siloed IT operations is bottlenecks and slowdowns in business systems that depend heavily on copying data and on the batch windows that facilitate that copying. Direct access to data and devices is the future. The future is now. Begin the migration to hybrid operations management. If you need help in deciding how to look at your architecture differently, don’t hesitate to ask me.



Webinar April 15th: Mainframe Security – How good is it? Unfortunately – only as good as the End User device accessing it

Vicom hosts a Lunch ‘n’ Learn Webinar presented by Raytheon

April 15, 2015 12-1PM EDT

Call in: 888-245-8770 passcode 206580

Presentation Slides will be posted here prior to the call

Presentation Abstract:

For years, the IBM mainframe has been the benchmark for secure transaction and database processing. It’s considered hacker resistant, via a hardware and software architecture that inhibits buffer overflows, which are the entry point for many Trojan horses, viruses and worms.

The modern PC, smartphone and tablet are rife with malware and identity spoofing. As long as an end user is the systems programmer for these devices, there will continue to be problems. If a userid can be spoofed on the end user device, there isn’t much to prevent an attacker from accessing the back-end servers of all types to which these devices may be connected. Businesses spend enormous sums trying to detect problems and attempting to better manage these devices.

Raytheon Cyber products take a different approach. They compartmentalize infrastructure to create a more secure computing environment, e.g., separating Internet traffic from internal business systems. They’ve simplified operations so that end user behaviors and server access barely change. The result is an environment that prevents malware intrusions and data theft. Detection products are nice, but how much will a business spend on unplanned forensic efforts and brand-loss marketing should a theft occur? Raytheon’s approach simplifies the hybrid deployment model, reduces the risk at back-end servers, such as the mainframe, and can help lower overall security deployment costs.

This session will introduce the “battle tested” Raytheon Cyber products to commercial customers. It will demonstrate how compartmentalization of networks, data and applications can simplify end-to-end operations while preventing attacks. It will show how their technology is complementary to existing hybrid infrastructure. They’ll also introduce some of the future deployment models they are considering to further prevent attacks on electronic business.

Presenters’ Bios:

Jim Porell is a retired IBM Distinguished Engineer. His IBM roles included: Chief Architect of Mainframe Software (10 years), Business Development lead for the mainframe (3 years), Security and Application Development marketing lead (3 years), and Chief Business Architect for IBM Federal Sales (2 years). He’s presently a partner at Empennage, developing its marketing and investment possibilities. Jim is also on the advisory boards of the startups Callsign and Malcovery. He’s a sales consultant to Vicom Infinity. In each of these roles, Jim is focused on the secure and resilient deployment of hybrid computing solutions across server architectures and end user devices (e.g. smartphones, tablets, PCs).

Jeremy A. Wilson is a member of Raytheon’s CTO Council and the Director of Customer Advocacy. Mr. Wilson works closely with Raytheon’s executive leadership team, focused on solving information sharing challenges for their extensive portfolio of customers, including the Department of Defense, the Intelligence Community, and civilian and commercial agencies. Mr. Wilson has over 15 years’ experience in Multi-Level Security and Cross-Domain Solutions. Prior to joining Raytheon in 2005, he served as the Chief Technology Advisor and Architect for both SAIC and General Dynamics. In these roles, Mr. Wilson held a vast number of responsibilities, such as system design, technical assessments, security and policy auditing, strategic planning, proposal generation, and certification and accreditation. Mr. Wilson has spoken at a number of technical events and sessions and is a member of the Armed Forces Communications and Electronics Association (AFCEA), National Defense Industrial Association (NDIA), Association of Information Technology Professionals (AITP), and the Information Systems Security Association (ISSA).

Mainframe Security – How good is it?

Two things about security that have been true for a long time:

  1. The Mainframe is the most secure platform in the industry
  2. Security is about People, Process and Technology

These are not mutually exclusive concepts. What’s important to realize, though, is the mainframe shouldn’t get a “Pass” on security processes because of its reputation. Unfortunately, I have encountered some really poor security management practices associated with mainframes at a wide variety of customers. These poor practices have put those businesses at risk. Mainframe technology and architecture can inhibit a number of security issues that other platforms regularly encounter, such as buffer overflows that enable viruses and Trojan Horses. However, poor management of data access, protection and audit can lead to data loss, theft and network attacks.

The Mainframe is NOT Hacker proof

There, I’ve said it and I’ve said it before. Mainframe systems have been hacked. There is one obscure case where some very old open source code was running on a mainframe. That open source code had been successfully hacked on other platforms. The attackers used a similar technique to get “inside” a business. Network attacks that drive a denial of service have also been successfully initiated. Both of these types of attacks were clearly avoidable. The worst attacks have come from insiders. Sometimes by accident, but unfortunately, sometimes on purpose. Not unlike Edward Snowden and Wikileaks, insiders have released confidential information stored on mainframes. In each of these cases, better security practices and the use of additional products and monitoring could have inhibited these data thefts.

Who is responsible for Mainframe Security?

Another problem that I’ve seen is a lack of diligence by a business over the people managing the mainframe. The worst-case scenario dealt with an outsourcer. The outsourcer mistakenly assumed that the mainframe was “hacker proof”. The owning business assumed the outsourcer knew what they were doing and didn’t audit the security of the systems, nor the outsourcer’s people running them. It turns out the outsourcer wasn’t auditing the security either. In this instance, network ports were left open that gave attackers a way into the system. Data sent over the network wasn’t encrypted, allowing attackers to sniff for critical data. Perhaps worst of all, many of the systems programmers had operational access to modify all system datasets without going through change management.

It doesn’t mean that anything negative happened at this customer. But it certainly could have happened. And even within that particular outsourcer’s business, they had other customers that were properly protected. This seemed to be a local anomaly. The lesson learned, however, is that the owning business should own audit responsibilities and the analytics associated with their security operations and data protection. Simple tools could be deployed and routinely run to “health check” their operation. Either the outsourcer or the business can run those tools, but the business should check the results as a normal part of grading their outsourcer.

“Knowledgeable” People may be your worst enemy and largest risk

Social engineering is a terrific mechanism for getting access to restricted data. I’m not going to give examples, other than to say that asking the right questions of the right people can result in acquiring data or privileges that someone isn’t supposed to have. Again, it’s the human element at work in circumventing security. One of my favorite “social engineering” episodes was visiting the security director at a large bank. Physical security was tight. The team there asked if I could break into the building. My normal answer is never on the first try, but usually on a second pass it is possible. As I left the cold building in December, I realized I had left my jacket in the director’s office. The security guard told me to go back up to the third floor unescorted. I was in the director’s desk chair when he returned from the restroom. An isolated incident? Unfortunately not. That’s just my favorite example, without the details.

Replicated data doesn’t mean that security control is replicated

It doesn’t matter what platform or server you are using for a database. Far too many businesses make a copy of production data so that the Quality Assurance team can do system modifications and stress test before making updates to a production system. Application developers might also have access to this production data to test the next iteration of a business application. QA and development may be outsourced to another business. That business could be in another country. Without audit or security management of the test and development copies of the data, there is the very large possibility of data theft or leakage.

The weakest link is the End user interface – PC’s, Smartphones, Tablets

Plenty is known about the security problems associated with end-user devices. Management of those devices falls on the owner of the device. Where Bring Your Own Device (BYOD) is allowed, management practices will differ across the many different devices. Unfortunately, users save the userids and passwords of back-end servers on these devices. As a result, any server these devices access is at risk of mismanagement or spoofing of the end user’s credentials if a device is stolen or hacked.

There is hope. Collaborative security should be the norm.

Earlier posts of mine discuss collaborative and hybrid computing. This is as important with security as it is for business resilience, storage management and application development. By looking across the IT infrastructure, a business can identify risk more clearly than a business that has fiefdoms protecting their smaller domains. Analytics, audits, identity management and data protection done across the IT infrastructure will help a business reduce risk and save on overall costs.

Don’t let security by obscurity result in the unintended consequence of data loss. Stay vigilant. Keep an eye on your systems, your system administrators and your users.