Category Archives: Mobile Computing

Miraculous cure for IT system bottlenecks!

What’s a bottleneck? Dictionary.com defines it as “a narrow entrance, spot where traffic becomes congested.” In IT terms, it’s anything that slows operations or keeps a Service Level Agreement (SLA) from being met. The worst-case scenario: plenty of IT shops are absolutely confident they have no bottlenecks because they’re meeting or exceeding their SLAs. They couldn’t be more wrong!

There are a wide variety of traditional methods for identifying bottlenecks. On an IBM mainframe, a business might use IBM’s Omegamon, BMC’s MainView or CA’s SYSVIEW. On a desktop, it could be as simple as Microsoft Task Manager or Apple’s Activity Monitor. On networks, there are many tools. At home, you might wonder whether your ISP or internal network is running well, so you’d try Ookla’s speedtest.net. In the cloud, there are monitors for Amazon Web Services, IBM Bluemix, Microsoft Azure and Google Cloud.

Yet none of these will find the modern IT system bottleneck. When you have an IT system bottleneck, there’s always someone to blame. But who is it? Is it the System Programmer’s fault? Is it the Application Developer’s fault? Is it the asphalt? Oops, wrong punchline. No, it’s the System Architecture’s fault. It’s a 1990s mentality that looks at IT in operational silos and manages the systems independently. But hang in there for another moment. There is a cure.

The 1990s methodology bases IT operations on server silos. The mainframe is managed independently from the Unix servers, which are independent of x86 servers, which are separate from cloud and mobile and desktop and network. Security is done for each domain. Business resilience is done for each domain. Budgets are created and departments compete for more spend in their particular area. Some areas might claim they have a bottleneck and warrant more spending to resolve it. Next budget cycle, they’ll still have issues and want more.

Another type of siloed operation is looking at separate systems for Record, Insight and Engagement. Systems of Record are the master databases and the transactional systems that update those databases (e.g. credit/debit, stock sales, claims, inventory, payments). Systems of Insight are the analytic systems (e.g. fraud detection, sales opportunity, continuous flow delivery, tracking). Systems of Engagement are the human-computer or Internet of Things (IoT) interfaces (e.g. mobile, IoT device, tablet, browser). Many businesses create silos to manage each of these areas independently, because if you had tried to combine them in the 1990s, you’d have hit a bottleneck or driven IT costs too high. Funny how the systems of the 1990s actually created the hidden bottleneck of today! But it can be fixed.

Where can you buy the “fix” for this? Is it via a software product? No. Hardware product? No. Cloud? No. Consulting services? Maybe. The reality is that every business can solve this pretty easily within its own environment. I guarantee that your business can far exceed current SLAs and establish new business goals. In the process, your business can save tremendously on IT expense while improving security and business resilience. The solution is pretty simple.

Stop copying data between systems! In the new API economy, systems have been modified to allow direct access to applications and data from other systems. For most enterprises, the change is philosophical and/or organizational: manage the IT systems together instead of as separate silos. That starts at an architectural level with hybrid development systems, and extends to hybrid operational systems that address end-to-end security, business resilience and performance.

If you’ve moved data to another server to keep the Systems of Record separate from the Systems of Insight, stop the move. Keep the data together. Systems like IBM’s mainframe are now capable of hosting both databases and analytics in a single system, improving analytic performance many times over separate Systems of Insight without impacting the SLAs of the transactional systems. The applications that access the Systems of Insight can easily be modified to point at the Systems of Record instead, via updated device drivers, without changing any code logic. This turns batch analytics, which might be used for fraud detection, into real-time analytics that can be used for fraud prevention. In the process, businesses save on the storage, network bandwidth, system utilization, cost and time associated with copying the data. Products such as Rocket’s Data Virtualization Studio can provide the device drivers and mappings necessary for applications to share data from a variety of Systems of Record, across platforms. And new apps can be developed to join the data from different sources, including partner organizations or “the cloud,” to solve business problems in new and creative ways. These applications wouldn’t be possible without sharing data. Apache Spark technology is one means of collaboration across data sources.
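To make the “update the device drivers, not the code” idea concrete, here is a minimal Java sketch, assuming a hypothetical virtualization JDBC driver and URL (the real driver class, URL format and credentials depend on the product in use). The SQL and program logic stay exactly as they were; only the connection details change from the copied warehouse to the System of Record.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ClaimsReport {
    // Before: the analytic application read from a nightly copy on a
    // separate warehouse server.
    // private static final String URL = "jdbc:postgresql://warehouse:5432/claims_copy";

    // After: the same application points at the System of Record through a
    // data virtualization driver. Driver, host and port are hypothetical.
    private static final String URL = "jdbc:dv://mainframe.example.com:1200/CLAIMS";

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL, "user", "pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT claim_id, status FROM claims WHERE status = 'OPEN'")) {
            // Identical query, identical logic - but now against live data,
            // so "batch" analytics become real-time analytics.
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getString(2));
            }
        }
    }
}
```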

There is no reason to copy data to move it closer to, or tailor it for, a specific System of Engagement. The API economy allows applications to directly access the data or transactions on other systems. New pricing options are available that allow for increased transaction rates from direct mobile access at a lower cost than traditional access methods. z/OS Connect is one of the tools for making the API connection between mobile and transactional systems.
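As a sketch of what the System of Engagement side of such an API call might look like, here is a minimal Java snippet issuing a REST request to a hypothetical z/OS Connect-style endpoint (the host, path and JSON shape are illustrative assumptions, not a documented API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BalanceQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical REST endpoint fronting a mainframe balance-inquiry
        // transaction. The mobile app queries it directly - no copied data.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://zosconnect.example.com/accounts/12345/balance"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // e.g. {"account":"12345","balance":250.75}
        System.out.println(response.body());
    }
}
```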

Regardless of how you might transform your business, the unintended consequence of standing still on siloed IT operations is bottlenecks and slowdowns in business systems that depend heavily on copying data, and on the batch windows that facilitate the copying. Direct access to data and devices is the future. The future is now. Begin the migration to hybrid operations management. If you need help deciding how to look at your architecture differently, don’t hesitate to ask me.


Closing the gap on technology evolution

I recently saw a blog post by one of the Federal CIOs. I can’t argue with their observations, though I think we may disagree on how to tackle the problem. That CIO is going to lay out their direction in future posts. I’m going to take a shot at my own direction in this post.

The following graph demonstrates that US Government IT is falling behind Fortune 500 firms and way behind internet startups.
(Graph from the Federal CIO study: IT curve acceptance.)

I remember having this debate with an IBM General Manager years ago, when he was considering outsourcing some operating system components on the theory that all programmers are created equal. There is a huge difference between maintaining a legacy of millions of lines of code and starting from scratch with something new. Just as important, starting over AND maintaining all the rules and regulations of the legacy is a very difficult proposition. It takes pre-existing knowledge to succeed.

This CIO faces a problem similar to that of many other businesses. It’s true for mainframes today, as it will be for Microsoft Windows and Linux systems in the future. There are millions of lines of “legacy code” in languages that are less popular today than they once were. The inference is to move away from the legacy code toward a modern language where more skills are available. As a factoid, there are more ARM chips in the market today than Intel chips. There are more applications being developed for iOS and Android than for Microsoft Windows, and that’s way more than are being developed for mainframes. That might lead someone to believe that’s the programming model of this generation. And as I’ve said in an earlier post, if your IT career began in the 1990s and you hated mainframes, you were right…at that time…

But like everything, time changes things. IBM and vendor partners have dramatically changed the mainframe into a more modern computing environment. IBM spends over $1B in R&D on each generation of the mainframe, which now comes out about every two years. I’m going to park that for a moment and go to another topic that is more relevant to the skills discussion.

Patterns

Programming is about patterns. Patterns occur at a process level, in languages and in behaviors. There are three broader patterns at work here: Systems of Record, Engagement and Insight. I’ve written about them before, but briefly: Record deals with transaction processing, Engagement deals with the end user interface and Insight is about analytics. Most programming being done today is around Systems of Engagement – taking advantage of enhancements in smartphones, wearable tech (e.g. watches and fitness devices) and the other devices that make up the Internet of Things. GPS, accelerometers, touch, voice and biometrics are just a few of the advances that improve the human-computer interface. The mainframe has avoided this programming area completely as a native interface. That makes complete sense. Ignored by many, though, is the fact that the mainframe has fully embraced leveraging those capabilities through interoperability and standard formats and protocols. They enable hybrid programming to reach out to those interfaces to simplify the deployment of Systems of Record. In addition, they’ve integrated with Systems of Insight to enable real-time analytics to be applied to traditional Systems of Record to reduce risk.

This link will take you to a tremendous video about the z13 server and its ability to deliver these new capabilities. Warning: it’s 30+ minutes long.

Where will the skills come from?

Another fear raised is that schools no longer teach “mainframe.” Perish the thought. While fewer schools teach mainframe skills than commodity system programming, there is a wealth of schools across the world participating in IBM’s System z Academic Initiative. Checking its website, there are three in Maryland, close to the Federal government and very close to the agency head writing the blog. But you know, “you can’t trust the marketing” materials put out by a vendor. So I went to the Loyola College of Maryland, University of Maryland Eastern Shore (UMES) and Prince George’s Community College web sites to see what they said about the IBM Academic Initiative. Honestly, the info I found was from 2011-13, other than Prince George’s, which was up to date. So I reached out to the schools. UMES responded quickly:
“First and foremost, I would like to inform you that we are actively involved in the IBM Academic Initiative. Dr. Robert Johnson is the Chair of the Department of Mathematics and Computer Science is the lead person in the initiative. Further, they are currently in the process in moving into our new $100 million Engineering and Aviation Science Building which will significantly enhance our capabilities to support the initiative.”
Here’s a brochure for their program.

Most importantly, success is not a two-way street between IBM and the schools. It’s four-way, including the businesses/agencies and the students. The best schools work with businesses to provide internships to students PRIOR to graduation. There is generally a very high success rate (close to 50%) of those students choosing full-time employment at the business where they interned. I strongly encourage any business or agency concerned about future skills to reach out to these schools and work directly with them. Experience shows that you’ll be very pleased with the results. UMES gave me their cell numbers; reach out to me if you’d like a direct introduction.

Adopt New Technologies and dump the old?

The collective wisdom of the Federal CIOs seems to point to new technologies as the “future” of programming. The referenced blog points to Uber, Siri and Facebook as examples of such applications and suggests they may be irrelevant in five years (see Myspace as an example). New technologies grow up in a vacuum. There is no maintenance legacy. That doesn’t mean the legacy can’t work with them, though. A prior blog entry looks at 22 emerging technologies, their relationship to the mainframe and how hybrid computing can solve new business problems.

Let’s consider one piece of the new, cool tech referenced: Uber. I happen to have a chauffeur’s license (a story for another time) and am very familiar and active with livery legislation. The Uber mobile application is actually very simple and easy to recreate. What makes Uber successful is its business model and practices. It hires drivers as contractors, so there are no tax consequences for Uber, and it avoids the bureaucracy of livery laws.

There is a state law that enables the New York City Taxi and Limousine Commission (T&LC) to regulate who and what can be operated within the boroughs. This is for the “safety and comfort of passengers.” However, it’s big money. Medallions have cost up to $750,000 just to put a cab on the street, and the T&LC limits the number of medallions. Cars from outside the T&LC are not allowed to make more than one stop in the city. They cannot pick up a passenger at an airport if they dropped that passenger off more than 24 hours earlier. The T&LC has 250+ officers in unmarked vehicles that follow and intimidate non-T&LC livery vehicles in the city. I witnessed a stretch limo being impounded by the T&LC when an upstate livery firm dropped off passengers returning from a NYC funeral at a NYC restaurant before traveling north. The second stop was illegal. In any event, other states (CT and NJ) got upset with this bureaucracy. They lobbied, and a Federal law resulted that allows other states reciprocal rights to operate without joining the T&LC. But upstate livery can’t participate. The NY Assembly and Senate have had to modify laws to create T&LCs in neighboring jurisdictions to allow reciprocal rights in NYC locations. Rockland, Nassau and Westchester counties have T&LCs now. This is the third year that Dutchess and Ulster have had legislation enabling reciprocal rights up for a vote. The NY Assembly has passed its legislation, but the NY Senate hasn’t. Last year, they decided to wait on Dutchess and Ulster until they figured out how to allow Uber and Lyft to operate in NYC exempt from the T&LC bureaucracy. That legislation has now been created and will be voted on soon.

The T&LC makes revenue by selling taxi medallions and collecting tax on fares. Uber and Lyft disrupt those economics. Livery vehicles pay $3,000 per year for insurance. Uber and Lyft cut deals with insurance companies to lower that to $600 per year to make their drivers more competitive. The drivers must also have personal insurance on the cars for when a fare isn’t present. Laws are now being enacted to allow “Transportation Network Companies” (TNCs, as Uber and Lyft are generically called) to get “fair access” to markets in NY without this bureaucracy. I’ve developed an app that will qualify a “local” livery company to operate as a TNC, to reduce its costs and, in turn, the cost to consumers. Will the government allow that? Will the Dutchess and Ulster laws pass? This is more about big money, venture capital and paid lobbyists getting to the legislative leaders than about the small livery companies trying to stay in business. We’ll see if the legislation and the bureaucracy will enable the small livery services to morph into mini-Ubers. The legislation enables the Commissioners of Insurance and Motor Vehicles to regulate the “TNC” businesses; it doesn’t prescribe how that will be managed nor how much it will cost. By the way, did you notice that the legislation for Uber includes a lighted icon in the front and rear of the car to identify it? That’s as much for passenger safety as it is to make it easier for the T&LC police to pull over the cars if the legislation doesn’t pass. Not much likelihood of that, though, given the amount of money changing hands in Albany.

Long story short – Uber is more about business processes than it is about new applications.

Past Technology Evolution Examples

Going back to the graph, there is much to learn from prior experiences of the Fortune 500 and government agencies introducing new technology.

Learn from the Fortune 500 – the good:

Benefits processing: Hewitt Associates and Fidelity continuously advance their capabilities. They provide integration with employer payroll systems. They have up-to-the-minute accuracy of consumer records. They provide immediate access to accruals and eligibility. They’ve adopted web and mobile technologies as Systems of Engagement, including biometric security authentication.

Claims processing: Travelers Insurance has historically reduced IT and people expense by 10% annually while improving response times. Claims agents leverage mobile technology at accidents and disasters as input to “legacy” systems.

Learn from the government – the good:

The FBI and VA leverage mainframe virtualization to avoid millions of dollars in IT costs over commodity systems, while improving security, resilience and service level agreements. They run the same code in a different container with a superior operations model and lower costs.

All of the above use hybrid technology that includes the mainframe.

Learn from the government – the bad:

Marine Corps – hosted by an IT supplier that gouges them on mainframe costs: three times what it would cost to host it themselves. The IT supplier takes the floor space, energy and cooling costs for an entire data center and bills them only to the mainframe users. The IT group claims commodity systems wouldn’t be affordable if they were “taxed” with those costs. That’s why understanding Total Cost of Ownership is a critical success factor when comparing mainframe and commodity system costs. Unfortunately, regulations are in place that mandate that the Marine Corps use that particular IT supplier. Other groups have bucked that policy to save money.

US Postal Service – they were not competitive in package tracking vs. UPS and FedEx. They realized they needed new applications and wanted modern programming to build them, including new engagement systems in the delivery vehicles via mobile technology. That’s the good. The bad: they spent hundreds of millions of dollars on redundant “commodity” IT infrastructure and copied key data and applications from the mainframe to host the new applications, while leaving the mainframe running. Testing and benchmarking have demonstrated that adding the new applications to the existing mainframes would have avoided millions in costs and operational complexity, while simplifying the architecture and improving SLAs. With package shipping volumes increasing annually, they’ve continued to upgrade the mainframe each year. They are simply spending too much overall. While they collaborate between the systems by moving data, they could save more if they shared the data in real time.

Prescription for change

While a prescription for change is forthcoming in the CIO’s future blogs, let’s hypothesize some changes for their benefit.

Modernization of the development environment

Rational tools move mainframe application development to commodity systems. This moves 80% of the development off the mainframe to reduce IT costs. The tools help modernize and document the “legacy” applications and simplify their maintenance. They provide seamless testing against the mainframe and the other deployment platforms of choice. One large business has 1,000 Java developers for commodity systems, 400 COBOL programmers for the mainframe, and 50 developers familiar with both Java and COBOL to enable hybrid programming and integration. All use the same Rational development front end. From a skills perspective, mainframe development can now look and feel exactly the same as development on commodity systems. This eases the skills and knowledge requirements to get started.

Language modernization:

COBOL copybooks – the means of defining data structures – are now sharable with web services, and those services can be launched from COBOL. More on that in a moment.
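To illustrate the sharing, here is a hypothetical copybook and a matching Java class of the kind that copybook-to-JSON tooling can generate; the field names and types are invented for the example:

```java
// Hypothetical COBOL copybook describing a payment record:
//
//   01 PAYMENT-REC.
//      05 ACCT-ID   PIC X(10).
//      05 AMOUNT    PIC 9(7)V99.
//      05 CURRENCY  PIC X(3).
//
// Tooling can derive a JSON schema from a copybook like this, so a Java (or
// mobile) client and a COBOL program exchange the same record without anyone
// re-keying the data structure by hand.
public class PaymentRec {
    public String acctId;   // ACCT-ID   PIC X(10)
    public double amount;   // AMOUNT    PIC 9(7)V99
    public String currency; // CURRENCY  PIC X(3)
}
```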

Chip Speed

The System z13 server runs its processor cores at 5GHz. Benchmarks show that Java runs faster there than on any other platform; the video referenced earlier provides specifics. With direct access to databases and files, business applications can perform better than on other architectures. Add fault tolerance and an improved hardware and software security architecture, and the result is a very price-competitive hosting environment for new workloads.

Risk and Fraud analytics

Financial services businesses are doing real-time analytics in the middle of their System of Record transaction programs to assess risk and avoid fraud. Leveraging the copybook capability, they can call out to the 1,000+ processors in the IBM DB2 Analytics Accelerator (IDAA, built on Netezza technology), which has been tied into the mainframe, to speed time to resolution.
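As a rough sketch of what such a callout can look like from the Java side, assuming a DB2 for z/OS connection (the connection details and table are hypothetical), the SET CURRENT QUERY ACCELERATION special register is DB2’s mechanism for routing eligible queries to the accelerator:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RiskScan {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://mainframe.example.com:446/DB2PROD", "user", "pass");
             Statement stmt = conn.createStatement()) {

            // Ask DB2 to route eligible queries to the accelerator (IDAA).
            stmt.execute("SET CURRENT QUERY ACCELERATION = ENABLE");

            // A historical scan that would be too slow inline in a transaction
            // can now return fast enough to inform a real-time risk decision.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT AVG(amount) FROM transactions WHERE acct_id = '12345'")) {
                if (rs.next()) {
                    System.out.println("Average historical amount: " + rs.getDouble(1));
                }
            }
        }
    }
}
```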

Callsign – a biometric authentication and fraud prevention technology – can leverage a modern smartphone to identify the owner/user of the device before they actually answer a challenge, which could be a fingerprint, facial recognition or voice. Using the accelerometer in the phone, the GPS and the pressure points on the touch pad, along with historic behavior patterns, Callsign can tell by the way a person is holding a phone whether it’s the original user or someone else, before offering them the authentication challenge. This type of technology can be used at kiosks in regional/branch offices to enroll users and make sure they are the real person requesting later service. No need for a card; a unique user id is sufficient for authentication. True, many low-income users/beneficiaries may not have a smartphone. Alternative mechanisms can be deployed for challenge/response authentication, but providing a low-cost device to beneficiaries for this purpose – a more modern version of the “RSA token devices” – might reduce overall costs for low-income users. Watch this space.

One of the Callsign customers, a large credit card processing bank, is calling out to Callsign from a “legacy” mainframe transaction program to authenticate that the real customer is at the point-of-sale or ATM device requesting service. Compare that to an experience I had recently. Visiting 500 miles from home, I went to a big box department store and paid with a valid credit card. The card was good, but the transaction was denied. I then used a debit card – same bank, same credit card service – but with my PIN code. The transaction was approved. As I walked out of the store, I got a call from the credit card provider asking if I had just attempted to use the card. They restored my card to service immediately. The Callsign capability eliminates the human intervention, lowers my embarrassment and speeds transaction processing.

Going a step further: Callsign runs on Amazon Web Services (AWS) or a private cloud today, so the transaction systems call out to it over a distributed connection. There are about 15 “risk tests” that can be done, but typically only three can be run, and the results fed back, in the time allowed for a transaction to complete. We’ve hypothesized that if Callsign were running on a mainframe, with a memory connection to the transaction programs, 10 risk tests could be done while maintaining the service level agreement of the “legacy” transaction programs. Stay tuned for future updates in this space.
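The latency budget is the crux: only the risk tests that finish within the transaction’s window can contribute to the decision. Here is a minimal Java sketch of that pattern; the three stub tests, the pool size and the 50 ms budget are illustrative assumptions:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class RiskChecks {
    public static void main(String[] args) throws Exception {
        // Hypothetical risk tests, each returning a score in [0,1].
        List<Callable<Double>> tests = List.of(
                RiskChecks::geoVelocityCheck,
                RiskChecks::deviceFingerprintCheck,
                RiskChecks::behaviorPatternCheck);

        ExecutorService pool = Executors.newFixedThreadPool(tests.size());
        // Run the tests in parallel, bounded by the transaction's latency
        // budget; tests that miss the deadline are cancelled and simply
        // don't contribute to the decision.
        List<Future<Double>> results = pool.invokeAll(tests, 50, TimeUnit.MILLISECONDS);
        pool.shutdown();

        double totalRisk = 0;
        int completed = 0;
        for (Future<Double> f : results) {
            if (!f.isCancelled()) {
                totalRisk += f.get();
                completed++;
            }
        }
        System.out.printf("%d tests completed, average risk %.2f%n",
                completed, completed > 0 ? totalRisk / completed : 0.0);
    }

    static Double geoVelocityCheck()       { return 0.10; } // stub
    static Double deviceFingerprintCheck() { return 0.20; } // stub
    static Double behaviorPatternCheck()   { return 0.05; } // stub
}
```

A memory-speed connection shrinks each test’s round trip, which is exactly why co-locating the risk engine with the transaction programs could fit ten tests in the same window instead of three.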

The NSA has proven that leveraging a Google-like search capability can help stop attacks. Why not use web crawling software to look for fraud and overpayments? Leveraging online obituary information, an insurance company or benefits provider could determine whether a person has died and is no longer eligible for services. In addition, it could predict the services that may be available to that person’s survivors, speeding the deployment of payments to them. These web crawlers can feed a data warehouse searching for past fraud, but also feed real-time systems to avoid fraud in new transactions.
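As a minimal sketch of that crawler idea, assuming a hypothetical obituary feed URL and sample beneficiary names (a production crawler would respect robots.txt, parse structured markup and queue matches for human review rather than acting on them):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class ObituaryScan {
    public static void main(String[] args) throws Exception {
        // Hypothetical obituary feed; a real crawler would paginate and
        // parse structured fields instead of raw text.
        String page = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder()
                        .uri(URI.create("https://obituaries.example.com/recent"))
                        .GET()
                        .build(),
                HttpResponse.BodyHandlers.ofString()).body();

        // Match the fetched text against active beneficiaries (sample data).
        // Hits are flagged for eligibility review, not acted on automatically.
        List<String> beneficiaries = List.of("Jane Q. Example", "John R. Sample");
        for (String name : beneficiaries) {
            if (page.contains(name)) {
                System.out.println("Flag for eligibility review: " + name);
            }
        }
    }
}
```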

Collaboration is necessary to move forward:

Education: partnerships between vendors, businesses/agencies and schools are necessary to create the next generation of IT professionals (programmers and operations) as well as to update the skills of existing personnel.

Operations: today, fiefdoms around individual architectures or administrative domains create and foster conflict and drive up IT costs. Not everyone is going to get along; organizational politics and budgets have as much to do with fiefdoms as anything. Following the Rational developer example, where a small group of people has some hybrid responsibility, can lead to breakthroughs in processing schemes.

Legislation: where necessary, this can enable a leap toward something new that provides value and reduces costs.

Summary

There is no right or perfect answer to any IT decision. But throwing the baby out with the bathwater, as the saying goes, isn’t a good approach and leads to unintended consequences. Leveraging a hybrid computing, operational and development environment can make a large shift toward “modern” application models. Happy programming!

Secure Hybrid Computing – White paper

I’ve written several times about hybrid computing, data patterns, reducing costs, etc. Well, it’s all wrapped up in a white paper now, available at this site: HybridComputing.CaseStudy.Whitepaper.V1

Readers of this blog know that it is mainframe-centric. They might be surprised that there is no mainframe – nor, for that matter, any other platform – mentioned in the paper. Why? It’s not about technology. It’s about business: saving money and growing profits. If that doesn’t get attention, I don’t know what will. When systems are built together collaboratively, instead of as fiefdoms, any business can save on a wide variety of server platforms.

In the paper, you’ll see that IBM and Vicom Infinity are working together on infrastructure that can be used for demos or proof-of-concept development to put these ideas to work. Don’t hesitate to reach out to me for additional information.

So get started thinking differently. You can start with this blog and the white paper. Happy programming.

Apple and IBM Mobile solution deal could be a boon to z/OS as well

IBM has announced that their strategy is built around Big Data analytics, cloud and mobile computing. Apple has made an art of mobile computing through its iOS devices and App Store. It’s clear that IBM wants to capitalize on that success and that Apple wants to further entrench itself in the enterprise.

Going back in time, this isn’t the first collaboration between IBM and Apple. The most famous relationship was Apple’s use of the PowerPC chip in earlier Macintosh computers.

Lesser known was a consortium, led by Oracle and including IBM, Apple, Sun, Alcatel and Netscape, to promote the Network Computer Reference Profile – what would now be considered a thin client computer. This was circa 1996. The consortium was created to help attack the “lock” that Windows and Intel had on corporate desktops. The Network Computer was to be a less expensive alternative to PCs and be centrally managed.
I had the pleasure of demonstrating a thin client connecting to an IBM mainframe at the launch of the reference profile at the Moscone Center in San Francisco. The original IBM thin client was developed with the AS/400 organization (now System i) in Rochester, MN. They really understood the value and importance of these devices, as the 5250 terminal had just been declared a “dead technology” by IBM and the AS/400 needed a new connection device. I was enamored (and still am) of the ability of a thin client to provide the best type of access to a server, with graphics and multimedia support, while leaving the device stateless. In other words, no data is downloaded to the device, so there is less risk of loss or misuse of corporate data. I worked closely with Apple to ensure they had a 3270 emulator available for businesses to connect via Macintosh desktops.

Ultimately, the consortium failed. The chips were too slow. It was before Linux became ubiquitous for embedded devices. There were too many home-grown embedded operating systems that weren’t extendable nor easy to update with the latest web browsers and terminal emulators. They were probably ahead of the curve. Now, thin client computers are affordable and an attractive alternative to PCs. I’ll be writing about mainframe possibilities in a future post.

Back to IBM and Apple. It may seem odd, but there’s a link to IBM’s mainframe operating system, z/OS, as well. z/OS began in the 1970s as the MVS operating system. IBM sold a base operating system and then a collection of piece parts on top of it: storage management, communications, transaction processing, databases, compilers, job schedulers, etc. These products were individually purchased, managed and operated, and quite a collection of support personnel was required back then. In the 1990s along came OS/390, and now its successor, z/OS. The same products are now integrated into a single system. This simplified installation, customization, ordering and management. It reduced problem resolution time and improved uptime. IBM has kept a predictable release schedule that demonstrates business value in each release. In many cases, the cost of a release remained the same or less than the prior release, to ease adoption and provide the customer with a technology dividend.

Let’s compare that to the evolution of Apple’s MacOS. There’s an awful lot of function in that single operating system image as well. At the kernel level, it can run the bash shell and look a lot like a Linux system. A tremendous amount of middleware ships at no additional charge with the system. Recently, I was trying to run FTP and some other basic, UNIX-like commands on Microsoft Windows. I found that where the code was available, it was not comparable, and getting a level of support comparable to the Mac would require buying additional software. At the end user interface, there is a tremendous amount of functionality. The installation of new applications is simple. Systems management, at a high level, is fairly simple, but can be very sophisticated when it needs to be for advanced users. Apple’s iOS mobile system inherits many of these characteristics as well, but on a different chip – the ARM chip rather than the 64-bit x86 Intel chip. The ARM chip is a topic for another day.

Both z/OS and MacOS get labeled as expensive, out of the box, when compared to other systems. Each suffers from a lack of understanding of the true cost of ownership of its hardware and software. Both have excellent security reputations. Both run at higher levels of availability than the systems with which they are compared. Looking inside those statements, a business must recognize that technology certainly contributes to those values, but so do people and processes. Neither system is hacker-proof, nor truly fault tolerant. However, through wise use of technology, including close collaboration between hardware and software development and automation of processes, each of these systems can handle difficulties and risks that other “comparable” systems cannot. It’s always been hard to put a price on downtime or on avoiding planned and unplanned outages. While it’s true that some folks complain that the skill set necessary to operate a mainframe is “different” from other platforms, there are generally fewer people required to manage a given amount of work on a mainframe than the comparable amount of work on other systems. The reality is people can learn to work with anything, as long as they are allowed.

One of the biggest differences between the IBM mainframe and Apple systems is that IBM no longer supplies the end user device used to operate and work with the mainframe. There is no “face” to the mainframe. Originally, the mainframe was accessed by a 3270 “green screen” command-line terminal or, worse, punch cards. Now 3270 emulators are available on just about every “smart device” there is, but the 3270 interface remains a boring command-line interface out of the box. Graphics and multimedia support, as well as touch pads and speech recognition, are now state of the art for end user computing; z/OS needs an upgrade in that arena. New technologies, such as the z/OS Management Facility, have built-in web access for a variety of functions, and IBM and independent software vendors have taken advantage of this capability to further improve the management experience of z/OS and reduce the skills it requires.

There can be even greater synergy in the areas of transaction pricing and application development.

Back in May, IBM announced new mobile workload pricing for z/OS. Given the volume of new mobile applications, and the fact that many of them might be just a query (e.g. account balance, order tracking) and not revenue generating for a business, IBM is offering discounted pricing and lower measured-usage pricing for these types of transactions. This could be a significant benefit for existing customers and for future transactions developed to exploit mobile technologies. The measurement tool began shipping on June 30th.

The IBM Rational organization, responsible for application development tools, acquired Worklight, a product tailored to enable rapid deployment across a variety of mobile operating systems from a single code base. Now integrated with the suite of Rational products, it lets businesses build applications that support back-end transaction processing and database serving on the mainframe while providing new end user experiences on a wide range of smart computing devices, including phones, tablets and desktops.

The only thing that might stop this integration of the best of both worlds is a business itself. By isolating development, deployment and operational teams in platform silos (e.g. mainframe vs. Unix vs. PCs vs. BYOD), a company could have the unintended consequence of blocking integration and overall improvement of its end user experience, both internally and for its customers.

Here’s hoping that the Apple-IBM mobile collaboration is significant enough to get businesses to open up their apertures so that true integration of systems can occur, from the end device all the way through to their business-critical systems. The reality is that the end-to-end workflow is the mission-critical business, not just one server or device vs. another. When fewer systems are involved in the overall integration of business processes, a business should see benefits in overall security, availability and easier compliance with business initiatives. Happy programming.