A partnership for growth and innovation

Building on a history of collaboration between the two companies, HCL and IBM have entered into a 15-year partnership that draws on their shared knowledge and teaming experience to build industry-leading experience-enhancement, automation and DevOps solutions.
Automation and DevOps software are crucial to the success of digital enterprises. Most organizations recognize that these tools enable them to react better to today’s dynamic environment of new cloud platforms and cognitive solutions. With automation, organizations can efficiently manage their workloads through planning, processing and analyzing information technology events. DevOps is now an essential part of the software development process because it improves the speed, quality and predictability of development projects. Experience enhancement is the ability to modernize existing business processes to provide a richer, more ergonomic workflow for organizations and their customers.
HCL and IBM recognize these needs and have committed to investing in and growing this portfolio of experience, automation and DevOps products, platforms, and UI modernization/API/web service enablement for mainframes. We want our clients to succeed and are committed to empowering them to respond to this rapid pace of innovation.
So what does this partnership look like? Collaboration is key. HCL and IBM will work together on future product roadmaps for Tivoli Workload Scheduler, Rational Testing, Rational Change & Configuration Management (ClearCase, ClearQuest, Rational Synergy and Rational Change), Rational Modeling & Construction (Rational Rose, Rational Software Architect and Rational Application Developer), Rational Business Developer (RBD), Rational Asset Manager, Rational Build Forge and Rational Automation Framework software for on-premises, hybrid and public cloud and software-as-a-service (SaaS) platforms. The same kind of focus will be applied to the roadmaps for the Host Access Client Package (HACP) offering, the UI modernization and web service enablement offering of Host Access Transformation Services (HATS), the problem determination capabilities of Fault Analyzer for z/OS and File Manager for z/OS, CICS Interdependency Analyzer for z/OS (CICS IA), CICS Deployment Assistant for z/OS (CICS DA), CICS VSAM Recovery for z/OS (CICS VR), and Tivoli Asset Discovery for z/OS (TADz).
Clients who are using these products will continue to engage with IBM as the primary partner. HCL has licensed IBM technology and will build additional features and functionalities based on business realities and client priorities. Both companies will work to grow the market for these products.
This partnership will enable us to focus our technical teams on native cloud and cognitive solutions, as well as industry-specific opportunities. Development and support teams from IBM, who are expert in these products and offerings, will benefit from HCL’s strong engineering heritage and culture of empowerment. Together, we will continue to advance and cloud-enable existing software, which is essential to the long-term success of our clients.
Both companies have enjoyed a longstanding and successful partnership across technology, software, services, and most recently, IBM Bluemix cloud platform services. With a history of collaboration and innovation, we’re excited about the opportunity to jointly address our clients’ need for transformational leadership. There is no doubt that HCL + IBM is a powerful combination – and a big win for our clients.
The post A partnership for growth and innovation appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

IBM leadership in cloud: Building on a legacy of excellence

The market for cognitive insights delivered via cloud to specific industries is estimated to grow to roughly $2 trillion by 2025. Given this huge opportunity, I’m not surprised that some IBM competitors are realigning their sales forces to target industry verticals.
While there are several reasons why companies choose IBM over our competitors, we believe the most important is that IBM is built to support the enterprise. IBM has built its cloud with Watson to be a highly differentiated platform focused on the needs of businesses and industry verticals. Most of all, it is optimized from the chipset up to serve the user experience. This allows clients to integrate data from their existing systems with other kinds of data in the cloud and use Watson to make sense of it.
Moreover, the nearly 60 IBM cloud data centers in 19 countries – including four new centers announced this week – are incredibly important. The breadth and location of our data centers is critical, as global enterprises navigating the transition to the public cloud are increasingly subjected to many different regulatory requirements related to security, privacy, governance and other key issues.
Enterprise clients require – and the IBM platform uniquely provides – controlled access to their data sets. This includes data locality – transparency of where their data is, who is using it, what it’s being used for, and data isolation to ensure their data is not intermingled with others’ data.
IBM also has the industry expertise to meet strict regulations across various industries while creating custom solutions for its customers. For example:

The IBM Cloud is designed for industries with stringent regulatory requirements including HIPAA, GxP, and QMS.
For industries such as automotive, aircraft and consumer electronics, the IBM cloud platform supports moving half a petabyte of data per day to and from connected devices.
Today, IBM Cloud supports some of the world’s most notable brands, including American Airlines, AT&T, Bitly, BMW, Bombardier, Maersk, Chubb, Clarient Global, Etihad, Kaiser Permanente, Lloyds Banking Group, Halliburton, Pratt & Whitney, Shop Direct, US Army and Wanda Group.

While other providers are just now reorganizing around industries, IBM is already well positioned in key sectors, delivering industry-specific cloud solutions to clients:
Education: IBM and Sesame Workshop are working with Georgia’s Gwinnett County Public Schools, one of the top urban school districts in the US, on an initial pilot of the industry’s first cognitive vocabulary learning app, built on the IBM and Sesame intelligent play and learning platform. The new platform, powered by IBM Cloud, enables an ecosystem of software developers, researchers, educational toy companies and educators to tap IBM Watson cognitive capabilities and Sesame Workshop’s early childhood expertise to build engaging experiences to help advance children’s education and learning.
Retail: IBM announced that it is a pilot partner of BMW CarData. BMW CarData will allow up to 8.5 million BMW customers globally to make use of third party services in a secure and transparent way. BMW is the first OEM to release an open data platform with the introduction of BMW CarData. As a pilot partner, IBM has integrated the IBM Cloud with the BMW CarData platform. Vehicle data will be enhanced by IBM Watson Internet of Things (IoT) using cognitive and data analytics services to enable third parties, such as automotive repair shops or insurance companies, to develop entirely new customer experiences.
Health: Memorial Sloan Kettering clinicians and analysts are partnering with IBM to train Watson Oncology to interpret cancer patients’ clinical information and identify individualized, evidence-based treatment options. Through this partnership, IBM and Sloan Kettering are working to decrease the time it takes for the latest research and evidence to influence clinical practice across the broader oncology community.
Manufacturing: Bombardier announced it is extending its long-term partnership with IBM through a new six-year deal valued at approximately $700 million, which includes IBM Services and IBM Cloud management of Bombardier’s worldwide IT infrastructure and operations.
Financial services: IBM signed a 10-year cloud services agreement with Lloyds Banking Group in the UK with a contract value of $1.69 billion. IBM will provide dedicated cloud offerings hosted securely in both Lloyds and IBM data centers in the UK and will manage the application migration services to their new cloud.
Transportation: American Airlines announced this quarter it will move to the IBM Cloud and use it as the foundation for its digital transformation. American will migrate critical applications including aa.com, its customer-facing mobile app and its global network of kiosks.
These are just a few of our client wins over the past quarter. As we continue to strengthen our position as the enterprise cloud leader, we remain committed to innovating and providing cutting-edge solutions based on emerging technologies across multiple industries.
Because every day IBM remains committed to creating new, customer-centric solutions that are enterprise strong, data first, and most importantly, cognitive at the core.


How the new, next-gen IBM Z helps you get more from your cloud

Earlier this week, IBM pulled back the curtain on the new IBM Z mainframe. This is a dazzling piece of technology, designed to be the world’s most powerful transaction system. It includes impressive integration capabilities, leading encryption technology and astounding performance.
Sounds impressive, right? But what does all this have to do with cloud? The answer is a whole lot. I’ve been working in hybrid cloud for more than a decade, and I am excited about how IBM hybrid cloud capabilities can combine with this new technology to help your business achieve an additional edge against the competition.
Many of you already understand why a powerful mainframe is so important. But for those who don’t, here are three key ways that IBM Z can help you get the most out of your cloud.

Security: If you’ve paid attention to the news lately, you know that certain cloud providers have been having some security hiccups. Trust is crucial in a world where new threats are constantly emerging. Industry-leading encryption from IBM Z already protects the world’s leading banks, retailers and insurers. The latest version takes these trusted technologies and builds on them with substantial improvements to encryption and security. I’m particularly excited about how this technology can work with MQ to provide secure data connection for Blockchain and other interactions.
Blazing speed, flexibility and integration: Another benefit of hybrid cloud is flexibility. IBM Z caters to clients who need to move information to and from the cloud quickly and securely. Using IBM Z and Aspera software, clients will be able to perform transfers between IBM Z and cloud at maximum speed. Clients will also be able to use cloud integration tools to manage, publish and promote APIs created within IBM Z.
Unmatched performance: There are countless examples of how IBM Z can help you improve performance in your IT operations, so for now we’ll focus on one high-profile example. With WebSphere, Java workloads run even faster. This performance gain means that your organization can get the most out of tools from IBM cloud without losing the flexibility of a hybrid cloud solution.

These are only a few of the many exciting ways that IBM Cloud tools can build on the IBM Z launch to help improve your IT operations. In an increasingly competitive IT environment, moving to the cloud is no longer enough. It takes a combination of cloud, on premises and integration technology to build the hybrid solution your team needs to win.
To learn more about how to use new mainframe technology to get the most out of your cloud, visit the new IBM Z product page and stay tuned for the upcoming webinar series. If you have any questions about my team or IBM Cloud products, feel free to continue the conversation by leaving a comment or connecting with me on LinkedIn.
 

Moodpeek measures mobile reputation with a Bluemix-based PaaS solution

According to Mobile Business Insights, mobile devices are the primary means of accessing digital information for consumer and business use. How are mobile users accessing information? Ninety percent of the time, it’s through mobile apps. Smart Insights states that consumer preference for mobile apps rather than mobile sites should be considered when defining a mobile strategy.
What if a company’s reputation depended on its mobile applications?
Open created “Moodpeek,” a cloud-based data and analytics tool to answer this question.
Measuring mobile reputation
Open is a leading provider of digital and IT services, including application development, mobile services, infrastructure management and platform design. The company’s leaders believe that mobile applications are a brand’s digital showcase. According to the Mobile Usage Barometer, in France today, 64 percent of mobile app users believe that app quality impacts brand image.
Thus, mobile reputation is a major concern, and the three most important factors that contribute to selling an app are word of mouth, comments and ratings.
Since there is no way to measure word of mouth, Moodpeek focuses on comments and ratings. It’s a unique and innovative way to evaluate, monitor and control the mobile reputation of a brand based on feedback from users.
Reviews in an app store are an inexhaustible source of feedback, which is far more important than any other opinion. The user is always right. App store comments are numerous, rich and varied. Mobile users say what they like, what they don’t like and the most important thing: what they would like to change.
Deeper insights with cloud services
Open’s original project in this area was called “Open Up.” It simply assessed app reviews to provide static reporting. The company had a bigger vision and wanted to deepen insight and enable access to information through an online dashboard.
To create an updated version of its app, Open developers knew they would need tools and cloud-based data services to support the company’s advanced analytics capabilities. Requirements included unlimited scalability, since the company plans to release the solution internationally.
Moodpeek was built on the IBM Bluemix cloud application development platform. Open favored IBM technologies because the company’s leaders are familiar with open source software for application development and are impressed with how extensive the Bluemix catalog is.
Additionally, Open wanted to work with a global leader in cloud and cognitive services, and in the Bluemix platform it saw the potential to build the Moodpeek application. Open has since chosen Bluemix as its primary innovation platform for developing digital applications for its clients, building on its strong cloud partnership with IBM.
Moodpeek in action
Moodpeek helps many of Open’s clients in the French marketplace improve their mobile reputations. Moodpeek is already monitoring nearly 7,000 apps. It has analyzed approximately 6.5 million comments and close to 15 million ratings on app stores.

When a review of an app is written in the store, Moodpeek users see it half an hour later in the custom dashboard. Moodpeek customers can then determine user expectations and prioritize bug fixes, updates or adding new features. They can also watch competitors and figure out ways to differentiate.
Ultimately, listening to users gives a Moodpeek customer the opportunity to correct and perfect its apps, improve customer service and master its mobile reputation.

Learn more about IBM Bluemix.

Tuning for Zero Packet Loss in Red Hat OpenStack Platform – Part 3

In Part 1 of this series Federico Iezzi, EMEA Cloud Architect with Red Hat covered the architecture and planning requirements to begin the journey into achieving zero packet loss in Red Hat OpenStack Platform 10 for NFV deployments. In Part 2 he went into the details around the specific tuning and parameters required. Now, in Part 3, Federico concludes the series with an example of how all this planning and tuning comes together!

Putting it all together …
So, what happens when you use the CPU tuning features?
Well, it depends on the hardware choice of course. But to see some examples we can use Linux perf events to see what is going on. Let’s look at two examples.
Virtual Machine
On a KVM VM, you will have the ideal results because you don’t have all of the interrupts from the real hardware:
$ perf record -g -C 1 -- sleep 2h
$ perf report --stdio -n
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 100  of event 'cpu-clock'
# Event count (approx.): 25000000
#
# Children      Self  Command  Shared Object      Symbol               
# ……..  ……..  …….  ……………..  …………………
#
  100.00%     0.00%  swapper  [kernel.kallsyms]  [k] default_idle
           |
           —default_idle
              native_safe_halt
  100.00%     0.00%  swapper  [kernel.kallsyms]  [k] arch_cpu_idle
           |
           —arch_cpu_idle
              default_idle
              native_safe_halt
  100.00%     0.00%  swapper  [kernel.kallsyms]  [k] cpu_startup_entry
           |
           —cpu_startup_entry
              arch_cpu_idle
              default_idle
              native_safe_halt
  100.00%   100.00%  swapper  [kernel.kallsyms]  [k] native_safe_halt
           |
           —start_secondary
              cpu_startup_entry
              arch_cpu_idle
              default_idle
              native_safe_halt
  100.00%     0.00%  swapper  [kernel.kallsyms]  [k] start_secondary
           |
           —start_secondary
              cpu_startup_entry
              arch_cpu_idle
              default_idle
              native_safe_halt
Physical Machine
On physical hardware, it’s quite different. The best results involved blacklisting a number of IPMI and watchdog kernel modules:
$ modprobe -r iTCO_wdt iTCO_vendor_support
$ modprobe -r i2c_i801
$ modprobe -r ipmi_si ipmi_ssif ipmi_msghandler
Note: if you have a different watchdog than the example above (iTCO is for Supermicro motherboards), check the kernel modules folder, where you can find the whole list: /lib/modules/*/kernel/drivers/watchdog/
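To make that clean-up repeatable, the driver list can be inspected and the removal made persistent with a short script. This is a hedged sketch: the file name nfv-blacklist.conf is our own choice, and staging the fragment in a temp file first is just a safety habit, not something the post prescribes.

```shell
# Show every watchdog driver shipped with the running kernel
# (the folder mentioned in the note above).
ls /lib/modules/"$(uname -r)"/kernel/drivers/watchdog/ 2>/dev/null || true

# modprobe -r only unloads the modules until the next boot; a
# blacklist fragment under /etc/modprobe.d/ keeps them from
# returning. Build it in a temp file so it can be reviewed first.
conf=$(mktemp)
for mod in iTCO_wdt iTCO_vendor_support i2c_i801 \
           ipmi_si ipmi_ssif ipmi_msghandler; do
    echo "blacklist $mod" >> "$conf"
done
cat "$conf"
# When satisfied:
#   sudo install -m 644 "$conf" /etc/modprobe.d/nfv-blacklist.conf
```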
Here’s the perf command and output for physical:
$ perf record -F 99 -g -C 2 -- sleep 2h
$ perf report --stdio -n
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 4  of event 'cycles:ppp'
# Event count (approx.): 255373
#
# Children      Self       Samples  Command  Shared Object      Symbol                                        
# ……..  ……..  …………  …….  ……………..  ……………………………………….
#
   99.83%     0.00%             0  swapper  [kernel.kallsyms]  [k] generic_smp_call_function_single_interrupt
           |
           —generic_smp_call_function_single_interrupt
              |          
               –99.83%–nmi_restore
   99.83%     0.00%             0  swapper  [kernel.kallsyms]  [k] smp_call_function_single_interrupt
           |
           —smp_call_function_single_interrupt
              generic_smp_call_function_single_interrupt
              |          
               –99.83%–nmi_restore
   99.83%     0.00%             0  swapper  [kernel.kallsyms]  [k] call_function_single_interrupt
           |
           —call_function_single_interrupt
              smp_call_function_single_interrupt
              generic_smp_call_function_single_interrupt
              |          
               –99.83%–nmi_restore
   99.83%     0.00%             0  swapper  [kernel.kallsyms]  [k] cpuidle_idle_call
           |
           —cpuidle_idle_call
              call_function_single_interrupt
              smp_call_function_single_interrupt
              generic_smp_call_function_single_interrupt
              |          
               –99.83%–nmi_restore
   99.83%     0.00%             0  swapper  [kernel.kallsyms]  [k] arch_cpu_idle
           |
           —arch_cpu_idle
              cpuidle_idle_call
              call_function_single_interrupt
              smp_call_function_single_interrupt
              generic_smp_call_function_single_interrupt
              |          
               –99.83%–nmi_restore
Using mpstat, and excluding the hardware interrupts, the results are as follows:
Please note: one CPU core per socket has been excluded (in this case, on a system with two Xeon E5-2640 v4 processors).
$ mpstat -P 1,2,3,4,5,6,7,8,9,11,12,13,14,15,16,17,18,19,21,22,23,24,25,26,27,28,29,31,32,32,34,35,36,37,38,39 3600
Linux 3.10.0-514.16.1.el7.x86_64 (ws1.localdomain)      04/20/2017      _x86_64_     (40 CPU)

03:05:10 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:       1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       4    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       5    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       6    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       7    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       8    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:       9    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      11    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      12    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      13    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      14    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      15    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      16    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      17    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      18    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      19    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      21    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      22    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      23    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      24    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      25    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      26    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      27    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      28    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      29    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      31    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      32    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      34    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      35    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      36    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      37    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      38    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
Average:      39    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
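Typing that 36-entry CPU list by hand is error-prone. As a small sketch, it can be generated instead; the housekeeping cores 0, 10, 20 and 30 below are an assumption inferred from the gaps in the output above, so adjust them to whatever your cpu-partitioning layout actually reserves for the host:

```shell
# Cores reserved for the host OS (assumption; adjust to your layout).
housekeeping=" 0 10 20 30 "

# Emit every core from 0-39 that is not in the housekeeping set,
# then join them with commas for mpstat -P.
cpus=$(for c in $(seq 0 39); do
           case "$housekeeping" in
               *" $c "*) ;;            # skip housekeeping cores
               *) echo "$c" ;;
           esac
       done | paste -sd, -)
echo "$cpus"
# Then run: mpstat -P "$cpus" 3600
```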
Cool, right? Want to know more about Linux perf events? Check out the following links:

Linux perf Examples: read both the web page and the presentation, and listen to the YouTube video
SCALE13x: Linux Profiling at Netflix
Tracing and Profiling – Yocto Project
Tutorial – Perf Wiki

Zero Packet Loss, achieved …
As you can see, using tuned with the cpu-partitioning profile is exceptional: it exposes a lot of deep Linux tuning that usually only a few people know about.

And with a combination of tuning, service settings, and plenty of interrupt isolation (over 50 percent of the total settings are about interrupt isolation!), things really start to fly.
Finally, once you make sure PMD threads and VNF vCPUs are not interrupted by other threads, allowing for proper CPU core allocation, the zero packet loss goal is achieved.
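One quick way to sanity-check that pinning, as a sketch: ps can show the CPU each thread last ran on, and taskset (from util-linux) prints a thread's affinity list. The pmd name filter below assumes OVS-DPDK's usual PMD thread naming.

```shell
# Show the CPU (psr column) each OVS PMD thread last ran on; on a
# correctly tuned host these never leave the dedicated cores.
ps -eLo tid,psr,comm | awk '$3 ~ /^pmd/'

# Print the affinity list of any thread id; we query the current
# shell here only to demonstrate the output format.
taskset -pc $$
```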
Of course, there are other considerations such as the hardware chosen, the VNF quality, and the number of PMD threads, but, generally speaking, those are the main requirements.
Further Reading …
Red Hat Enterprise Linux Performance Tuning Guide
Network Functions Virtualization Configuration Guide (Red Hat OpenStack Platform 10)

Check out the Red Hat Services Webinar Don’t fail at scale: How to plan, build, and operate a successful OpenStack cloud today! 

The “Operationalizing OpenStack” series features real-world tips, advice and experiences from experts running and deploying OpenStack.
Quelle: RedHat Stack