Red Hat Knows OpenStack
Clips of some of my interviews from the OpenStack PTG last week. Many more to come.
Source: RDO
In this briefing, Apurva Davé of Sysdig gives a demo-rich presentation on best practices for monitoring containers running in Kubernetes clusters on OpenShift.
Source: OpenShift
In this briefing, Harold Wong of Microsoft gives a quick introduction to Microsoft Azure’s QuickStart Templates and then dives into using them to deploy and configure OpenShift on Azure.
Source: OpenShift
In this briefing, Red Hat’s Hardy Ferentschik and Lalatendu Mohanty give us an overview on deploying and using MiniShift and talk a bit about the road ahead for both MiniShift and MiniKube.
Source: OpenShift
As a follow-on to my previous post, I thought it would be interesting to describe how to live-debug a Java application running on OpenShift.
Source: OpenShift
When learning about new technologies and tools, it often helps to get one’s hands just a little bit dirty and see what really makes them work.
That’s the idea behind the new Bootcamp labs at InterConnect 2017. These instructor-led labs will run three to four hours, providing enrollees the opportunity to do hands-on work with new products and technologies. Attendees can find a deeper dive in these sessions led by subject matter experts.
Here are the topics for all six Bootcamp labs:
1. Microservices-based application development mini-Bootcamp
This lab walks attendees through implementation of an application conforming to a microservice-based architecture. Attendees work with a microservice-based application in Bluemix, implement fabric components, and define backends for frontends (BFF) and API components. The application is built and deployed using a custom script to minimize errors while still allowing developers to oversee the implementation, performing a step-by-step evaluation of the architecture.
2. WebSphere 7 and 8 end-of-service and migration to WebSphere 9 and the cloud: Tools, tips and tricks lab
Prompted by the WebSphere 7 end-of-service announcement, this lab covers what is involved in migrating applications to WebSphere 9, Liberty and the cloud. It walks through best practices, steps and tools to assist in migrating the application server to WebSphere 9 and IBM Bluemix. Tools discussed include:
The WebSphere Migration Discovery tool
The WebSphere Binary scanner, which assesses complexity of applications and runtimes
The WebSphere Application Migration toolkit, which helps fix potential problems with code migration
The WebSphere Configuration Migration toolkit, which is used to extract and move a configuration
3. DevOps and CSMO mini-Bootcamp
Learn how to get development and operations more tightly integrated in Bluemix by coupling Bluemix capabilities with cloud service management and operations. Explore some of the development, monitoring and operations tools in Bluemix. Better understand service-management issues for hybrid cloud applications versus private cloud applications. See why the concept of “build to manage” is especially relevant in a cloud environment. The intended audience is anyone who has an interest in Bluemix or Service Management and wants to see how the two worlds come together.
4. Event-driven and serverless computing with IBM Bluemix OpenWhisk
In this Bootcamp lab, attendees can explore the design and implementation of applications using event-driven and serverless technologies. They can learn to compose and wire together microservice actions in response to events generated by humans as well as machines.
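To make the serverless model concrete, here is a minimal sketch of what a Python action for OpenWhisk might look like: OpenWhisk invokes main() with the event's parameters as a dict and expects a JSON-serializable dict back. The sensor fields and alert threshold here are invented for illustration, not taken from the lab.

```python
# Sketch of an OpenWhisk-style Python action reacting to a sensor event.
# The field names and threshold are illustrative assumptions.

ALERT_THRESHOLD = 75.0  # degrees; made-up value for the example

def main(params):
    """Triggered by an event (e.g. an IoT message); returns an alert flag."""
    reading = float(params.get("temperature", 0))
    return {
        "sensor": params.get("sensor", "unknown"),
        "temperature": reading,
        "alert": reading > ALERT_THRESHOLD,
    }
```

Deployed as an action, this function could be wired to a trigger so it runs automatically on each incoming event.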
5. The practices of the Bluemix Garage developer: Extreme programming (for non-programmers)
This workshop immerses attendees who do not develop software into “extreme programming,” the flagship practice of the IBM Garage Method. Through group workshop activities, attendees can experience the ebb and flow of Bluemix Garage development cycles. They will try out pair programming, test-driven development, merciless refactoring and evolutionary architecture. Take away an appreciation of the rigor of extreme programming and learn why it makes the IBM Garage Method work.
6. Platform to Maximo/TRIRIGA hands-on lab
This lab offers attendees a basic understanding of how connected operations work. Use a simulated temperature sensor (a gauge meter in Maximo) to send a temperature reading to the Internet of Things (IoT) Quickstart. The message is then sent to Node-RED, which parses it. When a reading changes, it goes into a REST API call that inserts the meter reading into the referenced asset's meter readings. The reading updates the measure point and triggers a work order using Maximo's inherent functionality or, if one elects to do the exercise using TRIRIGA, a work task.
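As a rough illustration of the REST step in that flow, the sketch below shapes a parsed sensor reading into a JSON body and prepares the POST request. The route and payload field names are hypothetical; a real Maximo installation defines its own REST resources.

```python
# Sketch of the "parsed message -> REST insert" step. The endpoint path
# and payload fields are hypothetical placeholders, not the real Maximo API.
import json
import urllib.request

def build_meter_payload(asset_num, meter_name, value):
    """Shape the parsed sensor reading into the JSON body for the insert."""
    return {"assetnum": asset_num, "metername": meter_name, "reading": value}

def post_meter_reading(base_url, payload):
    """Prepare the POST request for the meter-reading insert (not sent here)."""
    return urllib.request.Request(
        base_url + "/meterreadings",  # hypothetical route
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In the lab, Node-RED would issue the equivalent call from an HTTP request node each time the reading changes.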
You can see the detailed agenda of Bootcamp labs and enroll by using the IBM Events mobile app or InterConnect Session Expert. Attendees must enroll to secure seats in these sessions, so early enrollment is strongly suggested.
Follow @IBMCloudEdu to get the latest on boot camps and the InterConnect Hands-on Lab Center and don’t forget to register today and enroll for your Bootcamp labs.
The post 6 IBM InterConnect Bootcamp labs developers shouldn’t miss appeared first on Cloud computing news.
Source: Thoughts on Cloud
Yesterday, while working on an upcoming tutorial, I was suddenly reminded how interconnected the web really is. Everything was humming along nicely, until I tried to push changes to a very large repository. That’s when everything came to a screeching halt.
“No problem,” I thought. “Everybody has glitches once in a while.” So I decided I’d work on a different piece of content, and pulled up another browser window for the project management system we use to get the URL. The servers, I was told, were “receiving some TLC.”
OK, what about that mailing list task I was going to take care of? Nope, that was down too.
As you probably know by now, all of these problems were due to a failure in one of Amazon Web Services’ S3 storage data centers. According to the BBC, the outage even affected sites as large as Netflix, Spotify, and AirBnB.
Now, you may think I’m writing this to gloat (after all, here at Mirantis we obviously talk a lot about OpenStack, and one of the things we often hear is “Oh, private cloud is too unreliable”), but I’m not.
The thing is, public cloud isn’t any more or less reliable than private cloud; it’s just that you’re not the one responsible for keeping it up and running.
And therein lies the problem.
If AWS S3 goes down, there is precisely zero you can do about it. Oh, it’s not that there’s nothing you can do to keep your application up; that’s a different matter, which we’ll get to in a moment. But there’s nothing that you can do to get S3 (or EC2, Google Compute Engine, or whatever public cloud service we’re talking about) back up and running. Chances are you won’t even know there’s an issue until it starts to affect you, and your customers.
A while back my colleague Amar Kapadia compared the costs of a DIY private cloud with a vendor distribution and with a managed cloud service. In that calculation, he included the cost of downtime as part of the cost of DIY and vendor distribution-based private clouds. But really, as yesterday proved, no cloud, even one operated by the largest public cloud provider in the world, is beyond downtime. It’s all in what you do about it.
So what can you do about it?
Have you heard the expression, “The best defense is a good offense”? Well, it’s true for cloud operations too. In an ideal situation, you will know exactly what’s going on in your cloud at all times, and take action to solve problems BEFORE they happen. You’d want to know that the error rate for your storage is trending upwards before the data center fails, so you can troubleshoot and solve the problem. You’d want to know that a server is running slow so you can find out why and potentially replace it before it dies on you, possibly taking critical workloads with it.
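The idea of spotting an upward error-rate trend before it becomes an outage can be illustrated in a few lines of Python. The slope threshold below is a made-up number for the sketch, not a tuned operational value.

```python
# Toy trend detector: fit the least-squares slope of recent error-rate
# samples and flag a sustained upward drift. Threshold is illustrative.

def trending_up(samples, threshold=0.5):
    """True if the error rate is rising faster than `threshold` per sample."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    return (cov / var) > threshold

# e.g. storage error rates sampled once a minute:
flat = [2.0, 2.1, 1.9, 2.0, 2.1]
rising = [2.0, 3.5, 5.2, 6.8, 8.5]
```

A real monitoring stack would evaluate a rule like this continuously over a sliding window and page an operator when it fires.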
And while we’re at it, true cloud applications should be able to weather the storm of a dying hypervisor or even a storage failure; they are designed to be fault-tolerant. Pure play open cloud is about building your cloud and applications so that they’re not even vulnerable to the failure of a data center.
But what does that mean?
What is Pure Play Open Cloud?
You’ll be hearing a lot more about Pure Play Open Cloud in the coming months, but for the purposes of our discussion, it means the following:
Cloud-based infrastructure that’s agnostic to the hardware and underlying data center (so it can run anywhere); based on open source software such as OpenStack, Kubernetes, Ceph, and networking software such as OpenContrail (so that there’s no vendor lock-in, and you can move it between a hosted environment and your own); and managed as infrastructure-as-code, using CI/CD pipelines and so on, to enable reliability and scale.
Well, that’s a mouthful! What does it mean in practice?
It means that the ideal situation is one in which you:
Are not dependent on a single vendor or cloud
Can react quickly to technical problems
Have visibility into the underlying cloud
Have support (and help) fixing issues before they become problems
Sounds great, but making it happen isn’t always easy. Let’s look at these things one at a time.
Not being dependent on a single vendor or cloud
Part of the impetus behind the development of OpenStack was the realization that while Amazon Web Services enabled a whole new way of working, it had one major flaw: complete dependence on AWS.
The problems here were both technological and financial. AWS makes a point of trying to bring prices down overall, but as you grow, incremental cost increases are going to happen; there’s just no way around that. And once you’ve decided that you need to do something else, if your entire infrastructure is built around AWS products and APIs, you’re stuck.
A better situation would be to build your infrastructure and application in such a way that it’s agnostic to the hardware and underlying infrastructure. If your application doesn’t care whether it’s running on AWS or OpenStack, then you can create an OpenStack infrastructure that serves as the base for your application, and use external resources such as AWS or GCE for emergency scaling or for damage control.
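One common way to achieve that kind of agnosticism is to code against a small interface and bind a concrete backend at configuration time. The sketch below uses an in-memory stand-in; a real deployment would wrap the S3 or Swift client libraries behind the same interface.

```python
# Sketch of a provider-agnostic object store. The backend classes are
# stubs for illustration, not real S3/Swift bindings.

class ObjectStore:
    def put(self, key, data): raise NotImplementedError
    def get(self, key): raise NotImplementedError

class InMemoryStore(ObjectStore):
    """Stand-in backend; a real one would wrap boto3 or python-swiftclient."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def make_store(provider):
    # In practice this would dispatch on config: "aws" -> S3, "swift" -> Swift.
    backends = {"memory": InMemoryStore}
    return backends[provider]()
```

Because application code only ever sees the ObjectStore interface, swapping clouds becomes a configuration change rather than a rewrite.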
Reacting quickly to technical problems
In an ideal world, nobody would have been affected by the outage in AWS S3’s us-east-1 region, because their applications would have been architected with a presence in multiple regions. That’s what regions are for. Rarely, however, does this happen.
Build your applications so that they have, or at the very least CAN have, a presence in multiple locations. Ideally, they’re spread out by default, so if there’s a problem in one “place”, the application keeps running. This redundancy can get expensive, though, so the next best thing would be to have the application detect a problem and switch over to a fail-safe or alternate region in case of emergency. At the bare minimum, you should be able to manually switch over to a different option once a problem has been detected.
Preferably, this would happen before the situation becomes critical.
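The "switch over to an alternate region" step can be sketched as a preference-ordered health probe. The region list and the probe function below are placeholders; a real check would hit a health-check URL in each region.

```python
# Minimal failover sketch: walk an ordered region list and pick the first
# one whose health probe passes. Regions and probe are illustrative.

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]  # ordered by preference

def pick_region(healthy):
    """Return the first region whose health probe passes, else None."""
    for region in REGIONS:
        if healthy(region):
            return region
    return None

# healthy() would normally probe an endpoint; here we fake an outage
# of the preferred region:
active = pick_region(lambda r: r != "us-east-1")
```

Run on a timer (or triggered by monitoring alerts), a check like this is what turns "manually change over" into an automatic failover.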
Having visibility into the underlying cloud
Having visibility into the underlying cloud is one area where private or managed cloud definitely has the advantage over public cloud. After all, one of the basic tenets of cloud is that you don’t necessarily care about the specific hardware running your application, which is fine, unless you’re responsible for keeping it running.
In that case, using tools such as StackLight (for OpenStack) or Prometheus (for Kubernetes) can give you insight into what’s going on under the covers. You can see whether a problem is brewing, and if it is, you can troubleshoot to determine whether the problem is the cloud itself, or the applications running on it.
Once you determine that you do have a problem with your cloud (as opposed to the applications running on it), you can take action immediately.
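As a concrete example of that visibility, Prometheus exposes an HTTP API at /api/v1/query for instant PromQL queries. This sketch builds a query URL and parses the standard JSON response; the server address and metric name are placeholders.

```python
# Sketch of querying Prometheus's HTTP API. Server URL and the "up"
# metric are placeholders; the /api/v1/query endpoint and response
# shape are part of Prometheus's documented API.
import json
import urllib.parse
import urllib.request

def instant_query_url(base_url, promql):
    """Build the URL for an instant PromQL query."""
    return base_url + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})

def extract_values(response_body):
    """Pull (labels, value) pairs out of a Prometheus query response."""
    data = json.loads(response_body)
    return [(r["metric"], r["value"][1]) for r in data["data"]["result"]]
```

Fetching instant_query_url(...) with urllib.request.urlopen and feeding the body to extract_values would give you, say, which nodes are currently up.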
Support (and help) fixing issues before they become problems
Preventing and fixing problems is, for many people, where the rubber hits the road. With a serious shortage of cloud experts, many companies are nervous about trusting their cloud to their own internal people.
It doesn’t have to be that way.
While it would seem that the least expensive way of getting into cloud is the “do it yourself” approach (after all, the software’s free, right?), in the long term that’s not necessarily true.
The traditional answer is to use a vendor distribution and purchase support, and that&8217;s definitely a viable option.
A second option that’s becoming more common is the notion of “managed cloud.” In this situation, your cloud may or may not be on your premises, but the important part is that it’s overseen by experts who know the signs to look for and are able to make sure that your cloud maintains a certain SLA, without taking away your control.
For example, Mirantis Managed OpenStack is a service that monitors your cloud 24/7 and can literally fix problems before they happen. It involves remote monitoring, a CI/CD infrastructure, KPI reporting, and even operational support, if necessary. But Mirantis Managed OpenStack is designed on the notion of Build-Operate-Transfer: everything is built on open standards, so you’re not locked in. When you’re ready, you can transition to a lower level of support, or even take over entirely, if you want.
What matters is that you have help that keeps you running without keeping you trapped.
Taking control of your cloud destiny
The important thing here is that while it may seem easy to rely on a huge cloud vendor to do everything for you, it’s not necessarily in your best interest. Take control of your cloud, and take responsibility for making sure that you have options, and more importantly, that your applications have options too.
The post How to avoid getting clobbered when your cloud host goes down appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis
If you’re thinking about adopting hybrid cloud, you’re probably also thinking about cost.
Every day, I speak to IT architects who are rethinking their cloud computing strategies. Questions about cloud cost are usually at the top of their minds.
One big advantage of working for a company like IBM is access to the latest market research. Independent reports by organizations such as Aberdeen are a great place to start reviewing how much to expect a hybrid cloud investment to cost.
No reason to fear cloud costs
In a recent analysis of how companies combine virtualization and public cloud, Aberdeen found that some IT architects are discouraged from executing a hybrid strategy because of uncertainties about costs.
These fears are misplaced. When designed correctly, the cost savings of implementing a hybrid cloud can quickly repay an initial upfront investment. Aberdeen shows that businesses using hybrid cloud are 38 percent more likely to see a reduction in overall IT expenditure.
What matters is getting the hybrid cloud blend right. For that, an organization must have a realistic sense of how its requirements can be met to deliver the best value for its environment.
Best, not cheapest
When weighing infrastructure, virtualization and hybrid options, customers are often tempted to follow the lowest cost projections, but cloud costs should always be evaluated in context. IT architects know one requirement should be considered particularly carefully: integration.
A hybrid cloud strategy is built on partnerships. The infrastructure and virtualization organizations rely on must work together to deliver the required performance while meeting projected cost and efficiency savings. The best solutions usually rely on collaboration between multiple vendors and offerings.
Check that your preferred vendors work well together first to avoid disruption later.
The real cost
Like all investments, hybrid cloud should be assessed in terms of value rather than cost. Upfront costs are only the beginning of a hybrid cloud story. Once a solution is implemented, the performance clock starts ticking. A suspiciously low price point will soon start to look expensive if reliability or security complications arise.
IBM offers the full range of infrastructure and platform solutions and integration skills, so we can afford to talk openly with customers about all their options. Over the last few years I’ve found that more and more customers are best served by the flexibility and pay-as-you-go pricing of truly hybrid partnerships such as VMware on IBM Cloud.
It’s critical that this decision-making process is built on the solid foundation of data. That’s where reports by bodies such as Aberdeen and Frost & Sullivan are invaluable. Before you make the leap to hybrid, read these reports to help assess how much you can help your organization save. Or send a quick email and I’ll give you a call. The IBM Cloud team is here to consult.
Read the full report now.
The post How much should hybrid cloud cost? appeared first on Cloud computing news.
Source: Thoughts on Cloud
If you know anything about cloud or cognitive computing, you know that every industry will be affected by these advances in technology. Either you’ll embrace them or you will fall behind. It’s time to take the lead. Visit the Industry Zone at IBM InterConnect. You’ll hear from experts on cloud in finance, healthcare and more and get a chance to experience cognitive first hand. Join us and be inspired. Bring the latest thinking back to your organization. Take a look at four of my favorite demos.
1. Cognitive and security
As the world becomes more connected, the threat of cybercrime looms large. How will you protect your organization? Witness a state-of-the-art security operations center during an attempted cyberattack against a financial institution. This virtual reality experience will let you observe your own team of cyber professionals as they detect and combat an active attack against your payment infrastructure. Learn how groundbreaking developments such as Watson for Cyber Security and IBM QRadar combine to defend against this growing threat.
2. Healthcare and cloud
Cloud and cognitive technology have the potential to make sense of data and solve tough healthcare challenges. Learn how one company is providing media-delivered therapies that target age-related behavioral symptoms. SimpleC uses individualized, drug-free therapies to improve well-being and independence through research-based techniques.
If you’re passionate about healthcare, you’ll also want to catch the IBM demonstration on how the Watson Health Platform will combine with cloud to enable new possibilities.
3. A look at the future of retail
There’s no denying that consumer behaviors are changing. Retailers must respond by finding new ways to understand and engage customers in this digital world. By combining data from your store’s devices with external data and cognitive analytics, Watson IoT can provide new methods to reduce costs, improve the store experience and generate new revenue. Learn how IBM recommends specific tasks and enables process automation to achieve these benefits. It’s time to bring the future of technology to your customers and staff.
4. Solutions for financial services
Learn how IBM is helping financial institutions increase client engagement and operational agility while reducing regulatory burden and addressing risk. See demonstrations on how factors such as IBM Bluemix, hybrid cloud, IBM Cloud Managed Services, security, IoT, blockchain and cognitive can transform the financial services industry and help you find an edge.
If you are intrigued by these demos, understand this is only the tip of the iceberg. InterConnect will bring thousands of sessions, trainings and networking opportunities to you and your organization. If you still haven’t signed up, be sure to register today.
The post Industry Zone at InterConnect: latest in AI, blockchain and IoT appeared first on Cloud computing news.
Source: Thoughts on Cloud
Are you looking to transform your IT department into a self-service delivery center? Do your IT operations have the speed and control to deliver what’s needed without compromising quality?
Keep reading to find out how KPN, an IT and communications technology services provider, increased its speed to quickly deliver IT service requests, reduce costs and provide high quality cloud services.
KPN is a leader in IT services and connectivity. It offers fixed-line and mobile telephony, internet access and television services in the Netherlands. The provider also operates several mobile brands in Germany and Belgium. Its subsidiary, Getronics N.V., provides services across the globe.
Data and storage have played a critical role in helping KPN deliver high quality cloud services to its clients. As rapid growth of data continues to change the game, here’s how this savvy business has used IBM Cloud to transform operations.
Cloud Orchestrator accelerates service delivery
KPN executives wanted to optimize the company’s cloud strategy to enhance service delivery time and quality. Potential solutions would help them manage and automate storage services in-house. The goal: improve cloud management to accelerate service delivery and reduce costs without sacrificing quality.
IBM Cloud Orchestrator (ICO) is an excellent solution for managing your complex hybrid cloud environments. It provides cloud management for IT services through a user-friendly, self-service portal. It automates and integrates the infrastructure, application, storage and network into a single tool. Additionally, the self-service catalog lets users automate the deployment of data center resources, cloud-enabled business processes and other cloud services.
Business transformation through automation
With ICO, KPN automated its storage services and designed an in-house cloud management system. The solution helped KPN provision and scale cloud resources and reduce both administrator workloads and error-prone manual IT administrator tasks. As a result, KPN could accelerate service delivery times by approximately 80 percent. This significantly improved the service quality and saved resources through automation.
Watch this video to learn more about how IBM Cloud Orchestrator helped KPN accelerate its cloud service delivery:
For a more in-depth discussion, join us at InterConnect 2017 and attend the session “How KPN leveraged IBM Cloud technologies for automation and ‘insourcing’ of operations work.” And there’s more. InterConnect will bring together more than 20,000 top cloud professionals to network, train and learn about the future of the industry. If you still haven’t signed up, be sure to register now.
The post How KPN speeds service delivery appeared first on Cloud computing news.
Source: Thoughts on Cloud