3 ways to avoid failure in application deployment

In my previous post, I discussed six causes of system failures and their resulting negative impacts for enterprises. How can one avoid failure?
1. If you have not yet built your DevOps practice, now is the time.
DevOps shortens the feedback loop from idea to customer response. Automating the application release and deployment processes and using DevOps tools can drastically reduce the amount of manual intervention in the release pipeline, which leads to big gains in delivery, such as faster resolution of problems and less complex fixes. For example, your team can use IBM Bluemix DevOps Services to develop, track, plan and deploy services in one place. This enables organizations to develop rapidly in an open, integrated environment that scales, and it allows customer feedback to be easily incorporated into software delivery.
2. Implement shift left continuous testing.
Continuous testing helps organizations focus on error prevention instead of detection. It allows testing to occur earlier in the development and release cycle. Shift left reduces the delays and code reworking that occurs when major defects are discovered late in the testing cycle. It also shortens the feedback loop and speeds up the learning process.
But beware: testing must be comprehensive. With more frequent release cycles and shorter development periods, it is important that security testing — such as application vulnerability scanning — and performance testing are built into the release cycle. Security teams should be included as part of the DevOps lifecycle.
3. Operational resilience is crucial to avoid uncertainty and disruption.
The trend of digital transformation has changed the way organizations operate. Many businesses are heavily dependent on their IT systems. Consequently, the scope of operational resilience spans people, process and IT.
Organizations should define reliable, practical steps to enhance their IT operational resilience, improving their ability to rapidly adapt and respond to dynamic changes, opportunities, demands, disruptions and threats that could trigger application failures.
In addition to the above, the following approaches can be adopted to avoid failure:

Learning from failures is key to prevent future issues. DevOps supports continuous learning. It instills the attitudes and activities required to effectively detect and analyze failures.
Automating code testing and the provisioning of environments will reduce time spent on those two tasks and ensure that environments are based on the same configurations. Going beyond automation, consider orchestration, which is critical in today’s cloud landscape. Orchestration pulls together automation, integration and best practices to ensure smooth and rapid delivery. Using IBM Cloud Orchestrator and Urban Code Deploy, for example, gives organizations access to ready-to-use patterns that help accelerate configuration, provisioning and deployment. It reduces IT administrator workloads and manual tasks that can lead to errors.
Decomposing your monolithic applications into microservices simplifies the application as a whole and reduces the time it takes to find and fix failures. In addition, adopting a microservices architecture helps reduce the impact of failure in one microservice. It can be isolated from the other microservices in the system. Dependency will be less of a concern and rolling back changes is much easier than with monolithic designs.

Overcoming risk to avoid failure should be a priority. The ability to use patterns to define consistent environments eliminates the failures that occur through configuration inconsistencies. Operations and development must work together to create the provisioning process, so that developers build their test environments the same way production environments are built.
As the ADT study points out, organizations increasingly look to continuous deployment to recover from failure. Technologies such as IBM UrbanCode Deploy can help transform the deployment process, while practices such as shift left and DevOps will shorten the feedback loop and reduce failure.
Connect with IBM Cloud Advisor Osai Osaigbovo on LinkedIn.
The post 3 ways to avoid failure in application deployment appeared first on news.
Quelle: Thoughts on Cloud

How to run Rally on Packstack environment

Rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking & profiling.
For the OpenStack deployment I used the packstack tool.

# Install Rally

[1.] Install rally:

$ sudo yum install openstack-rally

[2.] After the installation is complete set up the Rally database:

$ sudo rally-manage db recreate

# Register an OpenStack deployment

You have to provide Rally with the OpenStack deployment it is going to benchmark. To do that, we're going to use the keystone configuration file generated by the packstack installation.

[1.] Source the configuration file:

$ source keystone_admin

[2.] Create a Rally deployment and name it "existing":

$ rally deployment create --fromenv --name=existing
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| 6973e349-739e-41af-947a-34230b7383f8 | 2016-10-05 08:24:27.939523 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+

[3.] You can verify that your current deployment is healthy and ready to be benchmarked by the deployment check command:

$ rally deployment check
+------------+--------------+-----------+
| services   | type         | status    |
+------------+--------------+-----------+
| ceilometer | metering     | Available |
| cinder     | volume       | Available |
| glance     | image        | Available |
| gnocchi    | metric       | Available |
| keystone   | identity     | Available |
| neutron    | network      | Available |
| nova       | compute      | Available |
| swift      | object-store | Available |
+------------+--------------+-----------+
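For reference, the --fromenv option reads the standard OpenStack authentication variables from the shell environment. A packstack-generated keystone_admin file typically exports something like the following sketch (all values below are placeholders, not taken from this deployment):

```shell
# Placeholder sketch of a packstack-style keystone_admin file;
# substitute your own controller address and credentials.
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL=http://192.0.2.10:5000/v2.0
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
```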

# Rally tasks

The sequence of benchmarks to be launched by Rally should be specified in a benchmark task configuration file (in either JSON or YAML format).
Let's create one of the sample benchmark tasks, for example a task that boots and deletes a server.

[1.] Create a new file and name it boot-and-delete.json

[2.] Copy this to the boot-and-delete.json file:

{% set flavor_name = flavor_name or "m1.tiny" %}
{% set image_name = image_name or "cirros" %}
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "{{flavor_name}}"
                },
                "image": {
                    "name": "{{image_name}}"
                },
                "force_delete": false
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        },
        {
            "args": {
                "flavor": {
                    "name": "{{flavor_name}}"
                },
                "image": {
                    "name": "{{image_name}}"
                },
                "auto_assign_nic": true
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                },
                "network": {
                    "start_cidr": "10.2.0.0/24",
                    "networks_per_tenant": 2
                }
            }
        }
    ]
}

[3.] Run the task:

$ rally task start boot-and-delete.json

After a successful run you'll see information such as the task ID, response times and duration.
Note that the Rally input task above uses "cirros" as the image name and "m1.tiny" as the flavor name. If this benchmark task fails, the reason might be a non-existent image or flavor specified in the task. To check which images and flavors are available in the deployment you are currently benchmarking, you can use the rally show command:

$ rally show images
$ rally show flavors
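The two {% set %} lines at the top of the task file are Jinja2 template defaults that Rally renders before parsing the JSON. As a rough local sanity check (not Rally's actual renderer, which uses full Jinja2), you can strip the set lines, substitute the variables yourself and confirm that the result parses; this sketch assumes python3 is on your PATH:

```shell
# Write a minimal template fragment, substitute the variable by hand,
# and verify the rendered result is valid JSON.
cat > /tmp/demo-task.json <<'EOF'
{% set flavor_name = flavor_name or "m1.tiny" %}
{
  "args": {"flavor": {"name": "{{flavor_name}}"}}
}
EOF
sed -e '/{% set/d' -e 's/{{flavor_name}}/m1.tiny/g' /tmp/demo-task.json \
  | python3 -m json.tool
```

A parse failure at this step usually points to unbalanced braces or stray smart quotes in the task file.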

More about Rally task templates can be found in the Rally documentation.
Quelle: RDO

The need for speed in IaaS platforms

When you look at the rapidly evolving Infrastructure-as-a-Service (IaaS) market, you might get the impression that it’s a race to the bottom with commodity hardware, inexpensive storage options, ongoing price cuts and the like.
If a cloud service provider can’t differentiate itself, things can get very ugly. That’s why one of the key tenets of IBM Cloud is delivering unprecedented speed and performance. Automating and delivering raw computing power from bare metal servers with cutting-edge GPUs (graphics processing units) can’t just be commoditized.
An example of this raw performance can be found in recent benchmark tests we conducted with our friends at MapD and Bitfusion. Together, we were able to scale up to 64 Tesla K80 GPUs across 32 servers to filter, query and aggregate a 40-billion-row dataset in just 271 milliseconds. To put that in context: together with our partners, we can scan 147 billion rows per second.

The ability to more easily pool and scale GPUs spanning multiple nodes into a single system is a significant breakthrough, maybe even a game changer. It enables businesses to manage the most complex, compute-intensive workloads — from deep learning and cognitive to big data analytics — using an affordable, on-demand computing infrastructure. The ability to explore data and run queries in near real time from the IBM Cloud gives users “supercomputer”-like performance.
Think of IBM Cloud as the Bloodhound (the project that aims to build a car that can break the world land speed record) of IaaS platforms. In many ways, IBM is trying to set its own, new cloud speed record by building the world’s fastest and most scalable cloud platform. IBM is aiming to democratize access to supercomputing resources via the cloud. Companies in industries including financial services, telecommunications, retail and social media gain a real, competitive advantage in being able to run queries like the ones we’ve tested.
Stay tuned. IBM has more HPC news coming.
The post The need for speed in IaaS platforms appeared first on news.
Quelle: Thoughts on Cloud

Continuous delivery tops IT execs’ priority list

Innovation is undoubtedly all around us, from advancements in AI to quantum computing. Who wouldn’t want to capitalize on the value of digital transformation?
But what’s truly driving these big moves? It’s due in no small part to the ability of IT organizations to speed software delivery.
Enterprise Management Associates’ latest research report, derived from a survey of 600 executives conducted in October 2015, lists best practices for DevOps and continuous delivery at high-performing companies. It likewise evaluates the role of automation and release management tools in promoting digital transformation.
According to the report, businesses are indeed making the connection between accelerated delivery of software services and business growth. In fact, they are overwhelmingly making “automation of the continuous delivery process” their top technology-related initiative for supporting digital transformation this year.
This sentiment bodes well for automated release management solutions such as IBM UrbanCode Deploy, which helps companies reduce — if not completely eliminate — the potential pitfalls typically associated with the software deployment process.
Not just IT’s problem
As the momentum of innovation ramps up, IT and departments focused on business transformation are increasingly reliant on DevOps and continuous delivery.
What may be surprising is that the drivers for continuous delivery are not purely an IT problem to solve. In many cases, they are business and consumer-related, according to EMA, which has been spearheading research on these topics over the past two years.
Companies that have been able to accelerate software delivery by 10 percent are more likely to double their revenue growth than companies that aren't focused on delivery frequency, EMA Research Director Julie Craig points out in this summary video:

Craig points out:
If you aren’t able to deliver software faster, then your competition is going to continue to outpace you in terms of growth. It’s when you get automation in place that you can seamlessly deliver software releases in a way that supports speed at scale, and both speed and scale support quality.
More key findings

Ninety-seven percent of respondents have DevOps teams within their companies with dedicated personnel, 60 percent of whom are considered dedicated employees.

Companies in which DevOps interactions were rated as excellent or above average were 11.5 times more likely to have double-digit revenue growth than those who rated these interactions as average or poor.

Production troubleshooting was the biggest bottleneck to accelerating continuous delivery.

Download the 17-page report summary, “Automating for Digital Transformation: Tools-Driven DevOps and Continuous Software Delivery in the Enterprise.”
 
The post Continuous delivery tops IT execs’ priority list appeared first on news.
Quelle: Thoughts on Cloud

Cloud-based Project DataWorks aims to make data accessible, organized

Increasingly, data is a form of currency in business. Not just the data itself, but the ability to find just the right piece of information at just the right time.
As organizations amass more and more data reaching into petabyte sizes, it can sometimes become diffuse, which can make it hard for someone to quickly find exactly the right key to unlock a barrier to progress.
To solve that challenge, IBM unveiled Project DataWorks last week, a cloud-based data organization catalog which puts all of a company’s data in one easy-to-access, intuitive dashboard. Here’s how TechCrunch describes Project DataWorks:
With natural language search, users can pull up specific data sets from those catalogs much more quickly than with traditional methods. DataWorks also touts data ingestion at speeds of 50 to 100s of Gbps.
The tool is available through the IBM Bluemix platform and uses Watson cognitive technology to raise its speed and usability.
In an interview with PCWorld, Derek Scholette, general manager of cloud data services for IBM Analytics, explained: "Analytics is no longer something in isolation for IT to solve. In the world we're entering, it's a team sport where data professionals all want to be able to operate on a platform that lets them collaborate securely in a governed manner."
Project DataWorks is open to enterprise customers but it’s also open to small businesses. It’s currently available as a pay-as-you-go service.
For more, read the full articles at TechCrunch and PCWorld.
The post Cloud-based Project DataWorks aims to make data accessible, organized appeared first on news.
Quelle: Thoughts on Cloud

Develop Cloud Applications for OpenStack on Murano, Day 5: Uploading and troubleshooting the app

The post Develop Cloud Applications for OpenStack on Murano, Day 5: Uploading and troubleshooting the app appeared first on Mirantis | The Pure Play OpenStack Company.
We're in the home stretch! So far, we've explained what Murano is, created an OpenStack cluster with Murano, built the main script that will install our application, and packaged it as a Murano app. We're finally ready to deploy the app to Murano.
Now let’s upload the PloneServerApp package to Murano.
Add the Murano app to the OpenStack Application Catalog
To upload an application to the cloud:

Log into the OpenStack Horizon dashboard.
Navigate to Applications > Manage > Packages.
Click the Import Package button.

Select the zip package that we created yesterday and click Next.
In the pop-up window you can see the information that we added to the manifest.yaml file earlier. We also get a notification that Glance has started retrieving the Ubuntu image mentioned in image.lst. (This only happens if the image doesn't already exist in Glance.)

Now we just have to wait for the image to finish saving so we can move on to try out the app.  To check on that, go to Projects > Images. Wait for the status to be listed as Active rather than Saving.

Deploy the new app
Now that we've created the app, it's time to test it out in all its glory.

Navigate to Applications > Catalog > Browse.

You will find that the Plone Server has appeared with the icon from our logo.png file. Click Quick deploy and you’ll see the configuration wizard appear, with all of the information we added to the ui.yaml file in the appConfiguration form:

Click on Assign Floating IP and click Next.
You’ll then see the instanceConfiguration form we mentioned in the ui.yaml file:

Choose an appropriate instance flavor. In my case I used an "m1.small" flavor and edited it to have 1 CPU, 1GB RAM and 20GB disk space. I also shut down the Compute node VM and gave it more RAM in VirtualBox: 2GB instead of 1GB. You can edit flavors by navigating to Admin > System > Flavors.
Be aware that if you select a flavor that requires more hardware than your Compute node actually has, you will get an error while the instance is spawning.
Choose the instance image that we mentioned in image.lst. If no images appear in the drop-down menu, check that your image has finished uploading.
Choose a Key Pair, or import one on the spot by clicking the "+" button:

Click Next.
Set the Application Name and click Create:

The Plone Server application has now been successfully added to the newly created quick-env-1 environment. Click the Deploy This Environment button to start the deployment:

It may take some time for the environment to deploy:

Wait until the status has changed from Deploying to Ready:

Once it does, go to the Plone home page at http://172.16.0.134:8080 from your Host OS browser, that is, from outside your OpenStack cloud:

You should see the Plone home page. If you don't, you'll need to do some troubleshooting.
Debugging and Troubleshooting Your Murano App
While deploying your Murano app you may have encountered a number of errors. Some of them could be related to spawning a VM; others may have occurred during runDeployPlone.sh execution.
For information on errors related to spawning the VM, check the Horizon UI. Navigate to Catalog > Environments, then click the environment and open the Deployment History page. Click the Show Details button in the corresponding deployment row of the table and go to the Logs tab. There you can see the deployment steps; failed steps are shown in red.
Several of the most frequently occurring errors, as well as their suggested solutions, are described in the Murano documentation.
The other type of error relates to the app installation script, runDeployPlone.sh. As you remember, we collect all output from this script in a special log file, /var/log/runPloneDeploy.log, to help you track any possible issues. Knowing the floating IP address of the newly created Plone Server VM, we can access the log file over SSH.
It's important to note, though, that because we applied a special Ubuntu image from the repository during the environment deployment, the login process has a security limitation. By default, password authentication is turned off and the only way to connect to your VM is to use an access key pair. You can find out more about how to create and set this up here.
First log in to the VM as the default user, ubuntu:
$ ssh -i <private_key> ubuntu@<floating IP address>
You can then read the log:
$ less /var/log/runPloneDeploy.log
Now it's possible to fix the errors that have appeared and polish the installation process.
Remember, when you encounter issues with your Murano app, you can always contact the Murano team, or any other OpenStack team, through IRC. You can find the list of IRC channels here. Feel free to ask any questions.
Summary
In this series, we outlined the process of creating a Murano app for the ultimate enterprise CMS, Plone. We also saw how easy it is to build a Murano app from the ground up, and that doing so doesn't require you to be an OpenStack or Linux guru.
Murano is a great OpenStack service that provides application lifecycle management and dramatically simplifies the introduction of new software to the OpenStack community.
Moreover, it provides other great features not mentioned in this tutorial, such as High Availability mode, Auto-Scaling and application dependency management.
Try it out for yourself and get excited by how easy it is. Next time, we'll look at the steps needed to publish your Murano app in the OpenStack application catalog at http://apps.openstack.org.
Thanks for joining us!
Quelle: Mirantis

Direct Link connection options for hybrid clouds

Companies are striving to implement more cost-effective IT environments using hybrid clouds that include both on-premises and off-premises resources. Direct Link on IBM SoftLayer, introduced in my previous blog post, helps IBM Cloud clients integrate their private, public and hybrid clouds with high performance, security-rich connections.
Here’s how Direct Link connects your IT resources.
Physical connectivity
Physical connectivity occurs through a dedicated fiber connection (1 or 10 gigabits per second) that links a customer's service equipment and the network equipment in an IBM point of presence (PoP). The method of connection depends on the type of connection:

Direct Link NSP: Customers directly connecting their existing data centers to the IBM Cloud would terminate a telco-provided Ethernet, MPLS, DWDM or SONET connection into their service endpoint equipment within an IBM PoP, then run a fiber cross-connect from that equipment to the IBM patch panel. The panel connects to ports on the IBM cross-connect router (XCR) in the PoP, which routes traffic to an IBM data center and ultimately through IBM’s carrier-grade, private global IP network to its final destination. Customers are responsible for purchasing the physical fiber cross-connect.
Direct Link colocation: Customers with colocation facilities within the same building or campus as an existing IBM data center would terminate a redundant fiber connection into a cross-connect panel installed in the POD (server farm) where their compute resources are provisioned. Customers are responsible for purchasing the fiber cross connect from their colocated facility to the IBM data center patch panel, which can span different floors in the same building or different buildings.
Direct Link cloud exchange: Customers using an IBM-approved cloud exchange partner would terminate a telco WAN service into an IBM PoP (same as Direct Link NSP) and also into the cloud exchange partner’s software-defined network, creating secure point-to-multipoint virtual connections among their private networks, the cloud exchange and the IBM Cloud.

(See the Direct Link FAQ for more information.)

Network connectivity
Network connectivity is defined by routing policies for an organization’s IP address space. Public IP addresses are universally assigned and may be static (reserved for a particular network resource, such as a server) or dynamic (they change as resource demands change but come from an assigned public pool). Private IP addresses are used for internal traffic shielded from the public internet by network elements such as routers or firewalls. This makes it possible for the same IP address space to be used by multiple parties simultaneously without conflict.
IBM Direct Link has three network connectivity options:

Dual IP remote hosts: Customers add additional IP addresses (or reassign the IP addresses for existing on-premises, colocated or cloud exchange hosts) to include public IP address space assigned to the IBM Cloud. This allows IBM to securely route customer traffic between the customer’s private network and the IBM network.
Network address translation (NAT): Customers configure NAT on network elements acting as private network gateways (usually a router or firewall). This allows assigned public IP address space to be used on private networks without conflict, since public IPs are converted to private IPs (and vice versa) as they cross the NAT gateway. For customers with private IPs that conflict with IBM IPs, NAT can be provisioned in both directions (source and destination).
Bring-your-own IP (BYOIP): Customers bring their assigned public IP address space into the IBM private network. Customers must create generic routing encapsulation (GRE) or IP security (IPSec) tunnels between their on-premises or colocation network and the IBM network. They can then use any IP address space they choose on the private network, as long as there are no conflicts with IBM’s public or private address space, and then route traffic across the tunnel between networks.
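
To make the BYOIP and NAT options concrete, here is an illustrative configuration sketch of the kind of commands a customer might run on a Linux gateway: a GRE tunnel plus a source-NAT rule. All addresses and interface names are placeholders invented for this example; the real values depend on your equipment and on the addressing agreed with IBM.

```shell
# GRE tunnel between an on-premises gateway and the remote network
# (placeholder endpoints; must be run as root on your own gateway).
ip tunnel add gre1 mode gre local 198.51.100.2 remote 203.0.113.7 ttl 255
ip addr add 10.10.10.1/30 dev gre1
ip link set gre1 up
ip route add 10.0.0.0/8 dev gre1

# Source NAT at the private-network edge, as in the NAT option:
# private source addresses are rewritten to the gateway's public address.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```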

These methods of connectivity contribute to providing the security and performance that hybrid clouds require in enterprise-scale IT environments.
To learn more about Direct Link and other features and technology available with IBM SoftLayer, check out our Cloud How-To webcast series.
The post Direct Link connection options for hybrid clouds appeared first on news.
Quelle: Thoughts on Cloud