Senior Software Engineer

Mirantis has more experience delivering OpenStack clouds to more customers than any other company. We build the infrastructure that makes OpenStack work. We are profitable, have strong investors, and hold ample cash reserves. We are proud to be a founding sponsor of OpenStack and to serve on the OpenStack Foundation Board.

What Linux was to open source and operating systems, OpenStack is to cloud computing. It makes programmable infrastructure vendor-neutral and frictionless to access, not to mention that it unlocks distributed applications and accelerates innovation. OpenStack transforms virtualization from an efficiency measure into a whole new compute paradigm.

If you’re an ambitious Python software engineer who thrives on tough, real-world problems, you want to work here. We are building a data center management product (Fuel) that can become the standard choice for a simple but powerful OpenStack deployment tool. We want you to bring ideas about how it should look, and to implement them. With the tool already being used to deploy OpenStack today, we have an ambitious roadmap that includes live upgrades, network device management, SDN device integration, and much more.

Technically, Fuel consists of a JavaScript-driven UI that communicates with a REST API backend written in Python. The backend stores information in a database and contains complex logic for managing OpenStack networks, disks, and other entities. It then passes the work to a deployment orchestrator, which calls different agents on the nodes, including Puppet for the actual deployment.

Primary Responsibilities:

Design and implement new features for Fuel in Python (backend)
Take the lead on the development process when needed
Drive the collaboration process with other team members
Exchange ideas and find ways to improve Fuel and OpenStack

Qualifications:

Excellent communication skills
Experience in, or a strong desire for, technical leadership
Excellent knowledge of the Python ecosystem
Knowledge of Linux and L2-L7 networking

Big Plus:

Past technical leadership experience
Experience with DevOps practices, Chef, and Puppet
Experience with HA solutions (e.g., for MySQL or RabbitMQ)

What We Offer:

Work with exceptionally passionate, talented, and engaging colleagues
Strong benefits plan
Lots of freedom for creativity and personal growth
Medical insurance
Source: Mirantis

5 must-haves for building the right cloud from Elevate NYC 2016

A few years ago, the question facing IT leaders was, “Are you moving to the cloud?” Now, the question has become, “Are you getting the most out of your cloud?”
Organizations in every part of the economy — from retail and finance to government and education — have adopted the cloud as an integral part of their operations. Most of these organizations, though, are still trying to figure out the best way to build their cloud.
Leaders from some of the top companies gathered this week in New York at the Fortune Elevate NYC conference, sponsored by IBM, to try to answer this new challenge. Five must-haves emerged for creating the right cloud environment:

Data must be connected. Every part of the network — public and private clouds — must be seamlessly tied together. Data only matters if the right people know about it and can access it quickly.
Data must be secure. The ever-expanding cloud and collection points for data create new gateways into an organization that must be shored up.
Design should be intentional. Every operation, including IT, should be designed from the start to work together to support business goals and consumer needs.
Innovation should be the goal. Lower costs attracted many companies to the cloud, but the long-term winners are using the cloud to create new value.
Culture leads the way. Cloud, like any technology, is only a tool for supporting change. True change comes when employees at every level buy into a new way of doing business.

Elevate NYC panelists agreed that assembling the elements to build a successful cloud strategy is more important than ever, because cloud is the foundation for almost every emerging enterprise technology. Artificial intelligence was held up as a major example. Cloud delivers the data and computing power needed to make AI truly intelligent.
“The cloud is not just a nice place to do AI,” said Jason McGee, IBM Fellow, Vice President and Chief Technology Officer of the IBM Cloud platform. “It’s a requirement.”
The same can be said of virtual reality (VR), blockchain and Internet of Things (IoT) solutions. Cloud is the connective tissue binding together the resources to create and provide these technologies. Cloud provides the network for gathering and delivering VR, a secure and efficient shared ledger supporting blockchain, and the data collection and analytics behind IoT.
Organizations must focus on building the right cloud for their business goals. They also need to have the right culture to take advantage of the cloud in fast-changing markets with new competitors and consumer demands, the panelists said.
“Spend as much time on the cultural side of the business as you do on the technology side,” Bryson Koehler, Chief Information and Technology Officer for the Weather Company, an IBM business, said at Elevate NYC. “As the technology wheel continues to spin faster, we need to make sure our teams are recognizing the need for a completely new way of thinking.”
You can find out more about how to create a cloud for your business needs in the IBM report, “Tailoring hybrid cloud: Designing the right mix for innovation, efficiency and growth.”
Source: Thoughts on Cloud

Bailian: From Brick & Mortar to Brick & Click using OpenStack, DevOps

Being an established player in a market can definitely have its advantages. If you’re big enough, economies of scale and barriers to entry can make it possible to get comfortable in your market.
But what happens when the market flips on its ear?
This was the situation in which the Shanghai-based Bailian Group found itself several years ago. China’s largest retailer, Bailian operated a chain of more than 6,000 grocery and department stores spread all over the country.
Many of the brick-and-mortar company’s online competitors, such as JD.com, Suning, and Taobao, were introducing new sites and campaigns, and other traditional enterprises were moving to a multi-channel strategy. In 2014, Bailian decided to join them.
Chinese consumers bought close to $600 billion in online goods during 2015, a 33 percent increase from the prior year. The company knew that if it were going to survive, it had to solve several major problems:

Lack of agility: Some applications were not cloud native and took months to update, and waiting for a new server could take weeks, slowing development of new applications to a crawl.
Server underutilization: As much hardware as Bailian was using, there was still a huge amount of unused capacity that represented wasted money. It had to be streamlined and simplified.

The company set out to create the largest offline-to-online commerce platform in the industry, and to do that, it had to replace its existing IT infrastructure.
Choosing a platform
“Our transition from traditional brick and mortar to omni-channel business presented a great opportunity but an equally large challenge,” says Lu Qichuan, Director of IaaS and Cloud Integration Architecture, Bailian Group. “We needed a large scale IT platform that would enable our innovation and growth.” Thinking big, Lu and his team outlined four guiding principles for their new platform — fast development, dynamic scaling, uncompromised availability, and low cost of operations. These guidelines would support aggressive online growth targets through 2020.
And it wasn’t as though Bailian was a stranger to online commerce. The company was already running a Shanghai grocery delivery service on its existing IT platforms. But it knew that its existing applications, which were not yet cloud-ready, weren’t just complex to support; they also required long development cycles. Add to this the desire not just to port legacy applications such as supply chain logistics and data management to the new, more flexible infrastructure, but also to reclaim applications running on public cloud, and the way forward was clear: private cloud was what Bailian needed.
But which? The company had already zeroed in on many of the advantages of OpenStack. In particular, Bailian Group was impressed by the platform’s continuous innovation, with rich new feature sets every six months. The IT team also valued OpenStack’s lower licensing and maintenance costs, flexible architecture, and complete elimination of vendor lock-in.
Finally, Bailian Group is a state-owned enterprise, so when China’s Ministry of Industry and Information Technology (MIIT) officially declared its support for the OpenStack ecosystem, the decision was straightforward.
Bailian Group then selected the OpenStack managed services of UMCloud, the Shanghai-based joint venture between Mirantis and UCloud, China’s largest independent public cloud provider. UMCloud’s charter to accelerate OpenStack adoption and embrace China’s “Internet Plus” national policy closely matched Bailian Group’s platform strategy. “We found OpenStack to be the most open and flexible cloud technology, and Mirantis and UMCloud to be the best partners to help us launch our new omni-channel commerce platform,” says Lu.
Start small, think big, scale fast
Bailian Group’s IT leaders worked with Mirantis and UMCloud to quickly build a 20-node MVP (minimum viable product) using the latest OpenStack distribution and Fuel software to deploy and manage all cloud components. The architecture included Ceph distributed storage, Neutron and OVS software defined networking, KVM virtualization, F5 load balancers, and the StackLight logging, monitoring and alerting (LMA) toolchain.

With this early success, the team quickly added capacity and will soon reach 300 nodes and 5,000 VMs in this first phase of a three-phase, five-year plan. Already a handful of applications are in production on the new platform, including one that manages offline-to-online store advertisement images using distributed Ceph storage. The team has also added new cloud application development tools and processes that foster a CI/CD and DevOps culture, increasing innovation and shortening time-to-market. This development environment includes a PaaS platform powered by the Murano application catalog and Sahara for data analysis.
For phase two, the IT team anticipates expanding the OpenStack platform to 500 nodes across two data centers and more than 10,000 applications by the end of 2018. Phase two will also add a Service-Oriented Architecture (SOA), microservices, and dynamic energy savings.
Embracing the strategy of starting small, thinking big, and scaling fast, phase three will extend to 3,000 nodes and over 10 million virtual machines and applications by the end of 2020. Phase three will also add an industry cloud and SaaS services that drive the prosperity of the retail business and show other retailers the processes and benefits of cloud platform innovation and offline-to-online digital transformation.
Interested in more information about how Bailian Group is making the most of OpenStack to solve its agility problems? Get the full case study.
Source: Mirantis

Scaling OpenStack With a Shared-Nothing Architecture

When it comes to pushing the boundaries of OpenStack scaling, there are basically two supported constructs: Cells and Regions. With Nova Cells, instance database records are divided across multiple “shards” (i.e., cells).  This division ensures that we can keep scaling our compute capacity without getting bogged down by the limitations of a single relational database cluster or message queue.  This is what we mean by a shared-nothing architecture: Scaling a distributed system without limits by removing single points of contention.
However, in OpenStack, cells currently only exist for Nova. If we want to extend this kind of paradigm to other OpenStack services such as Neutron, Ceilometer, and so on, then we have to look to OpenStack Regions. (You may already be looking at using regions for other reasons; for example, to optimize response times with proximity to various geographic localities.)
There are many ways of implementing regions in OpenStack. You will find online references that show the same Keystone & Horizon shared between multiple regions, with some variants throwing in Glance too, while others exclude Keystone. These are all variations in expressing the degree to which we want to share a set of common services between multiple cloud environments, versus keeping them separate. To depict the extremes (sharing everything, vs. sharing nothing):

[Figure: a spectrum of region architectures, from fully shared services at one extreme to shared-nothing at the other]
Shared services offer the convenience of a central source of truth (e.g., for user, tenant, and role data in the case of Keystone), a single point of entry (e.g., Keystone for auth or Horizon for the dashboard), and can be less trouble than deploying and managing distributed services.
On the other hand, with this paradigm we can’t horizontally scale the relational database behind Keystone, Horizon’s shared session cache, or other single points of contention that are created when centralizing one of the control plane services.
Beyond scaling itself though, let’s take a look at some other points of discussion between the two:
Flexibility
The shared-nothing paradigm offers the flexibility to support different design decisions and control plane optimizations for different environments, providing a contrast to the “one size fits all” control plane philosophy.
It also permits the operation of different releases of OpenStack in different environments.  For example, we can have a “legacy cloud” running an older/stable OpenStack, at the same time as an “agile cloud” running a more recent, less stable OpenStack release.
Upgrades & Updates
OpenStack has been increasingly modularized by projects that specialize in doing one specific thing (e.g., the Ironic project was a product of the former bare metal driver in Nova).  However, despite this modularization, there remains a tight coupling between most of these components, given their need to work together to make a fully functioning platform.
This tight coupling makes upgrades a hardship, as it often requires a big-bang approach (different components have to be upgraded at the same time because they won’t work properly in an incremental upgrade scenario or with mixed versions). Most upstream testing focuses on running the same versions of components together, not on mixing them (especially as we see more and more projects make their way into the big tent).
When we don’t share components between clouds, we open the possibility of performing rolling upgrades that are fully isolated and independent of other environments.  This localizes any disruptions from upgrades, updates, or other changes to one specific environment at a time, and ultimately allows for a much better controlled, fully automated, and lower risk change cycle.
Resiliency & Availability
When sharing components, we have to think about common modes of failure.  For example, even if we deploy Keystone for HA, if we have corruption in the database backend, or apply schema updates (e.g., for upgrades), or take the database offline for any other maintenance reasons, these will all cause outages for the service as a whole, and by extension all of your clouds that rely on this shared service.
Another example: suppose you are using PKI tokens and you need to change the SSL keys that encode and decode tokens. There is not really any graceful way of making this transition: you have to do a hard cut-over to the new key on all Keystone nodes at the same time, purge all cached signing files stored by every other OpenStack service, and revoke all tokens issued under the old key.
Also, denial of service attacks are both easier to perform and more impactful with shared infrastructure elements.
In contrast, the shared-nothing approach removes common modes of failure and provides full isolation of failure domains.  This is especially relevant for cloud native apps that deploy to multiple regions to achieve their SLAs, where failures are taken to be independent, and where the presence of common modes of failure can invalidate the underlying assumptions of this operational model.
Performance & Scaling
When distributing services, degraded or underperforming shards do not affect the performance or integrity of other shards.  For example, in times of high loading, or denial of service attacks (whether or not malicious in nature), the impacts of these events will be localized and not spread or impact other environments.
Also, faster API response times may be realized (since requests can be processed locally), as well as lower utilization of WAN resources. Even small latencies can add up (Keystone calls in particular should be kept as fast as possible to minimize the response time of the overall system).
Scaling out is a simple matter of adding more shards (regions). As mentioned previously, this also helps get around the fact that we have components that cannot otherwise be horizontally scaled, such as the Horizon shared session cache or the relational database backend.
Design Complexity
An important factor to consider with any deployment paradigm is: “How close is this to the reference upstream architecture?”  The closer we stay to that, the more we benefit from upstream testing, and the less we have to go out and develop our own testing for customizations and deviations from this standard.
Likewise from the operations side, the closer we stick to that reference architecture, the easier time we have with fault isolation, troubleshooting, and support.
If your organization is also doing some of its own OpenStack development, the same statement could be made about your developers: in effect, the closer your environment is to something that can be easily reproduced with DevStack, the lower the barrier to entry for your developers to onboard and contribute. And regardless of whether you are doing any OpenStack development, your dev and staging environments will be easier to set up and maintain for the same reasons.
The elegance of the shared-nothing approach is that it allows you to use this standard, reference deployment pattern, and simply repeat it multiple times.  It remains the same regardless of whether you deploy one or many.  It aims to commoditize the control plane and make it into something to be mass produced at economies of scale.
Challenges
There are two key challenges/prerequisites to realizing a shared-nothing deployment pattern.
The first challenge is the size of the control plane: It should be virtualized, containerized, or at least miniaturized in order to reduce the footprint and minimize overhead of having a control plane in each environment.  This additional layer may increase deployment complexity and brings its own set of challenges, but is becoming increasingly mainstream in the community (for example, see the TripleO and Kolla openstack projects, which are now part of the big tent).
The second challenge is the management and operational aspects of having multiple clouds.  Broadly speaking, you can classify the major areas of cloud management as follows:

Configuration Management (addressed by CM systems like Ansible, Puppet, etc.)
OpenStack resource lifecycle management. Specifically, we are interested in those resources that we need to manage as cloud providers, such as:

Public Images
Nova flavors, host aggregates, availability zones
Tenant quotas
User identities, projects, roles
Floating/public networks
Murano catalogs
VM resource pools for Trove or other aaS offerings

Coordinated multi-cloud resource lifecycle management is a promising possibility, because it permits us to get back some of what we sacrificed when we decentralized our deployment paradigm: the single source of truth with the master state of these resources. But rather than centralizing the entire service itself, we centralize the management of a set of distributed services. This is the key distinction in how we manage a set of shared-nothing deployments: we leverage the relatively backwards-compatible OpenStack APIs to do multi-cloud orchestration, instead of trying to synchronize database records with an underlying schema that is constantly changing and not backwards-compatible.
What we could envision, then, is a resource gateway that could be used for lifecycle management of OpenStack resources across multiple clouds. For example, if we want to push out a new public image to all of our clouds, that request could be sent to this gateway, which would then go and register that image in all our clouds (with the same image name, UUID, and metadata pushed to each Glance API endpoint). Or, as an extension, this could be policy driven; e.g., register this image only in those clouds in certain countries, or where certain regulations don’t apply.
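To make this concrete, here is a minimal sketch of what such a push might look like from the operator’s side, using the standard openstack CLI in a loop rather than a purpose-built gateway. The cloud names (cloud-east, cloud-west) are hypothetical entries in a clouds.yaml file, and the image file name is illustrative:

# Register the same public image against each cloud's Glance endpoint
for cloud in cloud-east cloud-west; do
  openstack --os-cloud "$cloud" image create \
    --public --disk-format qcow2 \
    --file ubuntu-16.04.qcow2 ubuntu-16.04
done

A real resource gateway would layer policy, retries, and state reconciliation on top of this same per-cloud API loop.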
In terms of CAP theory, we are loosening up consistency in favor of availability and partition tolerance.  The resources being managed could be said to be “eventually consistent”, which is reasonable given the types of resources being managed.
Also note that here, we only centralize those resources that cloud operators need to manage (like public images), while private image management is left to the user (as it would be in a public cloud setting). This also gives the end user the most control over what goes where; for example, they don’t have to worry about their image being replicated to some other location, which could increase the image’s exposure to security threats, or to some other country or jurisdiction where different data laws apply.
There have been a number of implementations designed to address this problem, all originating from the telco space. Kingbird (started by OPNFV; open source) and ORM (by AT&T, with plans to open source by Q4 2016–Q1 2017) can be classified as resource management gateways. Tricircle (Telco working group and OPNFV; open source) is another community project with similar aims.
It will be very interesting to see how these projects come along this year, and to what degree a community standard emerges to define the way we implement shared-nothing. It would also be great to get feedback from anyone else who is thinking along similar lines, or who knows of other implementations that I missed in the list above. Feel free to comment below!
Source: Mirantis

The future is all cloud and AI

Digital transformation has become an ongoing process rather than a one-time goal, with market-attuned companies continually on the hunt for the next big technology shift that gives them a competitive advantage.
That next big shift is the fusion of artificial intelligence and cloud computing, which promises to be both a source of innovation and a means to accelerate change. With pervasive AI and cognitive capabilities underpinned by the cloud, digital pioneers in today’s data-intensive world have the potential to harvest and build value from an unprecedented amount of data.
A new IBM study, “The cognitive advantage,” reveals that 65 percent of early adopters believe AI is very important to their organizational strategy and success. More than half say AI is essential to digital transformation. What’s more, they see it as a “must have” to remain competitive within the next few years.
As AI capabilities increase, so will the demand for cloud. IDC predicts the cognitive market will hit $31 billion by 2019. Coupled with Forrester’s prediction that the public cloud market will hit $146 billion in 2017, it’s clear that AI and cloud will be interdependent and essential.
AI enabled by cloud
The experiences of early adopters show that enabling technologies play a significant role in AI adoption. Pervasive AI is underpinned by pervasive cloud. Ninety percent of early adopters say cloud will play an important role in their AI initiatives within two years. Fifty-five percent of users prefer cloud-based services and leverage both software as a service (SaaS) and platform as a service (PaaS) to develop and deliver AI-infused solutions. They’re also making heavy use of open source technology to support these initiatives.

The AI advantage
The AI future is one in which automation and intelligence are pervasive, even if users aren’t aware of it. For early adopters, differentiation with AI comes from the strategic use of key capabilities such as machine learning, pattern recognition and intelligent robotics. Applied to large volumes of data, these capabilities unlock new value from internal and external data, both structured and unstructured.
Organizations already use AI in high-value use cases that range from customer engagement to the data center. In fact, IT-focused use cases top the list of AI priorities for early adopters, with 77 percent of advanced users adopting it for product and service innovation, followed by IT automation and business process automation. The IT flavor of priority use cases echoes the market trend to infuse AI in every application, platform and process across the business.
Though organizations can start with a pilot or target a larger transformational project, the end goal is not simply about tacking AI on as a new capability within the organization. Rather, with the addition of new use cases and new technologies, AI is becoming an essential ingredient of business strategy and technology execution.
Early adopters also reveal that IT and data analytics are often the first functions to kickstart AI initiatives in their organizations. As the center of the enterprise nervous system, IT is a natural test bed for AI pilots and a starting point for injecting intelligence into other business functions, including marketing, HR and customer service.
Learn more about the power of cognitive computing with IBM Cloud.
Source: Thoughts on Cloud

Solutions Architect- UK

Are you the type of engineer who can’t go to bed until you’ve solved that last-minute problem? Do you try to automate your code to make life easier? Do you go to technology Meetups to connect with your peers? Do you love sharing and explaining the technology that you have built? If any of these sound like you, then Mirantis has a home for you and your killer skills.

The Mirantis PreSales team is a group of individuals who collectively can solve any problem across storage, networking, or compute, with backgrounds ranging from service providers to leading technology vendors. We are constantly building and testing new solutions and presenting them to customers around the world. The Mirantis PreSales team is the tip of the spear for the entire organization, and we rely on you to build unparalleled relationships with our customers. You’ll be charged with bringing new offerings to market, supporting sales, dazzling customers, and delivering strategy and proof-of-concept engagements.

Primary Responsibilities:

Develop and deliver custom solutions and presentations, including advanced technical concepts, to key decision makers to address their business issues and needs
Match Mirantis platform subscriptions and services to the customer’s business and technical requirements
Ensure that customer deliverables for projects are delivered on time and with the highest quality level
Build long-term business relationships, become a trusted adviser within each named account, and maintain customer contact from project conception to completion
Acquire the technical and product skills required to meet customer requirements
Build and leverage strong OEM and partner relationships

Qualifications:

2-5 years of proven commercial experience, including roles in pre-sales, or 7+ years as a Senior/Principal-level engineer focusing on full-stack infrastructure design and architecture
Work with Sales to help qualify leads, respond to RFIs/RFPs, and act as the Technical Evangelist within large enterprise prospects
Hands-on experience with x86 servers and advanced Linux knowledge
Basic understanding of IP networking, routing, and network topologies
Hands-on experience with virtualization and exposure to cloud/OpenStack
Ability to quickly learn and understand new technologies used in the datacenter
Hands-on experience with two or more of the following: high availability clustering; virtualization technologies (KVM, ESX Server, Xen, AWS); Internet infrastructure services (Apache HTTPD, FTP servers, DNS, etc.); database technologies; directory services (OpenLDAP/eDirectory/FDS) a plus; software-defined storage such as Ceph and Swift; top-of-rack and core network equipment; enterprise SAN/NAS storage equipment; SDN technologies such as NSX, Contrail, or PLUMgrid
Ability to manage multiple issues and projects while maintaining a high level of detail
Experience managing customer relationships in a consulting, account management, or direct support context
Basic shell scripting skills, Python programming, and Linux software packaging experience a plus
Relevant IT certification at the engineer level (RHCE, LPIC-2, CNE, etc.) nice to have
3+ years or equivalent experience in Linux or UNIX system administration or development
Superior presentation skills
Ability to be productive in a globally distributed team through self-discipline and self-motivation

Other Requirements:

Ability to lift 22 kg
Up to 25% travel, with occasional trips to HQ or technology conferences
Source: Mirantis

Reference architectures can speed your route to cloud

One of the most difficult aspects of choosing a cloud road map is knowing where to start. In some respects, it’s also tough to know where to finish, even if there is already some cloud adoption in the organization.
It’s a journey that thousands of organizations have already traveled. If only there was a way to emulate their success and tap into their expertise (as well as avoid the mistakes they made). Well, good news: there is.
The IBM Cloud Architecture Center provides this expertise in the form of dozens of reference architectures, organized into a library under the headings cognitive, data and analytics, DevOps, e-commerce, hybrid, Internet of Things, microservices, and mobile.
An IBM reference architecture is a design blueprint based on clients’ real-life experiences implementing cloud projects; not just one or two, but hundreds or thousands. Just as a builder may use a set of blueprints to successfully build houses, reference architectures can be reused to successfully build working IT architectures.
It’s important that the reference architecture is at the right level of granularity. This is made possible by micro and macro patterns. Just as a builder’s blueprint for a house may be divided into macro patterns for the upstairs and downstairs, and then into micro patterns for particular rooms such as the kitchen and bathroom, a cloud reference architecture for a hybrid data warehouse may be broken down into macro patterns for data sources and data integration, and micro patterns around the deployment of actual products such as IBM Bluemix Data Connect. Micro patterns are often split into groups of use cases that can be broken down into the individual actions that get the job done; for example, “build kitchen cupboard” or “provision IBM dashDB.”
An important thing to remember about patterns is that they are repeatable sets of actions which achieve particular outcomes. The best patterns are those that have been developed over time, because as they update and evolve, they have more and more experience built into them and are more likely to be accurate, resulting in the desired outcome.
Using a good reference architecture therefore saves time. There is surety to the outcome and no blind alleys where work has to be restarted to correct a mistake. Using the patterns within the reference architecture will also save on costs, because organizations make investments only in the tools that are proven necessary, and the entire process is more predictable.
As well as architecture patterns, the IBM Cloud Architecture Center also provides sample code and demos to really get your cloud apps going.
Why reinvent the wheel when IBM reference architectures can get you motoring to the cloud?
Source: Thoughts on Cloud

How do I create a new Docker image for my application?

In our previous series, we looked at how to deploy Kubernetes and create a cluster. We also looked at how to deploy an application on the cluster and configure OpenStack instances so you can access it. Now we’re going to get deeper into Kubernetes development by looking at creating new Docker images so you can deploy your own applications and make them available to other people.
How Docker images work
The first thing that we need to understand is how Docker images themselves work.
The key to a Docker image is that it’s a layered file system. In other words, if you start out with an image that’s just the operating system (say, Ubuntu) and then add an application (say, Nginx), you’ll wind up with something like this:

[Figure: a stack of image layers, in which IMAGE1 is the base operating system, IMAGE2 adds the application, and IMAGE4 carries the changes made on layers 3 and 4]
As you can see, the difference between IMAGE1 and IMAGE2 is just the application itself, and then IMAGE4 has the changes made on layers 3 and 4. So in order to create an image, you are basically starting with a base image and defining the changes to it.
Now, I hear you asking, “But what if I want to start from scratch?” Well, let’s define “from scratch” for a minute. Chances are you mean you want to start with a clean operating system and go from there. In most cases there’s a base image for that, so you’re still starting with a base image. (If not, you can check out the instructions for creating a Docker base image.)
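One easy way to see these layers for yourself, assuming you have the ubuntu image pulled locally, is the docker history command, which lists an image’s layers along with the instruction that created each one:

# docker history ubuntu

Each row of the output corresponds to one layer of the kind shown in the diagram above.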
In general, there are two ways to create a new Docker image:

Create an image from an existing container: In this case, you start with an existing image, customize it with the changes you want, then build a new image from it.
Use a Dockerfile: In this case, you use a file of instructions, the Dockerfile, to specify the base image and the changes you want to make to it.

In this article, we’re going to look at both of those methods. Let’s start with creating a new image from an existing container.
Create from an existing container
In this example, we’re going to start with an image that includes the nginx web application server and PHP. To that, we’re going to add support for reading RSS files using an open source package called SimplePie. We’ll then make a new image out of the altered container.
Create the original container
The first thing we need to do is instantiate the original base image.

The very first step is to make sure that your system has Docker installed. If you followed our earlier series on running Kubernetes on OpenStack, you’ve already got this handled. If not, you can follow the instructions here to deploy Docker.
Next you’ll need to get the base image. In the case of this tutorial, that’s webdevops/php-nginx, which is part of the Docker Hub, so in order to “pull” it you’ll need to have a Docker Hub ID. If you don’t have one already, go to https://hub.docker.com and create a free account.
Go to the command line where you have Docker installed and log in to the Docker hub:
# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: nickchase
Password:
Login Succeeded

We’re going to start with the base image. Instantiate webdevops/php-nginx:
# docker run -dP webdevops/php-nginx
The -dP flag makes sure that the container runs in the background, and that the ports on which it listens are made available.
Make sure the container is running:
# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
1311034ca7dc        webdevops/php-nginx   “/opt/docker/bin/entr”   35 seconds ago      Up 34 seconds       0.0.0.0:32822->80/tcp, 0.0.0.0:32821->443/tcp, 0.0.0.0:32820->9000/tcp   small_bassi

A couple of notes here. First off, because we didn’t specify a particular name for the container, Docker assigned one. In this example, it’s small_bassi. Second, notice that there are 3 ports that are open: 80, 443, and 9000, and that they’ve been mapped to other ports (in this case 32822, 32821 and 32820, respectively; on your machine these ports will be different). This makes it possible for multiple containers to be “listening” on the same port on the same machine. So if we were to try and access a web page being hosted by this container, we’d do it by accessing:

http://localhost:32822
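Incidentally, rather than reading these mappings out of the docker ps output, you can ask Docker for them directly with the docker port command:

# docker port small_bassi

This lists each exposed container port along with the host port it maps to.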

So far, though, there aren’t any pages to access; let’s fix that.
Create a file on the container
In order for us to test this container, we need to create a sample PHP file. We’ll do that by logging into the container and creating a file.

Login to the container
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#
Using exec with the -it switch creates an interactive session for you to execute commands directly within the container. In this case, we’re executing /bin/bash, so we can do whatever else we need.
The document root for the nginx server in this container is at /app, so go ahead and create the /app/index.php file:
vi /app/index.php

Add a simple PHP routine to the file and save it:
<?php
for ($i = 0; $i < 10; $i++) {
    echo "Item number ".$i."\n";
}
?>

Now exit the container to go back to the main command line:
root@1311034ca7dc:/# exit

Now let’s test the page. To do that, execute a simple curl command:
# curl http://localhost:32822/index.php
Item number 0
Item number 1
Item number 2
Item number 3
Item number 4
Item number 5
Item number 6
Item number 7
Item number 8
Item number 9

Now that we know PHP is working, it’s time to go ahead and add RSS.
Make changes to the container
Now that PHP is working, we can go ahead and add RSS support using the SimplePie package. To do that, we’ll simply download it to the container and install it.

The first step is to log back into the container:
# docker exec -it small_bassi /bin/bash
root@1311034ca7dc:/#

Next go ahead and use curl to download the package, saving it as a zip file:
root@1311034ca7dc:/# curl https://codeload.github.com/simplepie/simplepie/zip/1.4.3 > simplepie1.4.3.zip

Now you need to install it.  To do that, unzip the package, create the appropriate directories, and copy the necessary files into them:
root@1311034ca7dc:/# unzip simplepie1.4.3.zip
root@1311034ca7dc:/# mkdir /app/php
root@1311034ca7dc:/# mkdir /app/cache
root@1311034ca7dc:/# mkdir /app/php/library
root@1311034ca7dc:/# cp -r s*/library/* /app/php/library/.
root@1311034ca7dc:/# cp s*/autoloader.php /app/php/.
root@1311034ca7dc:/# chmod 777 /app/cache

Now we just need a test page to make sure that it’s working. Create a new file in the /app directory:
root@1311034ca7dc:/# vi /app/rss.php

Now add the sample file. (This file is excerpted from the SimplePie website, but I’ve cut it down for brevity’s sake, since it’s not really the focus of what we’re doing. Please see the original version for comments, etc.)
<?php
require_once('php/autoloader.php');
$feed = new SimplePie();
$feed->set_feed_url("http://rss.cnn.com/rss/edition.rss");
$feed->init();
$feed->handle_content_type();
?>
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
<div class="header">
<h1><a href="<?php echo $feed->get_permalink(); ?>"><?php echo $feed->get_title(); ?></a></h1>
<p><?php echo $feed->get_description(); ?></p>
</div>
<?php foreach ($feed->get_items() as $item): ?>
<div class="item">
<h2><a href="<?php echo $item->get_permalink(); ?>"><?php echo $item->get_title(); ?></a></h2>
<p><?php echo $item->get_description(); ?></p>
<p><small>Posted on <?php echo $item->get_date('j F Y | g:i a'); ?></small></p>
</div>
<?php endforeach; ?>
</body>
</html>

Exit the container:
root@1311034ca7dc:/# exit

Now let’s make sure it’s working. Remember, we need to access the container on the alternate port (check docker ps to see what ports you need to use):
# curl http://localhost:32822/rss.php
<html>
<head><title>Sample SimplePie Page</title></head>
<body>
       <div class="header">
               <h1><a href="http://www.cnn.com/intl_index.html">CNN.com – RSS Channel – Intl Homepage – News</a></h1>
               <p>CNN.com delivers up-to-the-minute news and information on the latest top stories, weather, entertainment, politics and more.</p>
       </div>

Now that we have a working container, we can turn it into a new image.
Create the new image
Now that we have a working container, we want to turn it into an image and push it to the Docker Hub so we can use it. The name you’ll use for your container typically will have three parts:
[username]/[imagename]:[tags]
For example, my Docker Hub username is nickchase, so I am going to name version 1 of my new RSS-ified container
nickchase/rss-php-nginx:v1
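Incidentally, once an image exists, the docker tag command can give it an additional name without rebuilding anything; for example:

# docker tag nickchase/rss-php-nginx:v1 nickchase/rss-php-nginx:latest

Both names then point at the same underlying image.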

Now, if, when we first started talking about differences between layers, you started thinking about version control systems, you’re right. The first step in creating a new image is to commit the changes that we’ve already made, adding a message about the changes and specifying the author, as in:
docker commit -m "Message" -a "Author Name" [containername] [imagename]
So in my case, that will be:
# docker commit -m "Added RSS" -a "Nick Chase" small_bassi nickchase/rss-php-nginx:v1
sha256:148f1dbceb292b38b40ae6cb7f12f096acf95d85bb3ead40e07d6b1621ad529e

Next we want to go ahead and push the new image to the Docker Hub so we can use it:
# docker push nickchase/rss-php-nginx:v1
The push refers to a repository [docker.io/nickchase/rss-php-nginx]
69671563c949: Pushed
3e78222b8621: Pushed
5b33e5939134: Pushed
54798bfbf935: Pushed
b8c21f8faea9: Pushed

v1: digest: sha256:48da56a77fe4ecff4917121365d8e0ce615ebbdfe31f48a996255f5592894e2b size: 3667

Now if you list the images that are available, you should see it in the list:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nickchase/rss-php-nginx   v1                  148f1dbceb29        11 minutes ago      677 MB
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now let’s go ahead and test it. We’ll start by stopping and removing the original container, so we can remove the local copy of the image:
# docker stop small_bassi
# docker rm small_bassi

Now we can remove the image itself:
# docker rmi nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx:v1
Untagged: nickchase/rss-php-nginx@sha256:0a33c7a25a6d2db4b82517b039e9e21a77e5e2262206fdcac8b96f5afa64d96c
Deleted: sha256:208c4fc237bb6b2d3ef8fa16a78e105d80d00d75fe0792e1dcc77aa0835455e3
Deleted: sha256:d7de4d9c00136e2852c65e228944a3dea3712a4e7bcb477eb7393cd309be179b

If you run docker images again, you’ll see that it’s gone:
# docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
nginx                     latest              abf312888d13        3 days ago          181.5 MB
webdevops/php-nginx       latest              93037e4c8998        3 days ago          675.4 MB
ubuntu                    latest              e4415b714b62        2 weeks ago         128.1 MB
hello-world               latest              c54a2cc56cbb        5 months ago        1.848 kB

Now if you create a new container based on this image, you will see it get downloaded from the Docker Hub:
# docker run -dP nickchase/rss-php-nginx:v1

Finally, test the new container by getting the new port…
# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                    NAMES
13a423324d80        nickchase/rss-php-nginx:v1   “/opt/docker/bin/entr”   6 seconds ago       Up 5 seconds        0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp   goofy_brahmagupta

… and accessing the rss.php file.
curl http://localhost:32825/rss.php

You should see the same output as before.
Use a Dockerfile
Manually creating a new image from an existing container gives you a lot of control, but it does have one downside: if the base container gets updated, you’re not necessarily going to get the benefits of those changes.
For example, suppose I wanted a container that always takes the latest version of the Ubuntu operating system and builds on that? The previous method doesn’t give us that advantage.
Instead, we can use a Dockerfile, which enables us to specify a particular version of a base image, or to specify that we always want the latest version.
For example, let’s say we want to create a version of the rss-php-nginx container that starts with v1 but serves on port 88 (rather than the traditional 80). To do that, we basically want to perform three steps:

Start with the desired version of the base image.
Tell Nginx to listen on port 88 rather than 80.
Let Docker know that the container listens on port 88.

We’ll do that by creating a local context, downloading a local copy of the configuration file, updating it, and creating a Dockerfile that includes instructions for building the new container.
Let’s get that set up.

Create a working directory in which to build your new container.  What you call it is completely up to you. I called mine k8stutorial.
From the command line, in the local context, start by instantiating the image so we have something to work from:
# docker run -dP nickchase/rss-php-nginx:v1

Now get a copy of the existing vhost.conf file. In this particular container, you can find it at /opt/docker/etc/nginx/vhost.conf.  
# docker cp amazing_minsky:/opt/docker/etc/nginx/vhost.conf .
Note that I’ve started a new container, named amazing_minsky, to replace small_bassi. At this point you should have a copy of vhost.conf in your local directory; in my case, it would be ~/k8stutorial/vhost.conf.
You now have a local copy of the vhost.conf file.  Using a text editor, open the file and specify that nginx should be listening on port 88 rather than port 80:
server {
   listen   88 default_server;
   listen 8000 default_server;
   server_name  _ *.vm docker;

Next we want to go ahead and create the Dockerfile.  You can do this in any text editor.  The file, which should be called Dockerfile, should start by specifying the base image:
FROM nickchase/rss-php-nginx:v1

Any container that is instantiated from this image is going to be listening on port 80, so we want to go ahead and overwrite that Nginx config file with the one we’ve edited:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf

Finally, we need to tell Docker that the container listens on port 88:
FROM nickchase/rss-php-nginx:v1
COPY vhost.conf /opt/docker/etc/nginx/vhost.conf
EXPOSE 88

Now we need to build the actual image. To do that, we&8217;ll use the docker build command:
# docker build -t nickchase/rss-php-nginx:v2 .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM nickchase/rss-php-nginx:v1
 ---> 208c4fc237bb
Step 2 : EXPOSE 88
 ---> Running in 23408def6214
 ---> 93a43c3df834
Removing intermediate container 23408def6214
Successfully built 93a43c3df834
Notice that we’ve specified the image name, along with a new tag (you can also create a completely new image) and the directory in which to find the Dockerfile and any supporting files.
Finally, push the new image to the hub:
# docker push nickchase/rss-php-nginx:v2

Test out your new image by instantiating it and pulling up the test page.
# docker run -dP nickchase/rss-php-nginx:v2
root@kubeclient:/home/ubuntu/tutorial# docker ps
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                                                                           NAMES
04f4b384e8e2        nickchase/rss-php-nginx:v2   “/opt/docker/bin/entr”   8 seconds ago       Up 7 seconds        0.0.0.0:32829->80/tcp, 0.0.0.0:32828->88/tcp, 0.0.0.0:32827->443/tcp, 0.0.0.0:32826->9000/tcp   goofy_brahmagupta
13a423324d80        nickchase/rss-php-nginx:v1   “/opt/docker/bin/entr”   12 minutes ago      Up 12 minutes       0.0.0.0:32825->80/tcp, 0.0.0.0:32824->443/tcp, 0.0.0.0:32823->9000/tcp                          amazing_minsky

Notice that you now have a mapped port for port 88 you can call:
curl http://localhost:32828/rss.php
Other things you can do with Dockerfile
Docker defines a whole list of things you can do with a Dockerfile, such as:

.dockerignore
FROM
MAINTAINER
RUN
CMD
EXPOSE
ENV
COPY
ENTRYPOINT
VOLUME
USER
WORKDIR
ARG
ONBUILD
STOPSIGNAL
LABEL

As you can see, there’s quite a bit of flexibility here. You can see the documentation for more information, and wsargent has published a good Dockerfile cheat sheet.
Moving forward
As you can see, creating new Docker images that can be used by you or by other developers is pretty straightforward. You have the option to manually create and commit changes, or to script them using a Dockerfile.
In our next tutorial, we’ll look at using YAML to manage these containers with Kubernetes.
Source: Mirantis

Why would a cloud expert be interested in a new DevOps tool?

I have been working on cloud infrastructures since before the hype over cloud computing.
I have watched as the cloud matured from just being a way to get cheap Linux instances into a set of services that are literally changing the way we deliver IT forever. I have also witnessed the challenges enterprises face when moving their apps to the cloud and I am always looking for ways to make the migration easier.
As someone who has devoted the last 20 years to helping people create an increasingly adaptable IT infrastructure through new technologies, I can say I am fired up about our new Application Discovery and Delivery Intelligence (ADDI) product. You might wonder why a dyed-in-the-wool IT guy would be so chuffed about a new DevOps tool, but the answer is simple: this DevOps tool is precisely what enterprises need to find the powerful building blocks for their new, born-in-the-cloud applications.
Taking big apps to the cloud
Most IT shops have huge, monolithic applications. These applications have been refined over decades to provide functionality that gives their businesses an advantage over competitors.
While these applications have served businesses for decades, they don’t offer the flexibility people have come to expect from applications born in the cloud. Many companies want to rewrite these applications to make them compatible with born-in-the-cloud applications, but such rewrites are expensive, take time and introduce risk.
What companies need right now is the ability to break those huge, monolithic applications into their component parts. Those parts can be reused by born-in-the-cloud applications without having to go through the time, expense and risk of rewriting. This is a tremendous opportunity for corporations to take their business advantage to the cloud. The real trick now is to figure out which components in those monolithic applications are ripe for the picking to become the cloud services that businesses need.
Why ADDI?
This is where ADDI comes into play. This new tool helps businesses find the components that will be the cloud services of the future. It identifies the services which can easily become cloud services. It shows the dependencies that allow developers to understand how data flows and how data is managed through an application. It provides businesses with the intelligence needed to find the core services buried in their current applications so they can be unleashed for consumption by born-on-the-cloud applications.
Now you see why this cloud guy is really pumped about having a tool that enables businesses to build a path from their traditional IT environments to modern cloud environments without dismantling the current ones or rewriting everything.
ADDI provides enterprises with the ability to bring traditional IT assets into the 21st century world of cloud. This DevOps tool makes it easy to create new, enterprise-capable cloud applications in days instead of months. It’s a DevOps product that can literally change the way businesses consume their IT services; that’s what excites this cloud guy most.
Learn more about IBM Application Discovery and Delivery Intelligence.
Source: Thoughts on Cloud