CI Engineer

The post CI Engineer appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis is the leading global provider of software and services for OpenStack(TM), a massively scalable and feature-rich open source cloud operating system. OpenStack is used by hundreds of companies, including AT&T, Cisco, Symantec, NASA, Dell, PayPal and many more. Mirantis has more experience delivering OpenStack clouds to more customers than any other company in the world. We build the infrastructure that makes OpenStack work. We are proud to serve on the OpenStack Foundation Board and to be one of the top contributors to OpenStack.
Mirantis is looking for a qualified candidate with experience in continuous integration, release engineering, or quality assurance to join our CI Services team, which designs and implements CI/CD pipelines to build and test product artifacts and deliverables of the Mirantis OpenStack distribution.
Responsibilities:

Design and implement CI/CD pipelines
Develop a unified CI framework based on existing tools (Zuul, Jenkins Job Builder, Fabric, Gerrit, etc.)
Define and manage the test environments required for different types of automated tests
Drive cross-team communications to streamline and unify build and test processes
Track and optimize hardware utilization by CI/CD pipelines
Provide and maintain specifications and documentation for CI systems
Provide support for users of CI systems (developers and QA engineers)
Produce and deliver technical presentations at internal knowledge transfer sessions, public workshops and conferences
Participate in the upstream OpenStack community, working together with the OpenStack Infra team on common CI/CD tools and processes

Required Skills:

Linux system administration: package management, services administration, networking, KVM-based virtualization
Scripting with Bash and Python
Experience with the DevOps configuration management methodology and tools (Puppet, Ansible)
Ability to describe and document systems design decisions
Familiarity with development workflows: feature design, release cycle, code-review practices
English, both written and spoken

Will Be a Plus:

Knowledge of CI tools and frameworks (Jenkins, Buildbot, etc.)
Release engineering experience: branching, versioning, managing security updates
Understanding of the release engineering and QA practices of major Linux distributions
Experience in test design and automation
Experience in project management
Involvement in major open source communities (developer, package maintainer, etc.)

What We Offer:

Challenging tasks, providing room for creativity and initiative
Work in a highly distributed international team
Work in the open source community, contributing patches upstream
Opportunities for career growth and relocation
Business trips for meetups and conferences, including OpenStack Summits
Strong benefits plan
Medical insurance
Quelle: Mirantis

Running a MariaDB Galera Cluster on OpenShift

In this post, we’ll show you how you can run a MariaDB Galera Cluster on top of the OpenShift Container Platform. To make this possible, we’re using a feature called PetSets, which is currently in Tech Preview status in OpenShift. For those who don’t know it, MariaDB Galera Cluster is a multi-master solution for MariaDB/MySQL that provides an easy-to-use high-availability solution, allowing zero-downtime maintenance and scalability.
Quelle: OpenShift

OpenShift Commons Briefing #56: Implementing NGINX Microservice Architecture with OpenShift

In this briefing, NGINX’s Chief Architect, Chris Stetson, gave an excellent introduction to microservices architectures. Drawing on his past experiences to illustrate the shift from monolith to microservices, Chris then dove into the networking problems that arise with these new architectural models, such as service discovery, load balancing, and encryption.
Quelle: OpenShift

Software Engineer (.Net)

The post Software Engineer (.Net) appeared first on Mirantis | The Pure Play OpenStack Company.
Agilent Technologies is a leader in life sciences, diagnostics and applied chemical markets. The company provides laboratories worldwide with instruments, services, consumables, applications and expertise, enabling customers to gain the insights they seek. Agilent focuses its expertise on six key markets: food, environmental and forensics, pharmaceutical, diagnostics, chemical and energy, and research.
The purpose of Agilent Research Laboratories is to power Agilent’s growth through breakthrough science and technology. To complement their product line R&D, Agilent Labs looks beyond the evolution of current products and platforms to create the technologies that will underlie tomorrow’s breakthroughs, enabling Agilent customers to answer questions at the leading edge of life science, diagnostics and the applied markets. For more details about Agilent Technologies, please see: http://www.agilent.com/about/companyinfo/index.html
Today Mirantis and Agilent Technologies are looking for an experienced Software Engineer/Senior Software Engineer to join our distributed team (we have engineers in California, Russia, and Ukraine). Our development center works on different projects for Agilent’s Life Science department, such as:

OpenLAB Shared Services, an integration platform for different types of software for chemical analysis and chemical data processing
Content management systems for storing scientific data
CDS Installer for deployment, upgrade and configuration of OpenLAB Chromatography Data Systems in distributed laboratories

For more details about Agilent products, please see: http://www.agilent.com/en-us/products/software-informatics/openlab-software-suite
Responsibilities:

Design and develop key components of different Agilent products (using C#, .NET, WPF, WCF, ASP.NET and various databases)
Work closely with Agilent employees from the USA and Europe in a collaborative development environment
Introduce and maintain best development lifecycle practices, such as code review, continuous integration, and automated tests
Troubleshoot problems as needed in the QA and production environments

Requirements:

2+ years of experience in .NET development and testing
Clear understanding of the .NET framework platform
Excellence in software engineering practices and coding
Strong background in object-oriented design, data structures, and algorithms
Understanding of database technologies
Experience using build systems (Maven, MSBuild, NAnt, etc.)
Proficiency in written and spoken English
ASP.NET or WPF experience would be a plus

Desired:

Experience using source control systems (Git)
Experience with issue-tracking systems (Jira, TeamTrack)

We offer:

A chance to contribute to Silicon Valley software development
Modern office, comfortable work environment, the best tools
Competitive salary (after interview)
Career and professional growth
20 working days of paid vacation; 100% paid sick leave
Medical insurance
Benefit program
Flexible schedule
Friendly atmosphere
Quelle: Mirantis

Cloud trends: Solving infrastructure management in the transition to strategic IT

Has your department started its shift toward more strategic IT? If you have, you’ve probably also faced the question, “How will I manage my business infrastructure to optimize performance?”
Some of your peers have found an effective solution: managed cloud services.
In a recent Frost & Sullivan survey, 52 percent of IT decision makers said that a lack of in-house expertise hampers their cloud implementations. Of those using cloud today, 11 percent said finding qualified staff is an issue, and 91 percent are seeking outside assistance to deploy their clouds. Of those seeking help, some turn to managed service providers, and rightly so. Managed cloud services can offer big benefits as you balance strategic projects with the need to manage infrastructure.
What are managed cloud services?
Managed cloud services create a partnership between your business and a cloud service provider that extends beyond service provisioning.
In a managed services relationship, the provider contributes cloud technology, infrastructure and expertise while you retain control and oversight of application performance. It’s a best-of-both-worlds scenario. You gain expert assistance to deploy and manage your infrastructure without having to worry about using internal resources to physically deploy, manage and optimize it. The beauty is that you don’t relinquish workload control; you work with the provider to ensure that the infrastructure supports the performance you need to ensure optimal service operation.
For many CIOs, the benefits of managed cloud services are clear. In an atmosphere where your focus is on fast response to business needs, managed service providers offer experts who optimize your infrastructure to ensure application performance with high levels of security and compliance and guaranteed service-level agreements (SLAs).
Above all, businesses are seeking managed cloud providers that will work to align their services with business outcomes, and provide SLAs that guarantee it. Need infrastructure that bursts to accommodate a specific throughput to enable online sales? Your managed service provider should be able to accommodate such a request. If not, you should be looking elsewhere.
What should you look for in providers?
Frost & Sullivan has identified seven key criteria that you should look for when considering managed cloud providers who will meet your business needs, today and in the future. These include:

Core expertise in infrastructure and workload
Support for multiple hardware types and hypervisors
Hybrid management expertise
Customizable services and SLAs
Robust security features
Compliance assurance and related reporting
A robust portfolio of managed services

As your IT department shifts from the role of asset manager to strategic IT business driver, it’s critical to focus your resources wisely. For many CIOs, this means turning to managed cloud service experts. Managed cloud partners handle routine management tasks through experts with the know-how to tailor configurations to ensure optimal function, giving your business the best platform on which to succeed.
For more information on how to use managed cloud services as you shift to a strategic IT model, read our whitepaper titled “Cloud-Based Managed Services: Tips for Selecting a Provider that Can Help You Re-Tool Your IT Department.”
The post Cloud trends: Solving infrastructure management in the transition to strategic IT appeared first on news.
Quelle: Thoughts on Cloud

The Dollars and Cents of How to Consume a Private Cloud

The post The Dollars and Cents of How to Consume a Private Cloud appeared first on Mirantis | The Pure Play OpenStack Company.
In my blog post “How does the world consume private clouds?”, we reviewed different ways to consume private cloud software:

Do-it-yourself (DIY)
Software distribution from a vendor
Managed service (your hardware & datacenter, software managed by a vendor)
Managed & hosted service (hardware, software, datacenter all outsourced)

Let’s look at the economics of the first three alternatives. Rather than an absolute total-cost-of-ownership (TCO) analysis, we will focus on a relative comparison in which line items that are identical in all three scenarios (e.g., hardware costs) are removed.
Of course, cost is not the only criterion in choosing your consumption model; there are other criteria, such as the ability to recruit OpenStack talent, long-term strategic interests, and required customizations, but those topics are not covered in this blog.
DIY
This initially appears to be a no-brainer option. After all, isn’t open source free software? Doesn’t one just download, install, and get on their merry way? Unfortunately not. Open source software provides numerous benefits, such as higher innovation velocity, the ability to influence direction and functionality, elimination of vendor lock-in, and short-circuiting standards by defining APIs, drivers and plugins. But “free” is not among them, mainly because open source projects are not finished products. Below are typical costs incurred in a DIY scenario, based on the numerous customers we have had the opportunity to work with who initially tried DIY OpenStack.

Cost
Representative Breakdown

Fixed size engineering team of 13 engineers
(Size independent of cloud scale)
5 Upstream engineers (to fix bugs, work on features, create reference architecture)
5 QA engineers (to package, QA & do interop testing)
3 Lifecycle tooling & monitoring engineers

Fixed size IT/OPS team of 9 engineers
(Size independent of cloud scale)
1 IT architect (to architect, do capacity planning)
1 L3 engineer (troubleshooting)
2 L2 engineers (to deploy, update, upgrade, and do ongoing management)
5 L1 engineers (to monitor, look at basic issues, respond to tenant requests)

Variable size engineering team of 1.1 person per 100 nodes and 1.1 person per 1PB storage
(Size depends on cloud scale; kicks in only past the fixed-size minimums, so there is no double counting)
Compute:
0.3 IT/OPS architects per 100 nodes
0.1 L3 IT/OPS engineer per 100 nodes
0.3 L2 IT/OPS engineer per 100 nodes
0.4 L1 IT/OPS engineer per 100 nodes
Storage:
0.3 IT/OPS architects per 1PB storage
0.1 L3 IT/OPS engineer per 1PB storage
0.3 L2 IT/OPS engineer per 1 PB storage
0.4 L1 IT/OPS engineer per 1 PB storage

Dev/ Test cloud
$50,000 depreciated across 3 years required to test updates, upgrades, configuration changes etc.

Loss of availability
A DIY cloud typically has a lower availability than alternatives. Once you calculate the number of minutes of cloud downtime per year, you can multiply this by the margin loss per minute.
E.g., 98% cloud availability at $50 of margin loss per minute of downtime equates to a loss of $525,600 per year.

Production delays
A DIY cloud typically takes longer to implement, delaying a production deployment.
E.g., a 6-month delay costing the business $50,000 per month equates to a one-time loss of $300,000.
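The availability and delay figures in the table above follow from simple arithmetic. As a minimal sketch (the function names are ours, not from the post):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_loss_per_year(availability, loss_per_minute):
    """Yearly margin loss from cloud downtime at a given availability."""
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    return downtime_minutes * loss_per_minute

def delay_loss(months_delayed, loss_per_month):
    """One-time loss from a delayed production deployment."""
    return months_delayed * loss_per_month

# The DIY example above: 98% availability, $50 of margin loss per minute.
print(round(downtime_loss_per_year(0.98, 50)))  # 525600
# The production-delay example: 6 months at $50,000 per month.
print(delay_loss(6, 50_000))  # 300000
```

The same `downtime_loss_per_year` formula applies to the distro and managed scenarios below; only the availability input changes.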

 
Software Distribution from a Vendor
In this consumption model, the engineering burden is shifted to the vendor, but the IT/OPS task remains with the user. The costs are as follows:

Cost
Representative Breakdown

Fixed size IT/OPS team of 3.5 engineers
(Size independent of cloud scale, the team is much smaller than in the DIY case because there is a vendor to take support calls)
0.5 IT architect (to architect, do capacity planning)
1 L2 engineers (to deploy, update, upgrade, ongoing management)
2 L1 engineers (to monitor, look at basic issues, respond to tenant requests)

Variable size engineering team of 1 person per 100 nodes and 1 person per 1PB storage
(Size varies depending on cloud scale; kicks in only past the fixed-size minimums, so there is no double counting)
Compute:
0.3 IT/OPS architects per 100 nodes
0.3 L2 IT/OPS engineer per 100 nodes
0.4 L1 IT/OPS engineer per 100 nodes
Storage:
0.3 IT/OPS architects per 1PB storage
0.3 L2 IT/OPS engineer per 1 PB storage
0.4 L1 IT/OPS engineer per 1 PB storage

Dev/Test cloud
$50,000 depreciated across 3 years required to test updates, upgrades, configuration changes etc.

Loss of availability
A cloud based on a distro typically has better availability than DIY. Once you calculate the number of minutes of cloud downtime per year, you can multiply this by the margin loss per minute.
E.g., 99.5% cloud availability at $50 of margin loss per minute of downtime equates to a loss of $131,400 per year.

Software support costs
In lieu of the internal engineering team, in this scenario, there is a support cost payable to the vendor.

 
Managed Service from a Vendor
Here both the engineering and the IT/OPS burden for the software are shifted to the vendor. The costs are as follows:

Cost
Representative Breakdown

Loss of availability
A managed cloud typically offers the highest availability of the three options. Once you calculate the number of minutes of cloud downtime per year, you can multiply this by the margin loss per minute.
E.g., 99.9% cloud availability at $50 of margin loss per minute of downtime equates to a loss of $26,280 per year.

Managed services costs
In lieu of the internal engineering & IT/OPS team, in this scenario, there is a managed service fee payable to the vendor.

 
The Bottom Line
Here are the results of three scenarios we ran:

Relative Costs (4-year timeline)

Initial number of VMs      3,000    20,000   60,000
DIY cost/VM               $1,448      $249     $118
Distro cost/VM              $614      $179     $124
Managed cloud cost/VM       $298      $189     $149

The net-net is that for small clouds, managed is a very attractive option. For mid-size clouds, a distribution may be more cost-effective. For the largest clouds, DIY might be the least expensive option, assuming the IT team can keep availability reasonably high (98.5% or more).
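The crossover points described above can be read straight off the table. A few lines of Python make them explicit (the figures are the post's; the code and names like `cheapest_model` are ours):

```python
# Relative cost per VM over 4 years, keyed by initial VM count (from the table above).
cost_per_vm = {
    "DIY":     {3_000: 1_448, 20_000: 249, 60_000: 118},
    "Distro":  {3_000:   614, 20_000: 179, 60_000: 124},
    "Managed": {3_000:   298, 20_000: 189, 60_000: 149},
}

def cheapest_model(num_vms):
    """Return the consumption model with the lowest cost/VM at this scale."""
    return min(cost_per_vm, key=lambda model: cost_per_vm[model][num_vms])

for scale in (3_000, 20_000, 60_000):
    print(scale, cheapest_model(scale))
# 3000 Managed
# 20000 Distro
# 60000 DIY
```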
Quelle: Mirantis

Cloud Solutions Architect

The post Cloud Solutions Architect appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis helps top enterprises build and manage private cloud infrastructure using OpenStack and related open source technologies. The company is the top contributor of open source code to the OpenStack project and follows a build-operate-transfer model to deliver its OpenStack distribution and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest OpenStack clouds in the world. Its customers include iconic brands like AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen.
As the Solutions Architect you will be working with product managers, project managers, engineering teams, customers and sales teams to develop cloud solutions based on the OpenStack platform. You will be considering the business, technical and operational aspects of our customers’ cloud strategy. The work will span networking, cloud infrastructure (IaaS), applications (SaaS) and platforms (PaaS). You will be engaged in large projects, working through analysis, design and implementation with project teams to deliver viable cloud solutions to Mirantis customers.
Responsibilities:

Work with the customer to understand project goals, define requirements and scope
Analyze gaps in existing data center management infrastructure
Create designs for OpenStack-based cloud implementations
Propose an integration approach with existing services and applications
Perform capacity planning for the cloud and provide hardware recommendations
Work with engineering teams to validate implementation details
Create cloud solution blueprints and white papers
Oversee project execution, usually carried out by remote engineering teams
Develop personal relationships with executives, engineering teams and customers

Your profile:

B.Sc. or higher in Computer Science (or similar)
Strong understanding of the domain, key players and technologies
Successfully developed distributed systems and reliable, fault-tolerant software
Solid software development experience using Agile methodologies
Experience building high-availability (HA) production-grade solutions in virtualized environments
Hands-on experience with open source frameworks (e.g., Cloud Foundry, Cloudify, OpenShift, Hadoop)
Experience with messaging solutions (e.g., ActiveMQ, RabbitMQ, ZeroMQ)
Expert knowledge of MySQL and/or PostgreSQL in HA deployments
Experience working with Active Directory (AD) deployments for business-critical applications
Practical experience with Linux/Unix system administration and troubleshooting
Strong analytic and problem-solving skills (e.g., capacity planning)
Good interpersonal/communication skills

What We Offer:

High-energy atmosphere of a young company
Build large-scale, innovative systems for mission-critical use
Collaborate with exceptionally passionate, talented and engaging colleagues
Competitive compensation package
Lots of freedom for creativity and personal growth
Quelle: Mirantis

VMware and IBM: Accelerating enterprise hybrid cloud

Earlier this year, IBM and VMware jointly set out to tackle the challenge of extending existing VMware workloads from on-premises environments to the cloud without incurring the cost and risk associated with retooling operations, rearchitecting applications and redesigning security policies.
This collaboration is significant because it gives customers flexibility and transparency when moving workloads into the public cloud. With a straightforward, simple approach, users can easily move and implement enterprise applications and disaster recovery solutions across a global network of cloud data centers. Enterprises already familiar with VMware can extend existing on-premises VMware infrastructure into IBM Cloud with simplified, monthly pricing. This makes transitioning into a hybrid model faster and easier because no changes are needed to the underlying workloads, and enterprises can directly leverage their existing skills and tooling.
Enterprise adoption of VMware environments on IBM Cloud
Due to these strengths, the IBM and VMware partnership has taken off. More than 1,000 joint customers to date are moving their VMware environments to IBM Cloud, including Marriott International, Clarient Global LLC and Monitise, to name a few. Enterprises require choice, and VMware is delivering on that promise. Together, VMware and IBM are demonstrating real business results across a broad set of enterprises covering all industries.
So why IBM for VMware workloads? IBM is first to provide VMware Cloud Foundation as a fully-automated service as part of this joint partnership and one of the first to provide VMware vCloud Availability for vCloud Director. With VMware Cloud Foundation on IBM Cloud, organizations get access to a complete VMware SDDC inclusive of compute, storage and network virtualization and software lifecycle automation running in the IBM Cloud. VMware vCloud Availability for vCloud Director offers simple, cost-effective, cloud-based disaster recovery services that seamlessly support their customers’ vSphere environments.
IBM has mobilized 4,000 global service consultants with the expertise required to help VMware customers leverage IBM Cloud and provide a full portfolio of lifecycle services including planning, architecture, migration and end-to-end management. IBM also has a broad ecosystem of partners that support the VMware solutions on IBM Cloud, including HyTrust, Veeam and others.
Our partnership builds upon the strengths of both companies. VMware is relied upon by virtually every large enterprise today, including 100 percent of the Fortune 100. Now these organizations can easily and securely extend these workloads into IBM’s global public cloud, made up of over 50 cloud data centers, all connected by a safe, reliable, high-speed network. IBM is one of the largest operators of VMware workloads, and is recognized as a hybrid cloud leader by several analyst firms. Finally, once on the IBM Cloud, enterprises can take advantage of IBM’s full catalog of cloud services, including Bluemix application development services, DevOps, object storage, bare metal, databases, analytics, Watson, blockchain and the Internet of Things.
This journey is still in its early stages, but the possibilities are endless. What is clear is that the IBM and VMware partnership is helping clients more easily adopt hybrid cloud and derive its benefits in a way that is unique in the industry today.
We’re just getting started, so stay tuned. To find out more, visit the IBM Cloud for VMware Solutions web site.
The post VMware and IBM: Accelerating enterprise hybrid cloud appeared first on news.
Quelle: Thoughts on Cloud

Why Bluebee chose IBM as a strategic partner

It is essential for startups to build strategic partnerships with key players in the market.
For a relatively small outfit like my company, Bluebee, larger key players do not necessarily come to mind as logical partners for success. Bluebee offers cloud-based services for genomics analytics on a global scale for research labs, clinical users, diagnostics companies, and next-generation sequencing (NGS) service providers. Data security, compliance and local regulations are critical to our business, given our background in financial technology and high-value payments.
The requirements for our technological architecture to support a global service were crystal clear: we needed the “real cloud,” a true, global cloud solution in which multiple data centers in several remote geographies virtually act as one pool of resources and are capable of elastically provisioning bare metal infrastructure. Today, IBM SoftLayer is the only provider capable of providing us with this service.
Here’s why:
A true cloud partnership
On our journey with IBM SoftLayer, we engaged in the early stages of product design with IBM Power Systems using IBM POWER8 technology. Our team members traveled to the IBM Austin labs to jointly research how to maximize throughput within our infrastructure. Very quickly, we realized that hosting our genomics analytics workloads on POWER8 in SoftLayer garnered the fastest results. We observed a substantial increase in performance when using POWER8 and FPGAs (field-programmable gate arrays) over x86.
The IBM partnership provided us with the exceptional computing capabilities our business demands for our customers’ data-intensive workloads.
Flexibility to support genomics demands
Partnering with IBM also allowed us to become an early adopter of the FASP (Aspera) protocol in the genomics domain. Together, we now collaborate to provide our high-volume customers with a fast, flexible, highly scalable, on-demand software solution that overcomes the challenges of large genomic data transfer issues.
The largest Bluebee client samples are from cancer centers, and are typically upwards of 360 gigabytes per patient. The integration of Aspera’s patented FASP transfer technology now reduces the critical end-to-end turnaround time of computational data analysis while ensuring high-speed and reliable data transfers. This is particularly critical as Bluebee offers high-performance NGS data analysis solutions in 22 data centers across the globe. The combined technologies enable faster and more efficient patient diagnosis and treatment decisions.
Our next combined venture was with IBM Cloud Object Storage, which has a unique offering of resilient object storage across data centers. We again met an IBM agile team, ready to support and help us with a very competitive, long-term storage solution for our customers. In less than a month, we had jointly designed a solution that met our clients’ needs.
Even as a relatively small startup, our search for the right partner ended with a large, key player in the industry. From the beginning, we were focused on an unwavering goal: a global, competitive, state-of-the-art and secure solution for genomics analytics. Our aspirations and the ability of IBM to provide and support a multifaceted solution allowed us to form a truly strategic, successful partnership.
Learn more about infrastructure as a service.
The post Why Bluebee chose IBM as a strategic partner appeared first on news.
Quelle: Thoughts on Cloud