Uber Rallies Drivers Against Teamster Unionization Efforts With Podcasts And Pizza Parties

As Uber scrambles to address an internal scandal over employee allegations of systemic sexism, it’s facing another, increasingly heated labor battle in Seattle — a union drive led by the Teamsters.

The ride-hail giant has opposed the unionization movement in Seattle since it began in late 2015. While the Teamsters worked to win the city approvals necessary to represent drivers, Uber ramped up a sprawling phalanx of anti-union efforts, including everything from in-app notifications and text messages to in-person seminars, collective bargaining pizza parties, and Teamster-critical podcasts.

Just this week, an alert sent via Uber’s driver app warned drivers that Teamsters had been granted “approval to begin pressuring drivers for support” and directed them to information on how they might “protect [their] freedom.”

An in-app message sent to drivers from Uber warning them about the risk of joining the Teamsters union.

The gist of Uber's argument against the Teamsters: The organization isn’t qualified to represent Uber driver interests because of past efforts to cap the number of ride-hail drivers on the streets in Seattle and otherwise hamstring drivers.

Caleb Weaver, who runs public affairs for Uber in Washington, says the company has good reason to believe this. “Right now as an independent driver, drivers have the ability to decide when, where, and how much they want to drive,” he said. “If there are a series of new requirements imposed on the conditions of work, including things like minimum hours of driving, there will be a loss of control for drivers.”

Unions are typically reserved for employees, and Uber drivers are not employees of the company, but independent contractors; the Seattle collective bargaining ordinance is therefore unique, and the yearlong process of hammering out how a union of independent contractors would actually work has been understandably fraught.

Uber, which sued Seattle to block the ordinance, has been aggressively broadcasting its view in recent weeks. The company deployed a form-letter tool that allows drivers to email half a dozen elected officials in Seattle asking them to “deny the Teamsters’ application” to represent Uber drivers. It has also been holding in-person seminars on unionization efforts that frame the Teamsters as “the opponent of the independent driver” and an organization that “fights against” driver interests.

Lisa, a Seattle driver who attended one of Uber's recent anti-Teamster meetings, said it played host to a mix of sentiments, with some drivers speaking out against unionization efforts, some interested in hearing directly from the Teamsters, and others airing grievances against Uber. “If I felt Uber had mistreated me, or violated civil rights or had workplace rights issues, then I would be fully supportive of a union,” Lisa told BuzzFeed News. “But I don't feel that way about Uber.”

Another driver, who asked to remain anonymous out of concern for his future employment opportunities, has never been in a union before but supports the idea wholeheartedly.

“Uber’s treatment of the drivers is one-sided and abusive,” he told BuzzFeed News. “Uber holds all the power, and the drivers are voiceless.”

In New York, frustrated Uber drivers have found a voice in the Independent Drivers Guild (IDG), an Uber-endorsed labor organization run by the Machinists Union. “It's unfortunate that Uber has taken an anti-union approach in Seattle,” said IDG founder Jim Conigliaro Jr. “They should afford workers a voice as they have here in New York. The dismissive attitude toward drivers who are simply trying to make a living is nothing new.”

An Uber spokesperson was unable to say whether the Drive Forward group in Seattle — co-founded by Uber and a group called Eastside for Hire — might one day resemble the IDG.

The Teamsters, meanwhile, reject claims that they’re wresting control away from drivers. “Uber drivers, like all Teamster members, will have the opportunity to negotiate, review, and vote on their contract before it goes into effect,” said Seattle Teamsters representative Dawn Gearhart. “Drivers have the final say on whether or not it makes sense to have a union, and they won’t approve an agreement that goes against their self-interest.”

For some Uber drivers drawn to the platform by its promise of a boss-free job with flexible hours, union membership — which comes with dues and a hierarchical power structure — can be off-putting. Said Fredrick Rice, an Uber driver who has appeared on the company's podcast, “There is nothing to be gained by Uber drivers being incarcerated into a union, other than money into the union coffers.”

Uber isn’t the only startup that has a problem with how the driver-for-hire union effort has unfolded in Seattle; Lyft drivers are also impacted, and the company likewise opposes it. In a statement to BuzzFeed News, Lyft described the collective bargaining ordinance as “an unfair, undemocratic process,” arguing that a “significant percentage of drivers will be disenfranchised” because the collective bargaining ordinance currently only allows drivers who work a certain amount to vote for representation.

Members of Seattle’s city council have not yet responded to a request for comment. The city will be holding its next hearing on the ride-hail collective bargaining issue on March 21; Uber’s current lawsuit against the city is scheduled to be heard in court on March 17.

Quelle: BuzzFeed

Google Cloud Platform: your Next home in the cloud

By Brian Stevens, Vice President, Cloud Platforms

San Francisco — Today at Google Cloud Next ‘17, we’re thrilled to announce new Google Cloud Platform (GCP) products, technologies and services that will help you imagine, build and run the next generation of cloud applications on our platform.

Bring your code to App Engine, we’ll handle the rest
In 2008, we launched Google App Engine, a pioneering serverless runtime environment that lets developers build web apps, APIs and mobile backends at Google-scale and speed. For nearly 10 years, some of the most innovative companies have built applications that serve their users all over the world on top of App Engine. Today, we’re excited to announce the general availability of a major expansion of App Engine centered around openness and developer choice that keeps App Engine’s original promise to developers: bring your code, we’ll handle the rest.

App Engine now supports Node.js, Ruby, Java 8, Python 2.7 or 3.5, Go 1.8, plus PHP 7.1 and .NET Core, both in beta, all backed by App Engine’s 99.95% SLA. Our managed runtimes make it easy to start with your favorite languages and use the open source libraries and packages of your choice. Need something different than what’s out of the box? Break the glass and go beyond our managed runtimes by supplying your own Docker container, which makes it simple to run any language, library or framework on App Engine.
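
As a minimal sketch of what that looks like in practice (the project ID and directory layout below are hypothetical, and assume the Google Cloud SDK is installed):

# Describe the app in app.yaml; the flexible environment accepts any managed
# runtime, or "custom" together with your own Dockerfile.
cat > app.yaml <<'EOF'
runtime: nodejs
env: flex
EOF

# Deploy from the application directory; App Engine builds and runs it.
gcloud app deploy app.yaml --project my-sample-project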

The future of cloud is open: take your app to-go by having App Engine generate a Docker container containing your app and deploy it to any container-based environment, on or off GCP. App Engine gives developers an open platform while still providing a fully managed environment where developers focus only on code and on their users.

Cloud Functions public beta at your service
Up one level from fully managed applications, we’re launching Google Cloud Functions into public beta. Cloud Functions is a completely serverless environment to build and connect cloud services without having to manage infrastructure. It’s the smallest unit of compute offered by GCP and is able to spin up a single function and spin it back down instantly. Because of this, billing occurs only while the function is executing, metered to the nearest one hundred milliseconds.

Cloud Functions is a great way to build lightweight backends, and to extend the functionality of existing services. For example, Cloud Functions can respond to file changes in Google Cloud Storage or incoming Google Cloud Pub/Sub messages, perform lightweight data processing/ETL jobs or provide a layer of logic to respond to webhooks emitted by any event on the internet. Developers can securely invoke Cloud Functions directly over HTTP right out of the box without the need for any add-on services.
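
For illustration, deploying a simple HTTP-triggered function might look roughly like this (the function, bucket and project names are hypothetical, and the exact beta flags may vary by SDK release):

# Deploy "helloHttp" from the current directory; the staging bucket holds the
# uploaded source during deployment.
gcloud beta functions deploy helloHttp --trigger-http --stage-bucket my-staging-bucket

# Invoke it directly over HTTPS once deployment finishes.
curl https://us-central1-my-project.cloudfunctions.net/helloHttp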

Cloud Functions is also a great option for mobile developers using Firebase, allowing them to build backends integrated with the Firebase platform. Cloud Functions for Firebase handles events emitted from the Firebase Realtime Database, Firebase Authentication and Firebase Analytics.

Growing the Google BigQuery universe: introducing BigQuery Data Transfer Service
Since our earliest days, our customers have turned to Google to promote their advertising messages around the world, at a scale that was previously unimaginable. Today, those same customers want to use BigQuery, our powerful data analytics service, to better understand how users interact with those campaigns. To that end, we’ve developed deeper integration between Google and GCP with the public beta of the BigQuery Data Transfer Service, which automates data movement from select Google applications directly into BigQuery. With the BigQuery Data Transfer Service, marketing and business analysts can easily export data from AdWords, DoubleClick and YouTube directly into BigQuery, making it available for immediate analysis and visualization using the extensive set of tools in the BigQuery ecosystem.
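
Once a transfer has landed data in BigQuery, analysts can query it like any other table. A sketch with the bq command-line tool (the dataset and table names below are made up for illustration):

# Standard SQL query against a table populated by a transfer run.
bq query --use_legacy_sql=false '
  SELECT campaign_id, SUM(clicks) AS total_clicks
  FROM `my_project.adwords_transfer.campaign_stats`
  GROUP BY campaign_id
  ORDER BY total_clicks DESC
  LIMIT 10'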

Slashing data preparation time with Google Cloud Dataprep
In fact, our goal is to make it easy to import data into BigQuery, while keeping it secure. Google Cloud Dataprep is a new serverless browser-based service that can dramatically cut the time it takes to prepare data for analysis, which represents about 80% of the work that data scientists do. It intelligently connects to your data source, identifies data types, identifies anomalies and suggests data transformations. Data scientists can then visualize their data schemas until they’re happy with the proposed data transformation. Dataprep then creates a data pipeline in Google Cloud Dataflow, cleans the data and exports it to BigQuery or other destinations. In other words, you can now prepare structured and unstructured data for analysis with clicks, not code. For more information on Dataprep, apply to be part of the private beta. Also, you’ll find more news about our latest database and data and analytics capabilities here and here.

Hello, (more) world
Not only are we working hard on bringing you new products and capabilities, but we want your users to access them quickly and securely, wherever they may be. That’s why we’re announcing three new Google Cloud Platform regions: California, Montreal and the Netherlands. These will bring the total number of Google Cloud regions up from six today to more than 17 locations in the future. These new regions will deliver lower latency for customers in adjacent geographic areas, increased scalability and more disaster recovery options. Like other Google Cloud regions, the new regions will feature a minimum of three zones, benefit from Google’s global, private fibre network and offer a complement of GCP services.

Supercharging our infrastructure . . .
Customers run demanding workloads on GCP, and we’re constantly striving to improve the performance of our VMs. For instance, we were honored to be the first public cloud provider to run Intel Skylake, a custom Xeon chip that delivers significant enhancements for compute-heavy workloads and a larger range of VM memory and CPU options.

We’re also doubling the number of vCPUs you can run in an instance from 32 to 64 and now offering up to 416GB of memory, which customers have asked us for as they move large enterprise applications to Google Cloud. Meanwhile, we recently began offering GPUs, which provide substantial performance improvements to parallel workloads like training machine learning models.
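
For example, creating a 64-vCPU instance with an attached GPU might look something like the following (the instance name, zone and GPU type are illustrative, and the accelerator flags were in beta at the time):

# 64 vCPUs via the n1-standard-64 machine type, plus one NVIDIA K80 GPU.
# GPU instances must be allowed to terminate during host maintenance.
gcloud beta compute instances create training-vm \
    --zone us-east1-d \
    --machine-type n1-standard-64 \
    --accelerator type=nvidia-tesla-k80,count=1 \
    --maintenance-policy TERMINATE --restart-on-failure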

To continually unlock new energy sources, Schlumberger collects large quantities of data to build detailed subsurface earth models based on acoustic measurements. GCP compute infrastructure has the unique characteristics that match Schlumberger’s needs to turn this data into insights. High-performance scientific computing is integral to its business, so GCP’s flexibility is critical.

Schlumberger can mix and match GPUs and CPUs and dynamically create different shapes and types of virtual machines, choosing memory and storage options on demand.

“We are now leveraging the strengths offered by cloud computation stacks to bring our data processing to the next level.” — Ashok Belani, Executive Vice President Technology, Schlumberger

. . . without supercharging our prices
We aim to keep costs low. Today we announced Committed Use Discounts that provide up to 57% off the list price on Google Compute Engine, in exchange for a one- or three-year purchase commitment. Committed Use Discounts are based on the total amount of CPU and RAM you purchase, and give you the flexibility to use different instance and machine types; they apply automatically, even if you change instance types (or size). There are no upfront costs with Committed Use Discounts, and they are billed monthly. What’s more, we automatically apply Sustained Use Discounts to any additional usage above a commitment.

We’re also dropping prices for Compute Engine. The specific cuts vary by region. Customers in the United States will see a 5% price drop; customers in Europe will see a 4.9% drop and customers using our Tokyo region an 8% drop.

Then there’s our improved Free Tier. First, we’ve extended the free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and on your own schedule. Second, we’re introducing new Always Free products — non-expiring usage limits that you can use to test and develop applications at no cost. New additions include Compute Engine, Cloud Pub/Sub, Google Cloud Storage and Cloud Functions, bringing the number of Always Free products up to 15, and broadening the horizons for developers getting started on GCP. Visit the Google Cloud Platform Free Tier page today for further details, terms, eligibility and to sign up.

We’ll be diving into all of these product announcements in much more detail in the coming days, so stay tuned!
Quelle: Google Cloud Platform

Portal Preview of Azure Resource Policy

Since the first release of resource policies last April, we have received valuable feedback from customers, and with this feedback we have added new features. I’m pleased to announce the following new features for Azure Resource Policies:

Policy management in portal (preview)
Policy with parameters

Policy Management in Portal

Many customers requested the ability to manage policies through the Azure portal. Using the portal reduces the learning curve for creating policies and makes managing them easier. It is now available in preview in the Azure portal.

Similar to working with Identity and Access Control, you can configure resource policies for subscriptions and resource groups from the settings menu. You can view what policies are assigned to the current subscriptions and resource groups, and add new policy assignments. For common policies, you can use the built-in policies and customize the values you need. For example, when creating a geo-compliance policy, the UI simply asks you for a list of permitted locations. You can provide the name and a description that are seen by users when they violate the policy.

 

Figure 1: View all policy assignments

 

Figure 2: Adding new policy assignment

Policy using Parameters

With API version 2016-12-01, you can add parameters to your policy template. The parameters enable you to customize the policy definition. The preceding example for the portal utilizes parameters in the policy. There are two benefits:

Reduce the number of policy definitions to manage. For example, you previously needed multiple policies to manage tags for different applications in different resource groups. Now, you can consolidate them into one policy definition with tag name as a parameter. You provide the value of the tag name when you assign the policy to the application.
Separate access control for policy definition and policy management. Previously, if you used resource groups as the scope for most of your policy assignments, all users who assigned a policy to a resource group also needed permission to create policy definitions. This permission was required because different assignments required different policy definitions. However, granting this permission created the risk that they could potentially modify other policy definitions. By using parameters, users no longer need to create their own policy definitions.

{
  "properties": {
    "displayName": "Allowed virtual machine SKUs",
    "policyType": "BuiltIn",
    "description": "This policy enables you to specify a set of virtual machine SKUs that your organization can deploy.",
    "parameters": {
      "listOfAllowedSKUs": {
        "type": "Array",
        "metadata": {
          "description": "The list of SKUs that can be specified for virtual machines.",
          "displayName": "Allowed SKUs",
          "strongType": "VMSKUs"
        }
      }
    },
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Compute/virtualMachines"
          },
          {
            "not": {
              "field": "Microsoft.Compute/virtualMachines/sku.name",
              "in": "[parameters('listOfAllowedSKUs')]"
            }
          }
        ]
      },
      "then": { "effect": "Deny" }
    }
  },
  "id": "/providers/Microsoft.Authorization/policyDefinitions/cccc23c7-8427-4f53-ad12-b6a63eb452b3",
  "type": "Microsoft.Authorization/policyDefinitions",
  "name": "cccc23c7-8427-4f53-ad12-b6a63eb452b3"
}

 

Since this policy is built-in, you can directly assign it without creating your policy definition JSON. To assign this policy using PowerShell, run the following commands:

$policydefinition = Get-AzureRmPolicyDefinition | Where-Object {$_.Properties.DisplayName -like "Allowed virtual machine SKUs"}
New-AzureRmPolicyAssignment -Name testassignment -Scope {scope} -PolicyDefinition $policydefinition -listOfAllowedSKUs "Standard_LRS", "Standard_GRS"

It is that simple now!

Help us improve the experience

 

Please try the new features and provide feedback to us through the user voice. Let us know what policies you want to use and how we can improve the experience.
Quelle: Azure

Service management in the age of hybrid cloud

Disruption has become as commonplace in business as the number of taxis and hotels that once defined urban travel. Then came Uber and Airbnb, innovative, digitally-driven companies at the vanguard of the new sharing economy.
The evidence that top companies believe their business will be significantly disrupted in the next three years is everywhere. Clearly, nearly every industry must be focused on the future and become increasingly digital to survive and thrive in this new environment.
An IBM study confirms that hybrid cloud is the necessary IT platform for the future and cloud adoption is growing rapidly. More than 1,000 C-suite executives from 18 industries revealed that most companies are using cloud, but not for their entire business. 78 percent said their cloud initiatives are coordinated or fully integrated currently, compared to 34 percent in 2012.
A hybrid service approach is needed to support hybrid cloud adoption. Both blend old and new. The approach will incorporate cloud, collaborative and cognitive capabilities with still-relevant traditional service management best practices and processes to become both agile and efficient.
Hybrid cloud and bimodal IT
The hybrid cloud model uses a mix of private and public cloud services to create more compelling customer experiences and develop innovative business models. Hybrid cloud allows fit-for-purpose and bimodal approaches. The two modes are:

Agile mode: systems of engagement that touch users and systems of innovation typically reside in a cloud-native environment. You leverage these systems to rapidly roll out and iteratively improve apps. They’re often supported by DevOps teams and methods.

Traditional mode: systems of record and systems of automation are bedrock capabilities that typically reside in a private cloud. The rate of change is slower. Support often comes from more traditional IT operations teams and models like the Information Technology Infrastructure Library (ITIL).

Hybrid cloud applications span both modes and can go from mobile to mainframe.
Hybrid cloud service management
Hybrid cloud service management differs greatly from traditional service management. Applications are more decentralized and tools more varied, ranging from cloud-native to private cloud. Support spans DevOps and traditional IT operations teams.
Organizations must bring together the best of agile and traditional modes to address their company’s unique needs. DevOps is agile and collaborative but will not always bring the scalability and efficiency IT operations requires. Traditional service models like ITIL have still-relevant best practices and processes, but must adapt to the times.
Hybrid service management is changing. Hybrid cloud service management is helping IT support critical business initiatives in exciting new ways. These changes are propelling the trends and directions for hybrid service management, and will be the focus at IBM InterConnect, March 19th-23rd.
Succeeding in the hybrid world
Success typically involves having an overall vision, but implementing in stages. Hybrid service management is no different. Successful organizations can take these steps:

Get agile by embracing the best of DevOps. This includes leveraging cloud-native tools and toolchains to ensure continuous delivery is backed by continuous availability.
Collaborate across teams with shared context. DevOps and operations teams must work together to support hybrid applications. This is best enabled by a lean set of common capabilities, including collaboration and notification services.
Structure hybrid operations to scale. As hybrid applications become larger and more complex, organizations become swamped by a wave of alerts and notifications. Implement effective event management and resolution automation.
Utilize cognitive insights for efficient operations. The hybrid cloud world depends on cognitive capabilities for actionable insights and proactive operations.

If you are attending InterConnect in Las Vegas, come see the IT Service Management trends and directions session. You can learn more about how IBM and smart organizations are approaching hybrid service management with clients and IBM hybrid cloud executives. Additionally, you can stop by the IBM hybrid service management Concourse area to see demos and talk to experts. I hope to see you there.
The post Service management in the age of hybrid cloud appeared first on news.
Quelle: Thoughts on Cloud

Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition

Last week, Docker and Cisco jointly announced a strategic alliance between our organizations. Based on customer feedback, one of the initial joint initiatives is the validation of Docker Enterprise Edition (which includes Docker Datacenter) against Cisco UCS and Nexus infrastructures. We are excited to announce that Cisco Validated Designs (CVDs) for Cisco UCS and FlexPod infrastructures on Docker Enterprise Edition (EE) are immediately available.
CVDs represent the gold standard reference architecture methodology for enterprise customers looking to deploy an end-to-end solution. The CVDs follow defined processes and cover not only provisioning and configuration of the solution, but also test and document the solution against performance, scale and availability/failure scenarios, something that requires a lab setup with a significant amount of hardware reflecting actual production deployments. This enables our customers to achieve faster, more reliable and predictable implementations.
The two new CVDs published for container management offer enterprises well-designed, end-to-end, lab-tested configurations for Docker EE on Cisco UCS and FlexPod Datacenter. The collaborative engineering effort between Cisco, NetApp and Docker provides enterprises with best-of-breed solutions for Docker Datacenter on Cisco infrastructure and NetApp enterprise storage to run stateless or stateful containers.
The first CVD includes 2 configurations:

A 4-node rack server bare-metal deployment, co-locating the Docker UCP controller and DTR on 3 manager nodes in a highly available configuration, plus 1 UCP worker node.

A 10-node blade server bare-metal deployment, with 3 nodes for UCP controllers, 3 nodes for DTR and the remaining 4 nodes as UCP worker nodes.

The second CVD was based on FlexPod Datacenter in collaboration with NetApp using Cisco UCS Blades and NetApp FAS and E-Series storage.
These CVDs leverage the native Docker user experience of Docker EE, along with Cisco’s UCS converged infrastructure capabilities, to provide simple management control planes that orchestrate compute, network and storage provisioning so application containers run in a secure and scalable environment. They also use built-in security features of UCS such as I/O isolation through VLANs, secure boot-up of bare-metal hosts, and physical storage access path isolation through the Cisco VIC’s virtual network interfaces. The combination of UCS and Docker EE’s built-in security, such as Secrets Management, Docker Content Trust, and Docker Security Scanning, provides a secure end-to-end Container-as-a-Service (CaaS) solution.

Both of these solutions use Cisco UCS Service Profiles to provision and configure the UCS servers and their I/O properties, automating the complete installation process. Docker commands and Ansible were used for the Docker EE installation. After configuring proper certificates across the DTR and UCP nodes, we were able to push and pull images successfully. Container images such as busybox and nginx, and applications such as WordPress and the Voting application, were pulled from Docker Hub, a central repository for Docker developers to store container images, to test and validate the configuration.
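
A representative push/pull against a DTR instance looks roughly like the following sketch (the DTR hostname and repository namespace here are hypothetical):

# Log in to the Docker Trusted Registry, then retag and push a public image.
docker login dtr.example.com
docker pull nginx:latest
docker tag nginx:latest dtr.example.com/engineering/nginx:latest
docker push dtr.example.com/engineering/nginx:latest

# Pull it back from DTR on any node in the cluster.
docker pull dtr.example.com/engineering/nginx:latest
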
The scaling tests included the deployment of containers and applications. We were able to deploy 700+ containers on a single node and more than 7,000 containers across 10 nodes without performance degradation. The scaling tests also covered dynamically adding and deleting nodes to ensure the cluster remains responsive during the change. These excellent scaling and resiliency results are a product of swarm mode, the container orchestration tightly integrated into Docker EE with Docker Datacenter, and Cisco’s Nexus switches, which provide high-performance, low-latency networking.
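
With swarm mode built into Docker EE, scaling a service across the cluster is a single command. A sketch (the service name and replica counts are illustrative):

# Create a service and let swarm mode spread the replicas across nodes.
docker service create --name web --replicas 100 nginx:latest

# Scale it up; swarm schedules the additional containers automatically.
docker service scale web=700
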
The failover tests covered node shutdown and reboot, as well as inducing faults from the Cisco Fabric Interconnects down to the adapters on the Cisco UCS blade servers. When a UCP manager node was shut down or rebooted, we were able to validate that users could still access containers through the Docker UCP UI or CLI. The system was able to start up quickly after a reboot, and the UCP cluster and services were restored. Hardware failure resulted in the cluster operating at reduced capacity, but there was no single point of failure.
As part of the FlexPod CVD, NFS was configured for the Docker Trusted Registry (DTR) nodes for shared access. FlexPod is configured with NetApp enterprise-class storage, and the NetApp Docker Volume Plugin (nDVP) provides direct integration with the Docker ecosystem for NetApp’s ONTAP, E-Series and SolidFire storage. FlexPod uses the NetApp ONTAP storage backend for DTR as well as for container storage management, and container volumes deployed this way can be verified using NetApp OnCommand System Manager.
Please refer to CVDs for detailed configuration information.

FlexPod Datacenter with Docker Datacenter for Container Management
Cisco UCS Infrastructure with Docker Datacenter for Container Management

 


The post Docker and Cisco Launch Cisco Validated Designs for Cisco UCS and Flexpod Infrastructures on Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

SAP on the cloud: What to look for in a provider

I often meet with and advise CTOs and CIOs running big IT operations, many of whom use SAP enterprise resource planning (ERP) systems. Quite justifiably, they rely on them as the systems of record and engagement.
However, these decision makers are facing the reality of ERPs: the total cost of ownership of such systems, complex environments that may not be able to scale quickly, long lead times for getting new SAP applications running and a shortage of specialized skills.
They’re right to be wary. Old-school, on-premises ERPs can spark long debates among lines of business and corporate departments. Who should have first priority on time and resources? Who pays for what and when?  ERPs can even affect mergers and acquisitions because of the long time frames required to merge SAP environments and data.
Many IT leaders are responding to these challenges by moving their SAP applications to a managed cloud infrastructure. This is, of course, a big step, and to mitigate risks an IT leader should carefully look at providers before moving SAP ERPs to a managed cloud environment.
For example, a provider should give business leaders the ability to speed and simplify adoption of SAP S/4HANA software, along with other solutions powered by SAP HANA.  It should take a modular approach, with different levels of managed service according to specific needs. This allows leaders to focus on innovation and transformation while the vendor manages the operating system, databases and SAP applications.
Leaders should also determine whether an ERP partner can set up and run scalable SAP software landscapes on the cloud. This ability alone could potentially deliver cost savings of 20 to 25 percent over five years versus traditional self-management and operation of SAP software.
Other provider capabilities should include:

An IT Infrastructure Library (ITIL)-based management of the infrastructure, platform and databases
End-to-end management of the environment from governance through disaster recovery
A service portal for on-demand provisioning of SAP service requests based on virtualized resources
An SAP service catalog, including a wide range of predefined SAP managed services with more transparent pricing options designed for flexibility
Around-the-clock support from a highly trained, worldwide service staff
Flexible pricing options that help consolidate virtually all operating system and system management licenses and services into a single monthly charge

Already, forward-thinking enterprises are making the most of cloud-managed SAP services.
Specialty-materials company Celanese wanted to decrease annual SAP operational costs, adopt a more flexible model for delivery of the SAP platform and operations, and reduce the need for niche skillsets to manage IT infrastructure. Working with IBM, Celanese migrated its SAP data center to a hosted cloud solution with IBM. It reduced annual SAP operational cost by more than 50 percent, increased flexibility to address evolving business, and achieved equal or superior service levels for its managed infrastructure.
To keep its business running smoothly, Italy’s digital printer Pasqui S.r.l. needed a reliable hosting infrastructure for its critical SAP operations in accounting, logistics, sales and purchasing. With guidance from IBM and its business partner Beltos, Pasqui moved its SAP environment to a virtual server infrastructure. In addition to reducing its maintenance and management costs by 30 percent, Pasqui enhanced availability, boosted performance and improved the stability of its SAP environment.
After a divestiture, PeroxyChem had only 12 months to migrate its mission-critical business systems to a new platform or face enormous hosting fees. It didn’t want the cost or risk of buying hardware or setting up a large in-house IT department. Instead, PeroxyChem migrated its SAP environment to the IBM cloud infrastructure in just four-and-a-half months with no disruption to its operations. By having everything from its help desk to email managed by IBM, PeroxyChem was able to focus on its core competencies and reduce risk throughout its divestment process.
These companies and many like them are making the leap into the cloud and reaping significant cost savings, flexibility and competitive advantage. It’s one of the benefits of the digital revolution, and IT decision makers are making the most of it.
Learn more about how to manage SAP in the cloud.
The post SAP on the cloud: What to look for in a provider appeared first on news.
Quelle: Thoughts on Cloud

Let rdopkg manage your RPM package

rdopkg is an RPM packaging automation tool which was written to effortlessly keep packages in sync with (fast moving) upstream.

rdopkg is a little opinionated, but when you set up your environment right, most packaging tasks are reduced to a single rdopkg command:

Introduce/remove patches: rdopkg patch
Rebase patches on a new upstream version: rdopkg new-version

rdopkg builds upon the concept of distgit, which simply refers to maintaining RPM package source files in a git repository. For example, all Fedora and CentOS packages are maintained in distgit.

Using a version control system for packaging is great, so rdopkg extends this by requiring patches to also be maintained using git, as opposed to storing them as simple .patch files in distgit.

For this purpose, rdopkg introduces the concept of a patches branch, which is simply a git branch containing… yeah, patches. Specifically, a patches branch contains the upstream git tree with optional downstream patches on top.

In other words, patches are maintained as git commits, the same way they are managed upstream. To introduce a new patch to a package, just git cherry-pick it onto the patches branch and let rdopkg patch do the rest. Patch files are generated from git, and the .spec file is changed automatically.

When a new version is released upstream, rdopkg can rebase the patches branch on the new version and update distgit automatically. Instead of hoping some .patch files apply to an ever-changing tarball, git can be used to rebase the patches, which brings many advantages, such as automatically dropping patches already included in the new release, and more.

Requirements

upstream repo requirements

Your project needs to be maintained in a git repository and use
Semantic Versioning tags for its releases, such as
1.2.3 or v1.2.3.

distgit

Fedora packages already live in distgit repos which packagers can get by

fedpkg clone package

If your package doesn’t have a distgit yet, simply create a git repository
and put all the files from .src.rpm SOURCES in there.
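
For example, bootstrapping a distgit from an existing source RPM might look like this (the package name and version are hypothetical):

mkdir foo && cd foo
git init
# Extract the .spec, patches and sources from the source RPM into the repo.
rpm2cpio ../foo-1.2.3-1.src.rpm | cpio -idmv
git add .
git commit -m "Initial import of foo-1.2.3-1"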

The el7 distgit branch is used in the following example.

patches branch

Finally, you need a repository to hold your patches branches. This can be the same repo as distgit or a different one. You can use various processes to manage your patches branches, the simplest one being the packager maintaining them manually, as they would with .patch files.

The el7-patches patches branch is used in the following example.

install rdopkg

The rdopkg page contains installation instructions. Most likely, this will do:

dnf copr enable jruzicka/rdopkg
dnf install rdopkg

Initial setup

Start with cloning distgit:

git clone $DISTGIT
cd $PACKAGE

Add patches remote which contains/is going to contain patches branches
(unless it’s the same as origin):

git remote add -f patches $PATCHES_BRANCH_GIT

While optional, it’s strongly recommended to also add upstream remote with
project upstream to allow easy initial patches branch setup, cherry-picking
and some extra rdopkg automagic detection:

git remote add -f upstream $UPSTREAM_GIT

Clean .spec

In this example we’ll assume we’re building a package for the EL 7 distribution and will use the el7 branch for our distgit:

git checkout el7

Clean the .spec file. Replace hardcoded version strings (especially in URL) with macros so that the .spec stays current when Version changes. Run rdopkg pkgenv to see what rdopkg thinks about your package:

editor foo.spec
rdopkg pkgenv
git commit -a

Prepare patches branch

By convention, rdopkg expects the $BRANCH distgit branch to have a corresponding $BRANCH-patches patches branch.

Thus, for our el7 distgit, we need to create el7-patches branch.

First, see the current Version:

rdopkg pkgenv | grep Version

Assume our package is at Version: 1.2.3.

The upstream remote should contain the associated 1.2.3 version tag, which should correspond to the 1.2.3 release tarball, so let’s use that as the base for our new patches branch:

git checkout -b el7-patches 1.2.3

Finally, if you have some .patch files in your el7 distgit branch, you need to apply them on top of el7-patches now.

Some patches might be present in the upstream remote (like backports), so you can git cherry-pick them.

Once happy with your patches on top of 1.2.3, push your patches branch into
the patches remote:

git push patches el7-patches

Update distgit

With the el7-patches patches branch in order, try updating your distgit:

git checkout el7
rdopkg patch

If this fails, you can try the lower-level rdopkg update-patches, which skips certain magic but isn’t recommended for normal usage.

Once this succeeds, inspect the newly created commit that updated the .spec file and the .patch files from the el7-patches patches branch.

Ready to rdopkg

After this, you should be able to manage your package using rdopkg.

Please note that both rdopkg patch and rdopkg new-version will reset the local el7-patches branch to remote patches/el7-patches unless you supply the -l/--local-patches option.

To introduce/remove patches, simply modify the remote el7-patches patches branch and let rdopkg patch do the rest (a full cherry-pick example is sketched after the command below):

rdopkg patch
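
For example, backporting an upstream fix into the package could look like this (the commit hash below is a placeholder):

git fetch --all
git checkout el7-patches
# Make sure the local branch matches the remote before modifying it.
git reset --hard patches/el7-patches
# Pick the upstream fix onto the patches branch and publish it.
git cherry-pick abc1234
git push patches el7-patches
# Regenerate .patch files and update the .spec in distgit.
git checkout el7
rdopkg patch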

To update your package to a new upstream version, including the patches rebase:

git fetch --all
rdopkg new-version

Finally, if you just want to fix your .spec file without touching
patches:

rdopkg fix
# edit .spec
rdopkg -c

More information

List all rdopkg actions with:

rdopkg -h

Most rdopkg actions have some handy options; see them with
rdopkg $ACTION -h

Read the
friendly manual:

man rdopkg

You can also read the RDO packaging guide, which contains some examples of rdopkg usage in RDO.

Happy packaging!
Quelle: RDO