Amazon SageMaker Ground Truth extends automated data labeling support to the semantic segmentation labeling workflow and is now available in six additional AWS Regions.

Amazon SageMaker Ground Truth now supports automated data labeling for the built-in semantic segmentation image labeling workflow. With the addition of the Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Canada (Central), EU (Frankfurt), and EU (London) Regions, SageMaker Ground Truth is available in 12 AWS Regions.
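The announcement itself includes no code, but for orientation, here is a minimal boto3 sketch of what enabling automated data labeling for a semantic segmentation job roughly looks like: the LabelingJobAlgorithmsConfig block is what turns it on. All bucket paths, role, workteam, Lambda, and algorithm-specification ARNs below are placeholders rather than values from the announcement; substitute the region-specific ARNs from the SageMaker documentation.

```python
# Hedged sketch: starting a semantic segmentation labeling job with automated
# data labeling enabled. Every ARN and S3 path below is a placeholder.
import boto3

sagemaker = boto3.client("sagemaker", region_name="eu-central-1")  # e.g. EU (Frankfurt)

sagemaker.create_labeling_job(
    LabelingJobName="semseg-auto-labeling-demo",
    # For semantic segmentation the label attribute name is expected to end in "-ref".
    LabelAttributeName="semseg-labels-ref",
    InputConfig={
        "DataSource": {
            "S3DataSource": {"ManifestS3Uri": "s3://my-bucket/input/manifest.json"}
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/output/"},
    RoleArn="arn:aws:iam::111122223333:role/GroundTruthExecutionRole",
    LabelCategoryConfigS3Uri="s3://my-bucket/config/label-categories.json",
    # This block enables automated data labeling (active learning).
    LabelingJobAlgorithmsConfig={
        "LabelingJobAlgorithmSpecificationArn": (
            # Placeholder: use the semantic segmentation algorithm ARN for your region.
            "arn:aws:sagemaker:eu-central-1:ACCOUNT_ID:labeling-job-algorithm-specification/semantic-segmentation"
        )
    },
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:eu-central-1:111122223333:workteam/private-crowd/my-team",
        "UiConfig": {"UiTemplateS3Uri": "s3://my-bucket/templates/semseg.liquid.html"},
        # Region-specific built-in Lambdas for semantic segmentation (placeholders here).
        "PreHumanTaskLambdaArn": "arn:aws:lambda:eu-central-1:ACCOUNT_ID:function:PRE-SemanticSegmentation",
        "TaskTitle": "Semantic segmentation",
        "TaskDescription": "Label each pixel with the correct class",
        "NumberOfHumanWorkersPerDataObject": 1,
        "TaskTimeLimitInSeconds": 3600,
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:eu-central-1:ACCOUNT_ID:function:ACS-SemanticSegmentation"
        },
    },
)
```

With that block present, Ground Truth trains a model on the completed labels, auto-labels the images it is confident about, and routes only the uncertain ones to the human workteam.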
Source: aws.amazon.com

Alexa for Business now gives business customers control over how their data is used to improve Amazon's services

With Alexa for Business, customers can now specify at the room-profile level whether voice recordings from devices they manage are manually reviewed and used to improve our services. A new setting, the data use policy, found in the room profile, lets customers either allow (the default) or prohibit Alexa for Business from using manual review of voice recordings to improve Amazon's services. By default, only a small fraction of these recordings is manually reviewed to improve these algorithms. See our documentation for more information about this feature and about creating and managing room profiles in Alexa for Business.
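If you prefer to manage this setting programmatically rather than in the console, the sketch below shows roughly how it might look with the boto3 alexaforbusiness client. The profile ARN is a placeholder and the DataRetentionOptIn parameter name is an assumption on my part; verify it against the current Alexa for Business API reference.

```python
# Hedged sketch: opting a room profile out of having its voice recordings
# manually reviewed to improve Amazon's services. The profile ARN is a
# placeholder and the DataRetentionOptIn flag name is an assumption.
import boto3

a4b = boto3.client("alexaforbusiness", region_name="us-east-1")

a4b.update_profile(
    ProfileArn="arn:aws:a4b:us-east-1:111122223333:profile/EXAMPLE-PROFILE-ID",
    DataRetentionOptIn=False,  # False = do not allow manual review of recordings
)
```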
Source: aws.amazon.com

Learn About Modern Apps with Docker at VMworld 2019

The Docker team will be on the show floor at VMworld the week of August 25. We’ll be talking about the state of modern application development, how to accelerate innovation efforts, and the role containerization and Docker play in powering these initiatives. 

Come by booth #1969 at VMworld to check out the latest developments in the Docker platform and learn why over 1.8 million developers build modern applications on Docker, and why over 800 enterprises rely on Docker Enterprise for production workloads. 

At VMworld, we’ll be talking about:

What’s New in Docker Enterprise 3.0

Docker Enterprise 3.0 shipped recently, making it the first and only desktop-to-cloud container platform on the market that lets you build and share any application and securely run it anywhere, from hybrid cloud to the edge. At VMworld, we’ll have demos that show how Docker Enterprise 3.0 simplifies Kubernetes with the Docker Kubernetes Service (DKS) and enables companies to more easily build modern applications with Docker Desktop Enterprise and Docker Application.

Accelerating Your Journey to the Cloud

Everyone is talking about moving workloads to the cloud to drive efficiencies and simplify ops, but many existing applications that power enterprises still run in corporate datacenters. Cloud migration for existing apps is often thought to be a resource-intensive, arduous process, but that doesn’t have to be the case.

At VMworld, we’ll be demonstrating how we work with customers to provide an easy path to the cloud by identifying applications that are best suited for containerization and automating the migration process. Applications that are modernized and containerized with Docker Enterprise are portable across public, private and hybrid clouds, so you don’t get locked in to one provider or platform.

Unifying the Dev to Ops Experience

There’s no question that modern, distributed applications are becoming more complex.  You need a seamless and repeatable way to build, share and run all of your company’s applications efficiently. A unified end-to-end platform addresses these challenges by improving collaboration, providing greater control and ensuring security across the entire application lifecycle.

With Docker Enterprise, your developers can easily build containerized applications without disrupting their existing workflows and IT ops can ship these new services faster with the confidence of knowing security has been baked in from the start – all under one unified platform. Talk to a Docker expert at VMworld about how Docker Enterprise provides the developer tooling, security and governance, and ease of deployment needed for a seamless dev to ops workflow. 

We hope to see you at the show! 

Learn how Docker helps you build modern apps and modernize your existing apps at #VMworld2019

Want a preview? Watch the Docker Enterprise 3.0 demo.

Dive deeper with the following resources: 

See what’s new in Docker Enterprise 3.0
Get started with a free trial of Docker Enterprise 3.0
Watch the webinar series: Drive High-Velocity Innovation with Docker Enterprise 3.0

Source: https://blog.docker.com/feed/

Disaster Recovery Strategies for Red Hat OpenShift

This is a guest post written by Gou Rao, CTO of Portworx. As increasingly complex applications move to the Red Hat OpenShift platform, IT teams should have disaster recovery (DR) processes in place for business continuity in the face of widespread outages. These are not theoretical concerns. Many industries are subject to regulations that require […]
Source: OpenShift

Shining a light on your costs: New billing features from Google Cloud

One of our primary goals at Google Cloud Platform (GCP) is to help you focus on building value for your business while we take care of the infrastructure. But as you focus on those value-adding activities, it’s still important to understand the costs of what you’re building so you can optimize them. One of the biggest steps you can take is to shine a light on those costs through good cost hygiene and awareness. Over the last few months, we’ve added a number of features to help you adopt cost management best practices. Below is a more comprehensive look at what we’ve released, along with recommendations for how to apply good cost hygiene and awareness within your organization.

Create cost accountability through visibility

The biggest step you can take is to simply make your costs visible. We offer several features in GCP that make this easy to do: our new Billing account overview page, which gives you an at-a-glance summary of your charges to date, estimated end-of-month charges, and any credit balances; Billing reports, which are dynamic, built-in cost reports available in the Google Cloud console; and Billing export, which exports detailed cost data to a BigQuery dataset of your choice for further analysis. You can set permissions on both Billing reports and export to make sure people in your organization have access to the right cost views, and you can use exported data to run custom queries and dashboards (such as via Data Studio) to dive deeper into your cloud usage and costs.

These tools do more than create a culture of cost ownership. In the cloud, where performance bugs can often manifest as cost anomalies, improving cost visibility and accountability can go a long way toward improving general operations. Customers tell us that using these tools helps them make better technology decisions and improves their cost efficiency:

“Giving developers access to view the costs for their GCP projects helps them be more aligned with the business and its objectives,” says Dale Birtch, site reliability engineer at Vendasta. “Not only do we get better engineering practices, we get a more stable environment long-term, and it costs us less to run.”

Organize costs the way you manage work

The next big step is to organize costs the way you manage work. There are three main ways to organize your GCP costs: by product hierarchy, by project hierarchy, and by label.

The default way to organize costs is by product hierarchy. Anytime you use a GCP resource, whether it is a Compute Engine VM or a Cloud Pub/Sub message, its costs are reported by stock-keeping unit (SKU). We organize these SKUs under GCP products, helping you get a higher-level view of your usage. For example, N1 standard instance SKUs roll up to the Compute Engine product. As part of Compute Engine’s resource-based pricing launch, we added more metadata to your Billing export BigQuery datasets, so you can see the usage location (if applicable) and, for Compute Engine VMs, the machine specification and the core and memory footprint of your instances via system labels. We have also added usage location to Billing reports, which helps you quickly see which geography, region or multi-region is driving your costs.

The next way to organize costs is by the project hierarchy. Almost every resource is contained by a project, meaning that most resource access and costs can be managed by project.
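To make the Billing export workflow above concrete, here is a hedged Python sketch that queries an exported billing table in BigQuery and breaks down month-to-date gross and net cost by project and service. The table name is a placeholder, and the column names follow the standard export schema as I understand it; check them against your own dataset.

```python
# Hedged sketch: summarizing exported billing data by project and service.
# Replace the dataset/table name with your own billing export table.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  project.id AS project_id,
  service.description AS service,
  ROUND(SUM(cost), 2) AS gross_cost,
  ROUND(SUM(cost) + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) AS c), 0)), 2) AS net_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE usage_start_time >= TIMESTAMP(DATE_TRUNC(CURRENT_DATE(), MONTH))
GROUP BY project_id, service
ORDER BY net_cost DESC
"""

for row in client.query(query).result():
    print(f"{row.project_id} | {row.service} | net ${row.net_cost}")
```

Granting teams read access to a view like this is one simple way to build the cost accountability described above.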
Organizations and folders give structure to your GCP environments and help you organize your projects and standardize permissions and policies across groups of projects. For example, a folder might represent the “production” environment, with strict access requirements set at the folder level. You can also now see a project’s folder path in Billing export to BigQuery, making it easy to query and create custom dashboards for folder costs.

The final way to organize costs is by label. In particular, resource labels can be applied to resources to help you distinguish between usage within or across projects. For example, if a project contains all of the resources for an application, one set of labels could represent resources for components of that application (e.g., front end, back end), and another set of labels could represent common costs across applications (e.g., security, testing, development). With these new features, labels are available in Billing export to BigQuery, and we plan to add the ability to filter and group costs by labels in Billing reports soon.

If you are a Google Kubernetes Engine (GKE) user, we also recently launched GKE usage metering. GKE usage metering exports usage breakdowns by Kubernetes namespace and Kubernetes labels to BigQuery in a way that can be joined with Billing export data to estimate cost breakdowns for teams and workloads that are sharing a cluster.

Understand your net costs

Another big step you can take toward cost visibility is to understand your fully loaded, or net, costs. We’ve made two improvements to make your net costs easier to understand.

First, Billing reports now let you see costs by invoice month, including taxes and other invoice-level charges. We also added the ability to filter usage-based credits by type so that you can understand what your costs would be without free trial, promotional, or other usage-based credits, such as sustained use discounts and committed use discounts.

Second, we’ve added a new cost breakdown report that lets you see how we arrive at your final invoice amount from your original usage costs. For example, if you use Compute Engine, the cost breakdown chart will show you how much your VM costs would have been before any committed use discounts, sustained use discounts, promotional credits and/or free trial credits, and will visualize the net effect of those discounts. This gives you a simple, at-a-glance overview of your GCP costs and savings.

Plans might change, but planning is essential

Finally, an important part of cost management is to understand what your costs should be (your plan) and when costs have deviated from that plan. Last year, we added a cost forecast feature to Billing reports to show you a smart forecast based on your cost history and selected filters. We’ve since improved our forecasting algorithm so that it captures monthly cyclicality: instead of a simple linear projection, you now see a forecast that matches how cycles within a month affect your costs.

In addition, with the higher-accuracy forecast, we have also launched the ability to set alerts based on forecasted cost. Rather than relying solely on alerts when you exceed a budget threshold (e.g., $1,000 per month for a specific project), you can now also set an alert, via email or Cloud Pub/Sub, so that you’re notified when you are forecasted to exceed that budget for the month.
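As an illustration of the Cloud Pub/Sub path for those forecast-based alerts, here is a hedged Python sketch of a subscriber that listens on a subscription attached to a budget’s notification topic. The project and subscription IDs are placeholders, and the payload field names are assumptions about the budget notification format; confirm them against the budget alerts documentation.

```python
# Hedged sketch: reacting to programmatic budget notifications delivered via
# Cloud Pub/Sub. Project and subscription IDs are placeholders; the payload
# field names are assumptions and should be checked against the docs.
import json
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "budget-alerts-sub")

def callback(message):
    data = json.loads(message.data.decode("utf-8"))
    name = data.get("budgetDisplayName", "unknown budget")
    cost = data.get("costAmount")       # assumed field name
    budget = data.get("budgetAmount")   # assumed field name
    print(f"{name}: spent {cost} of {budget}")
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
print("Listening for budget notifications...")
try:
    streaming_pull.result()  # blocks until cancelled or an error occurs
except KeyboardInterrupt:
    streaming_pull.cancel()
```

The same subscription could just as easily feed a chat webhook or ticketing system instead of printing to stdout.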
Turn on the electric light

Cloud customers often tell us that they feel left in the dark about the cost of what their developers are building. Developer velocity has never been faster, but administrators and managers are simultaneously struggling to govern their costs. A quote by Justice Louis Brandeis comes to mind: “Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.” Every corner of your cloud should be well lit so you can easily understand your cloud costs. We’re committed to delivering cost management tools that help you illuminate your business and grow confidently in the cloud.

To learn more about what’s next for Google Cloud cost management, check out the following:

Videos: Recent Next ‘19 session recordings and best practices webinars
Hands-on lab: Understanding and analyzing your costs with Billing reports
Whitepaper: Guide to financial governance in the cloud
Guide: Billing resource organization & access management
Release notes: Billing & cost management

Contact us anytime with your feedback or questions.
Source: Google Cloud Platform

KeyBank chooses Anthos to develop personalized banking solutions for its customers

Editor’s note: KeyBank, the primary subsidiary of KeyCorp in Cleveland, OH, is a superregional bank with $137 billion under management. Last year, KeyBank joined the Anthos early access program as they looked at how to extend the benefits of containers and Kubernetes to legacy applications. Here’s why.

When you’ve been around as long as KeyBank has – nearly 200 years – you know a thing or two about keeping up with the pace of change. What started out as the Commercial Bank of Albany, New York, is today the 13th largest bank in the United States, with about 1,000 branches spanning from Alaska to Maine. And as we grow, transforming ourselves through digitization is a central way that we connect with and serve our clients.

Digitization allows us to spend less time servicing “fast” money (individual transactions such as deposits, withdrawals and transfers) and more time on “slow” money (personalized banking solutions for goals like starting a business, retirement or paying for college). Because we provide a robust online and mobile experience for fast money, when a client comes into a branch, we can have deeper conversations with them about how slow money can help with their financial wellness.

As part of these efforts to digitize the enterprise, we’ve been using containers and Kubernetes in our on-prem data center for several years, mainly for our customer-facing online banking applications. The speed of innovation and competitive advantage of a container-based approach is unlike any technology we’ve used before. In 2015, we acquired First Niagara Bank and had to onboard its 500,000 online retail bank clients. During the conversion weekend, our new clients had trouble navigating the new website and logging in. Over the course of two days, our web team deployed as many as 10 fixes to production in real time, without having to take down the system a single time. That never could have happened without the portability and automation of containers and Kubernetes.

Containers and Kubernetes have also helped us spin infrastructure up and down on demand. That’s significant when you have big spikes in demand, like over major shopping days such as Black Friday and Cyber Monday. For example, our online banking application averages about 35 logins per second, but that can surge to 100 logins per second at peak periods. Think about that: we can quickly triple our capacity without having to do anything, because our Kubernetes infrastructure automatically spins up capacity and spins it back down again when it’s not needed.

We wanted to bring that kind of “burstability” to the rest of our infrastructure, the applications that weren’t developed to run in containers. In addition to the online banking applications running in Kubernetes, we support over 300 other internal applications that are critical to our day-to-day operations, including fraud detection and corporate account management, and 95% of those run on VMware. We also wanted to stay as close as possible to the open-source version of Kubernetes.

Anthos, Google Cloud’s hybrid and multi-cloud platform, checked all those boxes. It gives us the ability to spin up infrastructure whenever and wherever we need, so we can provide that same burstability to internal processes that weren’t created to natively run in containers. Google created Kubernetes, so we know that much of the Anthos feature set comes straight from the source.
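KeyBank’s login-surge story describes standard Kubernetes horizontal autoscaling. Purely as an illustration (none of the names, namespaces or thresholds below come from KeyBank’s environment), a Horizontal Pod Autoscaler created with the Kubernetes Python client might look like this:

```python
# Illustrative sketch: a Horizontal Pod Autoscaler so a deployment scales out
# during login surges and back in afterwards. Names, namespace, and thresholds
# are assumptions, not KeyBank's actual configuration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="online-banking-hpa", namespace="banking"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="online-banking-web"
        ),
        min_replicas=3,
        max_replicas=9,                        # roughly "triple our capacity"
        target_cpu_utilization_percentage=60,  # scale out before pods saturate
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="banking", body=hpa
)
```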
We deploy Anthos locally on our familiar and high-performance Cisco HyperFlex hyperconverged infrastructure of computing, networking and storage resources. We manage the containerized workloads as if they’re all running in GCP, from the single source of truth, our GCP console. We’ve been partnering closely with Google Cloud and Cisco on Anthos since day one and are taking a methodical approach to deciding which applications to migrate there. First on our list: applications that change all the time, so that they can benefit from earlier and more frequent testing that containerization brings to the table. By verifying the technical wellbeing of our applications, we’re helping to ensure our clients are on a path to financial wellness.
Source: Google Cloud Platform