Amazon RDS for Oracle X1 and X1e instances are now available in additional regions

Starting today, the Amazon RDS for Oracle db.x1 and db.x1e instance classes are available in additional regions.

db.x1 instances are now available in the US West (N. California) and Asia Pacific (Hong Kong) regions.
db.x1e instances are now available in the US East (Ohio), EU (Frankfurt), Asia Pacific (Seoul), and Asia Pacific (Singapore) regions.

Source: aws.amazon.com

AWS Limit Monitor now supports monitoring vCPU-based On-Demand Instance limits

AWS has updated AWS Limit Monitor, a solution that automatically deploys the services required to proactively track your service limits and send notifications as you approach them. The solution now uses Service Quotas, enabling you to monitor service usage against limits.
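With the Service Quotas integration you can also inspect these limits yourself. A quick sketch using the AWS CLI; the quota code shown is the one commonly used for Running On-Demand Standard instances, but confirm it in your own account:

    # List the EC2 quotas that Service Quotas tracks
    aws service-quotas list-service-quotas --service-code ec2

    # Look up a single quota, e.g. vCPUs for Running On-Demand Standard instances
    aws service-quotas get-service-quota \
        --service-code ec2 \
        --quota-code L-1216C47A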
Source: aws.amazon.com

Amazon Transcribe now supports AWS KMS encryption

Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy to add speech-to-text capabilities to your applications. Until now, Amazon Transcribe used Amazon S3 server-side encryption (SSE) to encrypt its transcripts. Starting today, you can use your own encryption keys from the AWS Key Management Service (KMS) to encrypt the transcripts in your S3 bucket. This gives users more flexibility and control over how their output transcripts are stored. Those who prefer the default S3-SSE encryption method can continue to use it.
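As a sketch of how this looks with the AWS CLI; the job name, bucket names, and KMS key alias below are placeholders, and the caller needs permission to use the specified key:

    # Start a transcription job whose output transcript is encrypted
    # with a customer-managed KMS key instead of the default S3-SSE
    aws transcribe start-transcription-job \
        --transcription-job-name my-transcription-job \
        --language-code en-US \
        --media-format mp3 \
        --media MediaFileUri=s3://my-input-bucket/audio.mp3 \
        --output-bucket-name my-output-bucket \
        --output-encryption-kms-key-id alias/my-transcribe-key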
Source: aws.amazon.com

SAP on Azure Architecture – Designing for performance and scalability

This is the second in a four-part blog series on designing an SAP on Azure architecture. In the first part of the series, we covered designing for security. Robust SAP on Azure architectures are built on the pillars of security; performance and scalability; availability and recoverability; and efficiency and operations. This blog focuses on designing for performance and scalability.

Microsoft support in network and storage for SAP

Microsoft Azure is a preeminent public cloud for running SAP applications. Mission-critical SAP applications run reliably on Azure, a hyperscale, enterprise-proven platform offering scale, agility, and cost savings for your SAP estate.

With the largest portfolio of SAP HANA-certified IaaS cloud offerings, customers can run their production SAP HANA scale-up applications on certified virtual machines ranging from 192 GB to 6 TB of memory. Additionally, for SAP HANA scale-out applications such as BW on HANA and BW/4HANA, Azure supports virtual machines with 2 TB of memory and up to 16 nodes, for a total of up to 32 TB. For customers that require extreme scale today, Azure offers bare-metal HANA Large Instances for SAP HANA scale-up to 20 TB (24 TB with TDIv5) and SAP HANA scale-out to 60 TB (120 TB with TDIv5).

Customers such as CONA Services run some of the largest SAP HANA workloads on any public cloud, with a 28 TB SAP HANA scale-out implementation.

Designing for performance

Performance is a key driver for digitizing business processes and accelerating digital transformation. Production SAP applications such as SAP ERP or S/4HANA need to be performant to maximize efficiency and ensure a positive end-user experience. As such, it is essential to perform a detailed sizing exercise on compute, storage and network for your SAP applications on Azure.

Designing compute for performance

In general, there are two ways to determine the proper size of SAP systems to be implemented in Azure: reference sizing or the SAP Quick Sizer.

For existing on-premises systems, you should reference system configuration and resource utilization data. The system utilization information is collected by the SAP OS Collector and can be reported via SAP transaction OS07N as well as the EarlyWatch Alert. Similar information can be retrieved using any system performance and statistics gathering tool. For new systems, you should use the SAP Quick Sizer.

The links below also list the network and storage throughput for each Azure virtual machine type:

Sizes for Windows Virtual Machines in Azure
Sizes for Linux Virtual Machines in Azure

Designing highly performant storage

In addition to selecting an appropriate database virtual machine based on the SAPS and memory requirements, it is important to ensure that the storage configuration is designed to meet the IOPS and throughput requirements of the SAP database. Be mindful that the chosen virtual machine is capable of driving the required IOPS and throughput. Azure premium managed disks can be striped to aggregate their IOPS and throughput values; for example, five P30 disks together offer 25,000 IOPS and 1,000 MB/s of throughput.
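On Linux VMs, striping is typically done with LVM. A minimal sketch, assuming five P30 data disks attached as /dev/sdc through /dev/sdg (device names will vary by VM):

    # Create physical volumes on the five attached data disks
    pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # Group them into a single volume group
    vgcreate vg_data /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # Create a logical volume striped across all five disks so their
    # IOPS and throughput aggregate
    lvcreate --stripes 5 --stripesize 256 --extents 100%FREE --name lv_data vg_data

    # Create a file system on the striped volume
    mkfs.xfs /dev/vg_data/lv_data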

In the case of SAP HANA databases, we have published a storage configuration guideline covering production scenarios as well as a cost-conscious non-production variant. Following our recommendation for production will ensure that the storage is configured to successfully pass all SAP HCMT KPIs. It is imperative to enable Write Accelerator on the disks associated with the /hana/log volume, as this facilitates sub-millisecond write latency for 4 KB and 16 KB block sizes.
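Write Accelerator is enabled per data-disk LUN on supported M-series VMs. A sketch with the Azure CLI, assuming the /hana/log volume sits on LUNs 2 and 3 (the resource names and LUN numbers are placeholders):

    # Enable Write Accelerator on the LUNs backing /hana/log
    az vm update \
        --resource-group my-rg \
        --name my-hana-vm \
        --write-accelerator 2=true 3=true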

Ultra Disk storage is designed to deliver consistent performance and low latency for I/O-intensive workloads such as SAP HANA and any database (SQL Server, Oracle, etc.). With Ultra Disk, you can reach the maximum I/O limits of a virtual machine with a single ultra disk, without having to stripe multiple disks as is required with premium disks.
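With Ultra Disk, IOPS and throughput are provisioned per disk rather than derived from its size. A sketch with the Azure CLI; the names and performance values are placeholders, and the VM the disk attaches to must have ultra disk support enabled:

    # Create an ultra disk with explicitly provisioned IOPS and throughput
    az disk create \
        --resource-group my-rg \
        --name my-ultra-disk \
        --location eastus2 \
        --zone 1 \
        --size-gb 1024 \
        --sku UltraSSD_LRS \
        --disk-iops-read-write 20000 \
        --disk-mbps-read-write 1000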

As of September 2019, Azure Ultra Disk Storage is generally available in the East US 2, Southeast Asia, and North Europe regions, and is supported on DSv3 and ESv3 VM types. Refer to the FAQ for the latest on supported VM sizes for both Windows and Linux OS hosts. This video demonstrates the leading performance of Ultra Disk Storage.

Designing network for performance

As the Azure footprint grows, a single availability zone may span multiple physical data centers, which can result in network latency impacting your SAP application performance. A proximity placement group (PPG) is a logical grouping that ensures Azure compute resources are physically located close to each other, achieving the lowest possible network latency, i.e., co-location of your SAP application and database VMs. For more information, refer to our detailed documentation for deploying your SAP application with PPGs.

We recommend you consider PPGs within your SAP deployment architecture and that you enable Accelerated Networking on your SAP application and database VMs. Accelerated Networking enables single root I/O virtualization (SR-IOV) for your virtual machine, improving networking performance by removing the host from the data path. Latency between the SAP application server and the database server can be tested with the ABAPMeter report /SSA/CAT.
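Both recommendations can be sketched with the Azure CLI; the resource group, names, image alias, and VM size below are placeholders:

    # Create a proximity placement group for the SAP system
    az ppg create \
        --resource-group my-rg \
        --name sap-ppg \
        --location westeurope \
        --type Standard

    # Place a VM in the PPG with Accelerated Networking enabled
    az vm create \
        --resource-group my-rg \
        --name sap-app-vm \
        --image SLES \
        --size Standard_E32s_v3 \
        --ppg sap-ppg \
        --accelerated-networking true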

ExpressRoute Global Reach allows you to link together ExpressRoute circuits from on-premises to Azure in different regions, creating a private network between your on-premises networks. Global Reach can be used with your SAP HANA Large Instance deployment to enable direct access from on-premises to your HANA Large Instance units deployed in different regions. Additionally, Global Reach can enable direct communication between HANA Large Instance units deployed in different regions.

Designing for scalability

With Azure Mv2 VMs, you can scale up to 208 vCPUs and 6 TB of memory today, with 12 TB coming shortly. For databases that require more than 12 TB, we offer SAP HANA Large Instances (HLI), a purpose-built bare-metal offering dedicated to you. The server hardware is embedded in larger stamps that contain HANA TDI-certified compute, network, and storage infrastructure, in sizes ranging from 36 Intel CPU cores and 768 GB of memory up to a maximum of 480 CPU cores and 24 TB of memory.

Azure global regions at HyperScale

Azure has more global regions than any other cloud provider, offering the scale needed to bring applications closer to users around the world, preserving data residency, and offering comprehensive compliance and resiliency options for customers. As of Q3 2019, Azure spans a total of 54 regions and is available in 140 countries.

Customers like the Carlsberg Group have transformed IT into a platform for innovation through a migration to Azure centered on their essential SAP applications. The Carlsberg migration to Azure encompassed 700 servers and 350 applications—including the essential SAP applications—involving 1.6 petabytes of data, including 8 terabytes for the main SAP database.

Within this blog we have touched upon several topics relating to designing highly performant and scalable architectures for SAP on Azure.
As customers embark on their SAP to Azure journey, we recommend a deep dive into the SAP on Azure documentation to deepen their understanding of using Azure for hosting and running SAP applications, so that highly performant and scalable architectures can be deployed methodically during the various phases of the project. The SAP workload on Azure planning and deployment checklist can be used as a compass to navigate the various phases of a customer's SAP greenfield deployment or on-premises-to-Azure migration project.

In blog #3 in our series we will cover Designing for Availability and Recoverability.
Source: Azure

At the Grace Hopper Celebration, Learn Why Developers Love Docker

Lisa Dethmers-Pope and Amn Rahman at Docker also contributed to this blog post.
Docker hosted a Women’s Summit at DockerCon 2019.
As a Technical Recruiter at Docker, I am excited to be a part of the Grace Hopper Celebration. It is a marvelous opportunity to speak with many talented women in tech and to continue pursuing one of Docker's most valued ambitions: further diversifying our team. The Docker team will be on the show floor at the Grace Hopper Celebration, the world's largest gathering of women technologists, the week of October 1st in Orlando, Florida.
Our Vice President of Human Resources and our Senior Director of Product Management, along with representatives from our Talent Acquisition and Engineering teams, will be there to connect with attendees. We will be showing how to easily build, run, and share an application using the Docker platform, and talking about what it's like to work in tech today.
Supporting Women in Tech
While we’ve made strides in diversity within tech, the 2019 Stack Overflow Developer Survey shows we have work to do. According to the survey, only 7.5 percent of professional developers are women worldwide (it’s 11 percent of all developers in the U.S.).
That’s why Docker hosts Women in Tech events at our own conferences, and we’re pleased to participate in the  Grace Hopper Celebration this year. It’s a place for women technologists to learn, network, and connect with a like-minded community. The conference offers attendees several opportunities to advance their professional development, find and provide mentorship, and further develop their leadership skills.
Last year’s celebration hosted over 20,000 attendees from 78 countries as well as thousands of listeners over livestream. We are thrilled to be involved with the conference and show our support for an organization making such a powerful impact.
Creating and Fostering Connections
2 million developers already use Docker regularly today. We have over 240 regional user groups, and a presence in 80 countries. Diversity and inclusion are a key part of our community, and we’ll continue building on that as we grow.
We are seeking forward-thinking individuals to join our team who have diverse experiences and are passionate about bringing technology that transforms lives, industries, and the world to life.
Whether you’re a curious explorer, a Docker newbie, or a super-powered Docker ninja, you should come join us at the Docker booth to learn more about how you can get the most benefit out of the platform!
If you’re a data scientist, a developer, or just bouncing from one coding assignment to the next, come and learn how you can start using Docker almost immediately! Apart from being introduced to cool Docker lingo, you’ll learn how to quickly launch a Docker environment, spin up an app on your machine, and share it with the rest of the world via Docker Hub.
We look forward to collaborating and connecting with you. Come visit us at technology showcase booths 359 and 3648!
In the meantime, if you’d like to dive into diversity in tech, these three DockerCon sessions are a great starting point:

A Transformation of Attitude: Why Mentors Matter
Diversity is not Only about Ethnicity and Gender
How Intentional Diversity Creates Thought Leadership


Source: https://blog.docker.com/feed/

Announcing Azure Storage Explorer 1.10.0

This month we released a new version of Azure Storage Explorer, 1.10.0. This latest version of Storage Explorer introduces several exciting new features and delivers significant updates to existing functionality. These features and changes are all designed to make users more efficient and productive when working with Azure Storage, Azure Cosmos DB, ADLS Gen2, and, starting with 1.10.0, managed disks. If you've never used Storage Explorer before, you can download it for Windows, macOS, or Linux on the product page here.

Storage Explorer adds support for managed disks

One of the most challenging parts of migrating on-premises virtual machines (VMs) to Azure is moving the data for these VMs into Azure. Storage Explorer 1.10.0 makes this process much easier by adding support for managed disks. The new features we've added for managed disks let you create and manage VM disks using the easy-to-use Storage Explorer GUI. Using Storage Explorer also gives you an incredibly performant workflow. When you upload a VHD to a managed disk, Storage Explorer leverages the power and speed of AzCopy v10 to quickly get your data into Azure. Storage Explorer's support for managed disks also includes the ability to create snapshots of, copy, download, and delete your managed disks. You can learn more about the latest disk support capabilities on our recent blog.

Storage Explorer introduces new user settings

Ever since Storage Explorer was first released, users have asked for a variety of settings that would allow them to configure how Storage Explorer behaves. As more settings have been added, though, managing and discovering these settings has proved increasingly difficult. To help alleviate those problems, we are excited to introduce a centralized settings user interface (UI). From this UI, you can configure many of Storage Explorer's existing settings, such as proxy and application theme. We've also added settings which allow you to log out on exit and to toggle the refresh mode of the data explorers.

We have a long list of user requested settings in our backlog which will make their way to the settings UI in future updates. And if you have a suggestion for a setting you’d like to see, feel free to let us know by opening an issue at our GitHub repo.

Storage Explorer now available on the Snap Store

The last major change we’d like to highlight for 1.10.0 is the addition of Storage Explorer to the Canonical Snap Store. Installing Storage Explorer on Linux has always been a challenge for users, but when you install from the Snap Store things become as easy as installing on any other platform. The Snap platform will install all dependencies for you, and help you keep Storage Explorer up to date and secure. If you’d like to install Storage Explorer from the Snap Store, you can find it listed on the store.

Looking forward

Over the coming months, we have plans to add even more new features and capabilities to Storage Explorer. In the near future, we will be making AzCopy the default transfer engine for all Blob transfers, and we'll start work on using AzCopy for File Shares. We've also been hard at work localizing Storage Explorer into additional languages so more people all over the world can effectively use the product. We're going to improve on and bring additional features to ADLS Gen2, including enhanced ACL management and increased parity with Blob features. And of course, we'll be looking at GitHub for any user requests for new features, so if there's something you would like to see then we highly encourage you to open an issue.

Install Storage Explorer now

Download Storage Explorer 1.10.0 today to take advantage of all of these new features. If you have any feedback, please make sure to open a new issue on our GitHub repo. If you are experiencing difficulties using the product, please open a support ticket following these instructions.
Source: Azure

Stay on top of best practices with Azure Advisor alerts

To get the most out of your Azure investment and run as efficiently as possible, we recommend that you regularly review and optimize your resources for high availability, security, performance, and cost. That’s why we created Azure Advisor, a free Azure service that helps you quickly and easily optimize your Azure resources with personalized recommendations based on your usage and configurations.

But with so many priorities vying for your attention, it can be easy to miss remediating your Advisor recommendations. So, what’s a good way to stay on top of these critical optimizations that can save you money, boost performance, strengthen your security posture, and increase uptime?

Get notified about new recommendations with Advisor alerts

Advisor now offers user-configurable alerts so you can get automatically notified as soon as your best practice recommendations become available. Advisor alerts will allow you to act more quickly and efficiently to optimize your Azure resources and stay on top of your new recommendations.

You can configure these alerts to be triggered based on several factors:

Recommendation category – high availability, performance, or cost.
Business impact – high, medium, or low.
Recommendation type – for example, right-size or shutdown underutilized virtual machines (VMs), enable VM backup, or use availability sets to improve fault tolerance.

You can also choose from a wide range of notification options, including email, SMS, push notification, webhook, IT service management integration with popular tools like ServiceNow, Automation runbooks, and more. Your notification preferences are configured using action groups, so you can repurpose any action groups you’ve already set up, such as those for your custom Azure Monitor alerts or Azure Service Health alerts.

Best practices for your Advisor alerts

As you get started with Advisor alerts, we have three tips for you.

First, start simple by choosing a few high impact recommendations that are important to your organization, based on your business goals and priorities. For example, you might have a leadership mandate to reduce costs by a certain percentage, in which case you might decide that “Right-size or shutdown underutilized VMs” is a critical recommendation for you. Then create an alert for that set of recommendations. You can always change your alert or add more later.
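Since Advisor alerts are implemented as activity log alerts, this first alert can also be scripted. A sketch with the Azure CLI; the resource group, names, and email address are placeholders, and the condition fields mirror what the portal configures:

    # Action group that emails the owning team
    az monitor action-group create \
        --resource-group my-rg \
        --name advisor-ag \
        --short-name advag \
        --action email ops-team ops@example.com

    # Alert that fires whenever a new cost recommendation is created
    az monitor activity-log alert create \
        --resource-group my-rg \
        --name advisor-cost-alert \
        --scope /subscriptions/<subscription-id> \
        --condition "category=Recommendation and properties.recommendationCategory=Cost" \
        --action-group advisor-ag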

Second, consider who is the right person to notify about new recommendations and the best way to notify them. To streamline the process, it's best to notify the individual or team who has the permission and authority to remediate the recommendation. In keeping with the "start simple" principle, you may wish to begin with email notifications, which are the most basic to configure and the least intrusive to receive. Again, you can always modify your preferences later.

Finally, once you’ve tackled the first two tips and are comfortable with Advisor alerts, start to explore automation scenarios. For example, you can automatically route a new best practice recommendation through your ticketing system and assign it to the right team for remediation. In some cases, you can even use a combination of Advisor alerts and Automation runbooks to automatically remediate the recommendation.

Get started with Advisor alerts

Visit Advisor in the Azure portal to review your recommendations and start setting up your Advisor alerts. For more in-depth guidance, visit the Advisor documentation. Let us know if you have a suggestion for Advisor by submitting an idea in our forums here.
Source: Azure

Cloud Build brings advanced CI/CD capabilities to GitHub

If you use continuous integration (CI) or continuous delivery (CD) as part of your development environment, being able to configure and trigger builds based on different repo events is essential to creating advanced git-based CI/CD workflows and multi-environment rollouts. Customizing which builds to run on changes to branches, tags, and pull requests can speed up development, notify teammates when changes are ready to be merged, and deploy merged changes to different environments.

Today, millions of developers collaborate on GitHub. To help make these developers more productive, we are excited to launch enhanced features in the Cloud Build GitHub App. Here is a list of the advanced capabilities you gain with Cloud Build's new features.

Trigger builds on specific pull request, branch, and tag events

When integrating with GitHub via the app, you can now create build triggers to customize which builds to run on specific repo events. For example, you can set up build triggers to fire only on pull requests (PRs), pushes to master, or release tags. You can further specify different build configs to use for each trigger, letting you customize which build steps to run depending on the branch, tag, or PR the change was made to.

You can further customize build triggers by configuring them to run, or not run, based on which files have changed. This lets you, for example, ignore a change to a README file, or only trigger a build when a file in a particular subdirectory has changed (as in a monorepo setup). Lastly, for PRs, an optional feature lets you require a comment on the PR to trigger a build, such that only repo owners and collaborators, and not external contributors, can invoke a build.

If you already use the build trigger feature within Cloud Build, many of these options will look familiar. With this update, we are extending build triggers to support new capabilities, such as GitHub PR events, for developers who use GitHub and want more granular control to create advanced CI/CD pipelines with Cloud Build.

View build status in GitHub

Integrating CI feedback into your developer tools is critical to maintaining your development flow. Builds triggered via the GitHub App automatically post status back to GitHub via the GitHub Checks API. The feedback is integrated directly into the GitHub developer workflow, reducing context switching. Updates posted to GitHub include build status, build duration, error messages, and a link to detailed build logs. With GitHub protected branches, you can now easily use Cloud Build to gate merges on build status and re-run builds directly from the GitHub UI.

Create and manage triggers programmatically

As the number of build triggers in your environment grows, creating and updating triggers from the UI can become time-consuming and hard to manage. With the Cloud Build GitHub App update, you can now configure build triggers via the Cloud Build API or Cloud SDK. Either inline in the API request or via a JSON or YAML file, you can programmatically create, update, and delete GitHub triggers to more easily manage build triggers across a large team or when automating the CI/CD setup for new repos.

Create a local trigger.yaml file:
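A minimal trigger definition might look like the following sketch; the repository owner, name, and branch pattern are placeholders, and the build config is assumed to live at cloudbuild.yaml in the repository root:

    # trigger.yaml -- sketch of a GitHub push trigger (placeholder values)
    description: Build on every push to master
    github:
      owner: my-org
      name: my-repo
      push:
        branch: ^master$
    filename: cloudbuild.yaml

Import the trigger via the CLI:

Assuming the Cloud SDK beta components are installed, the file can be imported with:

    gcloud beta builds triggers import --source=trigger.yaml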
With this integration between Cloud Build and GitHub, you now have an easy way to validate your pull requests early and often and to set up more advanced git-based CI/CD workflows. The ability to create triggers in the Google Cloud Console or programmatically via config files makes it easy to get started and automate your end-to-end developer workflows. To learn more, check out the documentation, or try this Codelab.
Source: Google Cloud Platform