Amazon AppStream 2.0 now supports Fleet Auto Scaling to help you optimize your streaming costs.
Quelle: aws.amazon.com
Today, we made it easier for you to understand the permissions your AWS Identity and Access Management (IAM) policies grant with policy summaries in the IAM console. Instead of reading JSON policy documents, you can scan a table that summarizes the services, actions, resources, and conditions defined in each policy. This summary enables you to quickly understand the permissions defined in each IAM policy. You can find this summary on the policy detail page or the Permissions tab on an individual IAM user’s page. To learn more about policy summaries, see the blog post, Move Over JSON – Policy Summaries Make Understanding IAM Policies Easier, and the IAM documentation, Understanding Policy Summaries in the AWS Management Console.
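Policy summaries condense the underlying JSON policy document; the raw document itself remains available through the IAM API. As a minimal sketch (the policy ARN here is just an example), the following boto3 snippet fetches the JSON that a summary renders as a table:

```python
import boto3

iam = boto3.client("iam")

# An AWS-managed policy, used purely as an example.
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"

# Look up the default (active) version, then fetch its document: the
# services, actions, resources, and conditions a policy summary tabulates.
policy = iam.get_policy(PolicyArn=policy_arn)["Policy"]
version = iam.get_policy_version(
    PolicyArn=policy_arn,
    VersionId=policy["DefaultVersionId"],
)

# boto3 returns the document already decoded into a Python dict.
print(version["PolicyVersion"]["Document"])
```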
Quelle: aws.amazon.com
Over the past couple of weeks, we have announced multiple enhancements for backup and recovery of both Windows and Linux Azure Virtual Machines that reinforce Azure Backup’s cloud-first approach of backing up critical enterprise data in Azure. Enterprise production environments in Azure are becoming increasingly dynamic and are characterized by frequent VM configuration changes (such as network or platform related updates) that can adversely impact backup. Today, we are taking a step to enable customers to monitor the impact of configuration changes and take steps to ensure the continuity of successful backup operations. We are excited to announce the preview of Backup Pre-Checks for Azure Virtual Machines.
Backup Pre-Checks, as the name suggests, check your VMs’ configuration for issues that can adversely affect backups, aggregate this information so that you can view it directly on the Recovery Services Vault dashboard, and provide recommendations for corrective measures to ensure successful file-consistent or application-consistent backups wherever applicable. All of this comes without any infrastructure to deploy and at no additional cost.
Backup Pre-Checks run as part of the scheduled backup operations for your Azure VMs and complete with one of the following states (a minimal triage sketch follows the list):
Passed: This state indicates that your VM’s configuration is conducive to successful backups and no corrective action needs to be taken.
Warning: This state indicates one or more issues in the VM’s configuration that might lead to backup failures, and provides recommended steps to ensure successful backups. Not having the latest VM Agent installed, for example, can cause backups to fail intermittently and falls into this class of issues.
Critical: This state indicates one or more critical issues in the VM’s configuration that will lead to backup failures, and provides the required steps to ensure successful backups. A network issue caused by an update to the NSG rules of a VM, for example, will fail backups because it prevents the VM from communicating with the Azure Backup service, and falls into this class of issues.
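As a minimal sketch (the state names come from the list above; the mapping itself is hypothetical and not part of any Azure SDK), triage logic driven by these states could look like this:

```python
# Hypothetical triage helper keyed on the documented Pre-Check states.
# Azure Backup surfaces these states in the portal; this mapping is an
# illustration, not an Azure API.
PRECHECK_ACTIONS = {
    "Passed": "No action needed; configuration supports successful backups.",
    "Warning": "Apply recommended steps (e.g., update the VM Agent).",
    "Critical": "Fix required issues first (e.g., NSG rules blocking Azure Backup).",
}

def triage(vm_name: str, state: str) -> str:
    action = PRECHECK_ACTIONS.get(state, "Unknown state; inspect the VM blade.")
    return f"{vm_name}: {state} -> {action}"

# Address Critical VMs before Warning ones, as the post recommends.
vms = [("vm-web-01", "Warning"), ("vm-sql-01", "Critical")]
for vm, state in sorted(vms, key=lambda p: p[1] != "Critical"):
    print(triage(vm, state))
```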
Value proposition
Identify and monitor VM configuration issues at scale: With the aggregated view of Backup Pre-Check status across all VMs on the Recovery Services Vault, you can keep track of how many VMs need corrective configuration changes to ensure successful backups.
Resolve configuration issues more efficiently: Use the Backup Pre-Check states to prioritize which VMs need configuration changes. Address the “Critical” Backup Pre-Check status for your VMs first, following the specific required steps to ensure successful backups, before addressing the “Warning” Backup Pre-Check states.
Automated execution: You don’t need to maintain or apply separate schedules for Backup Pre-Checks; they are integrated with existing backup schedules, so they run automatically and collect the latest VM configuration information at the same cadence as the backups themselves.
Getting started
Follow the steps below to start resolving any issues reported by Backup Pre-Checks for Virtual Machine backups on your Recovery Services Vault.
Click on the ‘Backup Pre-Check Status (Azure VMs)’ tile on the Recovery Services Vault dashboard.
Click on any VM with a Backup Pre-Check status of either Critical or Warning. This opens the VM details blade.
Click on the notification at the top of the blade to reveal the configuration issue description and remedial steps.
Related links and additional content
Learn more about preparing your VMs for successful backups
Need help? Reach out to Azure Backup forum for support
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones
Follow us on Twitter @AzureBackup for the latest news and updates
New to Azure Backup? Sign up for a free Azure trial subscription
Quelle: Azure
Over the course of the past few weeks, I have been gathering feedback around Image Streams. This feature can cause a lot of misunderstanding and confusion, even for long-time users. As a maintainer of this feature, I felt obligated to explain in detail what Image Streams are, and how regular users can benefit from using them. Hopefully, this article will help you to understand Image Streams better.
Quelle: OpenShift
You can now get an Amazon SNS notification when new Amazon ECS-optimized AMI releases are available.
Quelle: aws.amazon.com
We are excited to announce that you can now have greater control over your web APIs when you secure them using Azure AD B2C. Today, we are enabling the public preview for using access tokens with your web APIs. This is a powerful feature that many of you have been asking for. It makes it possible to create web APIs that can be accessed by different client applications, and you can even grant permissions to your API on an app-to-app basis. By having more control over who can access your API, you will be able to develop apps with tighter security.
Getting started
Create the Web API
Go to the Azure AD B2C Settings blade in your Azure AD B2C tenant and add a new application. Give your application a name, set ‘Include web app / web API’ to ‘YES’, and enter a ‘Reply URL’ and an ‘App ID URI’. After creating your web API, click on the application, and then ‘Published scopes’. In this blade, you can add the scopes, or permissions, that a client application can request. The ‘user_impersonation’ permission is available by default.
Create the client application
Inside the ‘Applications’ blade, register a new application. After creating it, select ‘API access’ and click the ‘Add’ button. In the next blade, select the API and the permissions you would like to grant your client application. By default, applications are granted the ability to access the user’s profile via the “openid” permission and to generate refresh tokens via the “offline_access” permission. These can be removed if you do not want your client application to have this functionality.
Acquiring an Access Token
Making a request to Azure AD B2C for an access token is similar to the way requests are made for ID tokens. The main difference is the value entered in the “scope” parameter, which contains the specific resource and the permissions your app is requesting. For example, to access the “read” permission for the resource application with an App ID URI of “https://B2CBlog.onmicrosoft.com/notes”, the scope in your request would be “https://B2CBlog.onmicrosoft.com/notes/read”. Below is an example of an authorization code request with the following scopes: “https://B2CBlog.onmicrosoft.com/notes/read”, “openid”, and “offline_access”.
https://login.microsoftonline.com/B2CBlog.onmicrosoft.com/oauth2/v2.0/authorize?p=<yourPolicyId>&client_id=<appID_of_your_client_application>&nonce=anyRandomValue&redirect_uri=<redirect_uri_of_your_client_application>&response_type=code&scope=https%3A%2F%2FB2CBlog.onmicrosoft.com%2Fnotes%2Fread+openid+offline_access
If you would like to learn more about this feature or try it out using our samples, please check out our documentation. Keep your great feedback coming on UserVoice and Twitter (@azuread). If you have questions, get help using Stack Overflow (use the ‘azure-ad-b2c’ tag).
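As a minimal sketch of assembling that request programmatically (the placeholder values are carried over from the example above; this is not an official Azure sample), the same URL can be built like so:

```python
from urllib.parse import urlencode

# Placeholders carried over from the example request above; replace them
# with your own policy ID, app registration, and redirect URI.
tenant = "B2CBlog.onmicrosoft.com"
params = {
    "p": "<yourPolicyId>",
    "client_id": "<appID_of_your_client_application>",
    "nonce": "anyRandomValue",
    "redirect_uri": "<redirect_uri_of_your_client_application>",
    "response_type": "code",
    # Resource scope plus the OpenID Connect scopes, space-separated;
    # urlencode turns the spaces into '+' as in the example above.
    "scope": "https://B2CBlog.onmicrosoft.com/notes/read openid offline_access",
}

url = (
    f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(url)
```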
Quelle: Azure
The Amazon WorkDocs Administrative SDK provides API-based, administrator-level access to WorkDocs site resources. With the Administrative SDK, software vendors and IT organizations can set up their applications to work with WorkDocs. As a result, WorkDocs now allows you to use third-party applications for content management, content migration, virus scanning, data loss prevention (DLP), and eDiscovery.
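As a minimal sketch of what that API-level access could look like, assuming the boto3 “workdocs” client with a placeholder region and organization (directory) ID, an application might enumerate a site’s users as a first step for a DLP or eDiscovery scan:

```python
import boto3

# The region and OrganizationId (the WorkDocs directory ID) below are
# placeholders for illustration.
workdocs = boto3.client("workdocs", region_name="us-east-1")

# Page through all users on the WorkDocs site.
marker = None
while True:
    kwargs = {"OrganizationId": "d-1234567890"}
    if marker:
        kwargs["Marker"] = marker
    response = workdocs.describe_users(**kwargs)
    for user in response.get("Users", []):
        print(user["Username"], user.get("EmailAddress"))
    marker = response.get("Marker")
    if not marker:
        break
```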
Quelle: aws.amazon.com
Events, emotions, coolness: CeBIT wants to reinvent itself yet again, and private visitors are to be welcome once more. Companies see this as an opportunity to anchor digitalization in society, but there are also doubts.
Quelle: Heise Tech News

The Senate voted Thursday to make it easier for internet service providers to share sensitive information about their customers, a first step in overturning landmark privacy rules that consumer advocates and Democratic lawmakers view as crucial protections in the digital age. The vote passed along party lines, 50-48, with all but two Republicans voting in favor of the repeal and every Democrat voting against it. Two Republican senators did not vote.
Passed by the Federal Communications Commission in the final months of the Obama presidency, the privacy rules prohibited internet providers like Comcast and Verizon from selling customer information, including browsing history and location data, without first getting consent. The rules also compelled providers to tell customers about the data they collect, the purpose of that data collection, and to identify the types of third party companies that might be given access to that information.
But the telecom industry and Republicans in Congress fiercely opposed the new regulations. Critics argued that these rules unfairly target internet providers, restricting their ability to turn personal information into targeted advertising and other tailored services, even as giant web companies like Google and Facebook are free to collect and sell our information without those limitations.
Last month, Ajit Pai, the new Trump-appointed chair of the FCC, moved to block a piece of the privacy rules that required internet providers to adopt reasonable security measures and notify customers when data breaches occur. Along with Maureen Ohlhausen, the acting chair of the Federal Trade Commission who was also appointed by President Trump, Pai believes that consumers will be better protected with a single set of internet privacy rules, ones that encompass providers like Comcast and web services like Facebook.
“The federal government shouldn’t favor one set of companies over another — and certainly not when it comes to a marketplace as dynamic as the Internet,” Pai and Ohlhausen said in a joint statement earlier this month. “So going forward, we will work together to establish a technology-neutral privacy framework for the online world.”
But some privacy advocates and Democratic lawmakers view that stance as disingenuous. “If Republicans and the industry want to [work] hand in hand with consumers and come up with a comprehensive privacy regime, we’re happy to meet them at the table,” said Gaurav Laroia, policy counsel at Free Press. “But repealing the broadband privacy rules doesn't get us any closer and instead leaves a regulatory black hole where there is no effective privacy protections for customers of broadband ISPs.”
On the Senate floor Thursday, Democratic Senator Ron Wyden defended the privacy regulations as a crucial tool for transparency and a way to give consumers some control over their digital footprint. “The broadband privacy rules are not some kind of blitzkrieg attack on monetizing consumer data,” he said, “but simply a recognition of the importance of consumer consent.”
With the Senate’s passage, the resolution to strip the FCC’s privacy rules moves next to the House of Representatives. If it gets the expected votes there, the legislation would need Trump’s signature; once enacted, it would also block the FCC from passing similar rules in the future.
Quelle: BuzzFeed
Driving up network speed, reducing cost, saving power, expanding capacity, and automating management become crucial when you run one of the world’s largest cloud infrastructures. Microsoft has invested heavily in optical technology to meet these needs for its Azure network infrastructure. The goal is to provide faster, less expensive, and more reliable service to customers, and at the same time, enable the networking industry to benefit from this work. We’ve been collaborating with industry leaders to develop optical solutions that add more capacity for metro area, long-haul, and even undersea cable deployments. We have integrated these optical solutions within network switches and manage them through Software Defined Networking (SDN).
Our goal was to provide 500 percent additional optics capacity at one-tenth the power, in a fraction of the previous footprint, and at a lower cost than what’s possible with traditional systems. Microsoft chose to break the chicken-and-egg deadlock and create demand for 100 Gbps optics in a stagnant ecosystem unable to meet the demands of cloud computing. In this blog, we explain the improvements we’ve made and where we’re boldly heading next.
Optical innovation leadership
We began thinking about how to more efficiently move network traffic between cloud datacenters, both within metro areas and over long distances around the world. We homed in on fiber optics, or “optics,” as an area where we could innovate, and decided to invest in our own optical program to integrate all optics into our network switching platforms.
What do we mean when we talk about optics? Optics is the means for transmitting network traffic between our global datacenters. Copper cabling has been the traditional means of carrying data and is still a significant component of server racks within the datacenter. However, moving beyond the rack at high bandwidth (for example, 100 Gbps and more) requires optical technologies. Optical links, light over fiber, replace copper wires to extend the reach and bandwidth of network connections.
Optical transmitters “push” pulses of light through fiber optic cables, converting high-speed electrical transmission signals from a network switch to optical signals over fiber. Optical receivers convert the signals back to electrical at the far end of the cable. This makes it possible to interconnect high-speed switching systems tens of kilometers apart in metro areas, or thousands of kilometers apart between regional datacenters.
To connect devices within the datacenter, each device has its own dedicated fiber. Because the light’s optical wavelength, or color, is confined to that fiber, the same color can be reused on every connection. By standardizing on a single color, optics manufacturers can drive down costs through high-volume manufacturing. However, the single-color approach carries a high fiber cost, particularly as distances increase beyond 500 meters. Although this cost is manageable in intra-datacenter implementations where distances are shorter, the fiber used for inter-datacenter connections, metro and long-haul, is much more expensive.
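To make that tradeoff concrete, here is a back-of-the-envelope sketch; the link count and DWDM channel count are illustrative assumptions, not deployment figures:

```python
# Fiber-count math behind the single-color vs. DWDM tradeoff described above.
# All numbers are illustrative assumptions.
links_needed = 64            # parallel 100 Gbps connections between two sites
dwdm_channels = 40           # colors multiplexed onto one fiber pair (assumed)

fibers_single_color = links_needed                  # one fiber pair per link
fibers_dwdm = -(-links_needed // dwdm_channels)     # ceiling division -> 2

print(f"Single-color: {fibers_single_color} fiber pairs")
print(f"DWDM:         {fibers_dwdm} fiber pairs")
```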
Cost
We focused on optics because the cost can be 10x the cost of the switch port and even more for ultra-long haul. We began by looking for partners to collaborate on ultra-high integration of optics into new commodity switching platforms to break this pattern. Simultaneously, we developed open line systems (OLS) for both metro and ultra-long-haul transmission solutions to accept the cost-optimized optical sources. Microsoft partnered with several networking suppliers, including Arista, Cisco, and Juniper, to integrate these optics with substantially reduced power, a very small footprint, and much lower cost to create a highly interoperable ecosystem.
Figure 1. Traditional closed line system
In the past, suppliers have tried to integrate optics directly into switches, but these attempts didn’t include SDN capabilities. SDN is what enables network operators to orchestrate the optical sources and line system with switches and routers. By innovating with the OLS concept, including interfaces to the SDN controller, we can successfully build optics directly into commodity switches and make them fully interoperable. By integrating optics into the switch, we can easily manage and automate the entire solution at a large scale.
With recent OLS advances, we’re also able to achieve 70 percent more spectral efficiency on ultra–long-haul connections between datacenters in distant regions. We have drastically cut costs with this approach, more than doubling the capacity between datacenters by combining tighter channel spacing with new modulation techniques that offer 150–200 Gbps per channel.
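The arithmetic behind that capacity claim is straightforward. As a sketch, assuming roughly 4 THz of usable C-band spectrum (an assumption; the resulting channel count lands near the 104 channels at 37.5 GHz spacing cited in the technical papers listed later in this post):

```python
# Rough aggregate-capacity math for a flex-grid open line system.
# The 4 THz usable-spectrum figure is an assumption for illustration.
usable_spectrum_ghz = 4000
channel_spacing_ghz = 37.5
channels = int(usable_spectrum_ghz // channel_spacing_ghz)  # ~106

for gbps_per_channel in (150, 200):
    total_tbps = channels * gbps_per_channel / 1000
    print(f"{channels} channels x {gbps_per_channel} Gbps ~= {total_tbps:.1f} Tbps")
```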
Figure 2. Open line system (OLS)
Power
In cloud-scale datacenters, power usage is a major consideration and can limit the overall capacity of the solution. As such, we developed a new type of inexpensive 100 Gbps colored optic that fits into a tiny industry-standard QSFP28 package to cover distances within a metro area, along with the metro OLS needed to support it. This solution completely replaces expensive long-haul optics for distances up to 80 km.
Thanks to miniaturization, integration, and innovations in both ultra–long-haul and metro optics, network operators can take advantage of these new approaches and use a fraction of the power (up to 10x less) while giving customers up to 500 percent more capacity. We’ve expanded capacity and spectral efficiency at a lower overall cost, in both capital expenditure and operating expenses, than our current systems.
Space
The physical equipment necessary to connect datacenters between regions takes a large amount of space in some of the most expensive datacenter realty. Optical equipment often dominates limited rack space. The equipment necessary to connect datacenters within a region can also require a large amount of space and can limit the number of servers that can be deployed.
By integrating the metro optics and long-haul optics into commodity switching platforms, we’ve reduced the total space needed for optical equipment to just a few racks. In turn, this creates space for more switching equipment and more capacity. By miniaturizing optics, we’ve reduced the overall size of the metro switching equipment to half of its previous footprint, while still offering 500 percent more capacity.
Figure 3. Inphi’s ColorZ® product—large-scale integration of metro-optimized optics into a standard switch-pluggable QSFP28 package
Automation
Microsoft is focused on simplicity and efficiency in monitoring and maintenance in cloud datacenters. We recognized that a further opportunity to serve the industry lay in full automation of our optical systems to provide reliable capacity at scale and on demand.
Monitoring for legacy systems can’t distinguish optical defects from switching defects. This can result in delays in diagnosing and repairing hardware failures. For the optical space, this has historically been a manual process.
We saw that we could solve this problem by fully integrating optics into commodity switches and making them accessible to our SDN monitoring and automation tooling. By driving an open and optimized OLS model for optical networking equipment, we’ve ensured that the proper interfaces are present to integrate optical operations into SDN orchestration. Now automation can quickly mitigate defects across all networking layers, including service repair, with end-to-end workflow management. The industry benefits because optics monitoring and mitigation can now keep pace with cloud-scale growth patterns.
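As a purely hypothetical illustration of such cross-layer triage (the telemetry fields, thresholds, and function are invented for this sketch and are not an Azure or switch-vendor API):

```python
# Hypothetical classifier separating optical-layer defects from
# switching-layer defects -- the distinction legacy monitoring could not make.
# Telemetry field names and thresholds are invented for illustration.
def classify_link_defect(telemetry: dict) -> str:
    if telemetry.get("rx_power_dbm", 0.0) < -20.0:
        return "optical"      # low received light: fiber, amplifier, or connector issue
    if telemetry.get("uncorrectable_fec_frames", 0) > 0:
        return "optical"      # FEC exhausted: line-side signal degradation
    if telemetry.get("crc_errors", 0) > 0:
        return "switching"    # electrical/ASIC-side corruption
    return "healthy"

# An SDN controller could route each class to the matching automated
# mitigation, e.g. shifting traffic off the link before dispatching repair.
print(classify_link_defect({"rx_power_dbm": -23.5}))  # -> optical
```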
Industry impact
Microsoft has incorporated all these technologies into the Azure network, but the industry at large will benefit. For example, findings from ACG Research show that the Microsoft metro solution will result in a more than 65 percent reduction in Total Cost of Ownership. In addition, the research demonstrates power savings of more than 70 percent over 5 years.
Several of our partners are making available the building blocks of the Microsoft implementation of open optical systems. For example:
Cisco and Arista provide the integration of ultra–long-haul optics into their cloud switching platforms.
If your switches don’t support optical integration, several suppliers offer dense, ultra–long-haul solutions that enable disaggregation of optics from the OLS in the form of compact “pizza box” appliances.
ADVA Optical Networking provides open metro OLS solutions that support Inphi ColorZ® optics and several other turnkey alternatives.
Most ultra–long-haul line systems have supported International Telecommunication Union–defined alien wavelengths (optical sources) for quite some time. Talk to your supplier for additional details.
If you’re interested in the deep, technical details behind these innovations, you can read the following technical papers:
Interoperation of Layer-2/3 Modular Switches with 8QAM/16QAM Integrated Coherent Optics over 2000 km Open Line System
Demonstration and Performance Analysis of 4 Tb/s DWDM Metro-DCI System with 100G PAM4 QSFP28 Modules
Transmission Performance of Layer-2/3 Modular Switch with mQAM Coherent ASIC and CFP2-ACOs over Flex-Grid OLS with 104 Channels Spaced 37.5 GHz
Open Undersea Cable Systems for Cloud Scale Operation
Opening new frontiers of innovation
As these innovations in optics demonstrate, Microsoft is developing unique networking solutions and opening our advances for the benefit of the entire industry. Microsoft is working with our partners to bring even more integration, miniaturization, and power savings into future 400 Gbps interconnects that will power our network.
Read more
To read more posts from this series, please visit:
Networking innovations that drive the cloud disruption
SONiC: The networking switch software that powers the Microsoft Global Cloud
How Microsoft builds its fast and reliable global network
Quelle: Azure