AWS Service Catalog announces integration with AWS Organizations.

AWS Service Catalog, a service for organizing, governing, and provisioning cloud resources on AWS, now integrates with AWS Organizations. With this feature, you can simplify sharing AWS Service Catalog portfolios with all member accounts in your organization. AWS Organizations provides policy-based management for multiple AWS accounts.
AWS Service Catalog administrators no longer need to have the receiving account ID or portfolio ID at hand. By eliminating the manual sharing of portfolio and account IDs, you can save time and reduce the risk of errors. From your organization's master account, you can share portfolios with member accounts by referencing an existing organizational unit or organization ID, without leaving AWS Service Catalog.
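As an illustration, sharing a portfolio with an entire organization then comes down to a single `create_portfolio_share` call that references an organization node instead of individual account IDs. This is a minimal sketch with placeholder IDs, not values from the announcement:

```python
import json

def build_portfolio_share_request(portfolio_id, node_type, node_value):
    # node_type may be ORGANIZATION, ORGANIZATIONAL_UNIT, or ACCOUNT;
    # with the first two, no individual account ID is needed.
    return {
        "PortfolioId": portfolio_id,
        "OrganizationNode": {"Type": node_type, "Value": node_value},
    }

# Placeholder IDs for illustration only.
params = build_portfolio_share_request("port-examplepid", "ORGANIZATION", "o-exampleorgid")
print(json.dumps(params, indent=2))
# With AWS credentials configured, this would be passed to:
#   boto3.client("servicecatalog").create_portfolio_share(**params)
```

The same parameters work from the CLI via `aws servicecatalog create-portfolio-share` with an `--organization-node` argument.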
More information on AWS Service Catalog and AWS Organizations can be found here.
Source: aws.amazon.com

AWS Database Migration Service adds support for parallel full loads and improved LOB migration

The AWS Database Migration Service (DMS) replication engine version 3.1.2 adds features that improve replication performance and the user experience. The new capabilities are described below:
Improved full-load migration speed: When migrating large tables, DMS can now load table partitions or subpartitions in parallel, improving migration speed. If a table has no partitions or subpartitions, you can specify row ranges so that the individual segments are migrated concurrently.
Improved LOB migrations: You can now control large object (LOB) settings at the table level; previously, they were supported only at the task level, applying to all tables in a task. We have also introduced a new LOB mode that combines the advantages of the limited and full LOB modes of earlier versions. For example, if a LOB migration detects a truncation in limited LOB mode, DMS automatically switches to full LOB mode, completes the migration of that particular large LOB, then returns to limited LOB mode and continues the migration.
Control table load order: You can now control the order in which tables are loaded during the full-load phase. For example, if your selected table list contains tables of different sizes, you can set the load order so that smaller tables are loaded before larger ones.
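The three capabilities above are driven by table-mapping rules on the migration task. The sketch below uses hypothetical schema and table names, and the exact option names (`parallel-load`, `lob-settings`, `load-order`) should be checked against the current DMS table-mapping documentation:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-sales",
      "object-locator": { "schema-name": "SALES", "table-name": "%" },
      "rule-action": "include",
      "load-order": 2
    },
    {
      "rule-type": "table-settings",
      "rule-id": "2",
      "rule-name": "orders-settings",
      "object-locator": { "schema-name": "SALES", "table-name": "ORDERS" },
      "parallel-load": { "type": "partitions-auto" },
      "lob-settings": { "mode": "limited", "bulk-max-size": "100000" }
    }
  ]
}
```

A mapping like this would be supplied as the TableMappings parameter when creating or modifying a replication task.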
More information on the improvements in DMS can be found in our blog. For the availability of AWS DMS, see the AWS region table.
Source: aws.amazon.com

Introducing the New Docker Hub

Today, we’re excited to announce that Docker Store and Docker Cloud are now part of Docker Hub, providing a single experience for finding, storing and sharing container images. This means that:

Docker Certified and Verified Publisher Images are now available for discovery and download on Docker Hub
Docker Hub has a new user experience

 
Millions of individual users and more than a hundred thousand organizations use Docker Hub, Store and Cloud for their container content needs. We’ve designed this Docker Hub update to bring together the features that users of each product know and love the most, while addressing known Docker Hub requests around ease of use, repository and team management.
Here’s what’s new:
Repositories

View recently pushed tags and automated builds on your repository page
Pagination added to repository tags
Improved repository filtering when logged in on the Docker Hub home page

Organizations and Teams

As an organization Owner, see team permissions across all of your repositories at a glance.
Add existing Docker Hub users to a team via their email (if you don’t remember their Docker ID)

New Automated Builds

Speed up builds using Build Caching
Add environment variables and run tests in your builds
Add automated builds to existing repositories

Note: For Organizations, GitHub & BitBucket account credentials will need to be re-linked to your organization to leverage the new automated builds. Existing Automated Builds will be migrated to this new system over the next few months. Learn more
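As one hedged example of running tests in a build, Docker Hub's autotest convention looks for a `docker-compose.test.yml` file defining a `sut` service; the build is marked failed if that service exits with a non-zero code. The test script name below is a hypothetical placeholder:

```yaml
# docker-compose.test.yml: Docker Hub builds the image and runs the
# `sut` service during autotest; a non-zero exit code fails the build.
sut:
  build: .
  command: ./run_tests.sh
```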
 
Improved Container Image Search

Filter by Official, Publisher and Certified images, guaranteeing a level of quality in the Docker images listed by your search query
Filter by categories to quickly drill down to the type of image you’re looking for

 
Existing URLs will continue to work, and you’ll automatically be redirected where appropriate. No need to update any bookmarks.
Verified Publisher Images and Plugins
Verified Publisher Images are now available on Docker Hub. Similar to Official Images, these images have been vetted by Docker. While Docker maintains the Official Images library, Verified Publisher Images are provided by our third-party software vendors.
Verified Publisher Images and Plugins:

Are tested and supported on Docker Enterprise platform by verified publishers
Adhere to Docker’s container best practices
Pass a functional API test suite
Complete a vulnerability scanning assessment
Are provided by partners with a collaborative support relationship

Let us know what you think
We’ll be rolling out the new Docker Hub to users over time at https://hub.docker.com.
Have feedback on these updates? We’d love to hear from you. Let us know in this short survey.
Source: https://blog.docker.com/feed/

Azure Backup Server now supports SQL 2017 with new enhancements

V3 is the latest upgrade for Microsoft Azure Backup Server (MABS). Azure Backup Server can now be installed on Windows Server 2019 with SQL 2017 as its database. MABS V3 brings key enhancements in the areas of storage and security.

Security

Preventing critical volumes’ data loss

When selecting the storage volumes that MABS should use for backups, a user may accidentally select the wrong volume. Selecting volumes containing critical data may result in unexpected data loss. With MABS V3 you can prevent this by excluding those volumes from being available as backup storage, keeping your critical data safe from unexpected deletion.

TLS 1.2

Transport Layer Security (TLS) is the cryptographic protocol that ensures communication security over the network. With TLS 1.2 support, MABS V3 ensures more secure communication for backups. MABS now offers TLS 1.2 communication between Azure Backup Server and the protected servers, for certificate-based authentication, and for cloud backups.

Storage

Volume migration

MABS V3 provides the flexibility to move your on-premises backup data sources to other storage for efficient resource utilization. For example, during a storage upgrade, you can move data sources such as frequently backed-up SQL databases to higher-performance storage to achieve better results. You can also migrate your backups and configure them to be stored on a different target volume when a volume is close to exhaustion and cannot be extended.

Optimized consistency checks for RCT VMs

With the resilient change tracking (RCT) mechanism in Hyper-V VMs, MABS optimizes network and storage consumption by transferring only the changed data during consistency check jobs. This reduces the overall need for time-consuming consistency checks, making incremental backups faster and easier.

Related links and additional content:

If you are new to Azure Backup Server, refer to the Azure Backup Server documentation.
Want more details? Check out what’s new in MABS.
Need help? Reach out to the Azure Backup forum for support.

Source: Azure

Azure Functions now supported as a step in Azure Data Factory pipelines

Azure Functions is a serverless compute service that enables you to run code on-demand without having to explicitly provision or manage infrastructure. Using Azure Functions, you can run a script or piece of code in response to a variety of events. Azure Data Factory (ADF) is a managed data integration service in Azure that allows you to iteratively build, orchestrate, and monitor your Extract Transform Load (ETL) workflows. Azure Functions is now integrated with ADF, allowing you to run an Azure function as a step in your data factory pipelines.

Simply drag an “Azure Function activity” to the General section of your activity toolbox to get started.

You need to set up an Azure Function linked service in ADF to create a connection to your Azure Function app.

Provide the Azure Function name, method, headers, and body in the Azure Function activity inside your data factory pipeline.

You can also parameterize your function name using rich expression support in ADF. Get more information and detailed steps on using Azure Functions in Azure Data Factory pipelines.
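For orientation, an Azure Function activity inside a pipeline definition has roughly the following JSON shape; the activity, linked service, and function names here are hypothetical placeholders:

```json
{
  "name": "InvokeMyFunction",
  "type": "AzureFunctionActivity",
  "linkedServiceName": {
    "referenceName": "MyAzureFunctionLinkedService",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "functionName": "HttpTriggeredFunction",
    "method": "POST",
    "headers": {},
    "body": { "message": "hello from ADF" }
  }
}
```

The linked service referenced here holds the Function app URL and function key used to authenticate the call.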

Our goal is to continue adding features and improve the usability of Data Factory tools. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.
Source: Azure

Automate Always On availability group deployments with SQL Virtual Machine resource provider

We are excited to share that a new, automated way to configure high availability solutions for SQL Server on Azure Virtual Machines (VMs) is now available using our SQL VM resource provider.

To get started today, follow the instructions in the table below.

High availability architectures are designed to continue to function even when there are database, hardware, or network failures. Azure Virtual Machine instances using Premium Storage for all operating system disks and data disks offer a 99.9 percent availability SLA. This SLA is impacted by three scenarios: unplanned hardware maintenance, unexpected downtime, and planned maintenance.

To provide redundancy for your application, we recommend grouping two or more virtual machines in an Availability Set so that during either a planned or unplanned maintenance event, at least one virtual machine is available. Alternatively, to protect against data center failures, two or more VM instances can be deployed across two or more Availability Zones in the same Azure region; this guarantees connectivity to at least one instance at least 99.99 percent of the time. For more information, see the “SLA for Virtual Machines.”

These mechanisms ensure high availability of the virtual machine instance. To get the same SLA for SQL Server on Azure VM, you need to configure high availability solutions for SQL Server on Azure VM. Today, we are introducing a new, automated method to configure Always On availability groups (AG) for SQL Server on Azure VMs with SQL VM resource provider (RP) as a simple and reliable alternative to manual configuration.

SQL VM resource provider automates Always On AG setup by orchestrating the provisioning of various Azure resources and connecting them to work together. With SQL VM RP, Always On AG can be configured in three steps as described below.

Step 1 – Windows Failover Cluster
SQL VM RP resource type: SqlVirtualMachineGroup
Method to deploy: Automated – ARM template
Prerequisites: VMs should be created from SQL Server 2016 or 2017 Marketplace images, should be in the same subnet, and should be joined to an AD domain.

Step 2 – Availability group
SQL VM RP resource type: N/A
Method to deploy: Manual
Prerequisites: Step 1

Step 3 – Availability group listener
SQL VM RP resource type: SqlVirtualMachineGroup/AvailabilityGroupListener
Method to deploy: 3.1 Manual – create the internal Azure Load Balancer resource; 3.2 Automated – ARM template to create and configure the AG listener
Prerequisites: 3.1 Manual – none; 3.2 Automated – Step 2

Prerequisites

You should start with deploying SQL VM instances that will host Always On AG replicas from Azure Marketplace SQL Server VM images. Today, SQL VM resource provider supports automated Always On AG only for SQL Server 2016 and SQL Server 2017 Enterprise edition.

Each SQL VM instance should be joined to an Active Directory domain that is either hosted on an Azure VM or extended from on-premises to Azure via network peering. VM instances can be joined to the Active Directory domain manually or by running the Azure quick start domain join template.

All SQL VM instances that will host Always On AG replicas should be in the same VNet and the same subnet.

1. Configure a Windows Failover Cluster

The Microsoft.SqlVirtualMachine/SqlVirtualMachineGroup resource defines the metadata about the Windows Failover Cluster, including the version and edition, the fully qualified domain name, the AD accounts that manage the cluster, and the storage account used as the cloud witness. Joining the first SQL VM to the SqlVirtualMachineGroup bootstraps the Windows Failover Cluster service and joins the VM to the cluster. This step can be automated with an ARM template available in Azure Quick Starts as 101-sql-vm-ag-setup.
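As a rough sketch of the resource involved, loosely following the 101-sql-vm-ag-setup quickstart template (all values are placeholders, and property names and the apiVersion should be verified against the current template):

```json
{
  "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups",
  "apiVersion": "2017-03-01-preview",
  "name": "Cluster",
  "location": "[resourceGroup().location]",
  "properties": {
    "sqlImageOffer": "SQL2017-WS2016",
    "sqlImageSku": "Enterprise",
    "wsfcDomainProfile": {
      "domainFqdn": "contoso.com",
      "clusterOperatorAccount": "operator@contoso.com",
      "clusterBootstrapAccount": "bootstrap@contoso.com",
      "sqlServiceAccount": "sqlservice@contoso.com",
      "storageAccountUrl": "https://examplewitness.blob.core.windows.net/",
      "storageAccountPrimaryKey": "<storage-account-key>"
    }
  }
}
```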

2. Configure an Always On AG

Because the Windows Failover Cluster service is configured in the first step, an Always On AG can simply be created via SSMS on the primary Always On AG replica. This step must be performed manually.

3. Create an Always On AG listener

An Always On AG listener requires an Azure Load Balancer (LB). The Load Balancer provides a “floating” IP address for the AG listener, which allows quicker failover and reconnection. If the SQL VMs that are part of the availability group are in the same availability set, you can use a Basic Load Balancer; otherwise, you need a Standard Load Balancer. The Load Balancer should be in the same VNet as the SQL VM instances. SQL VM RP supports an Internal Load Balancer (ILB) for the AG listener. You should manually create the ILB before provisioning the AG listener.

Provisioning a Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups/AvailabilityGroupListener resource with the ILB name, availability group name, cluster name, SQL VM resource IDs, and the AG listener IP address and name creates and configures the AG listener. SQL VM RP handles the network settings, configures the ILB backend pool and health probe, and finally creates the AG listener with the given IP address and name. As a result of this step, any VM within the same VNet can connect to the Always On AG via the AG listener name. This step can be automated with an ARM template available in Azure Quick Starts as 101-sql-vm-aglistener-setup.
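A correspondingly hedged sketch of the listener resource, loosely modeled on the 101-sql-vm-aglistener-setup quickstart (names, addresses, and property spellings are placeholders to verify against the current template):

```json
{
  "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachineGroups/availabilityGroupListeners",
  "apiVersion": "2017-03-01-preview",
  "name": "Cluster/myAgListener",
  "properties": {
    "availabilityGroupName": "myAvailabilityGroup",
    "port": 1433,
    "loadBalancerConfigurations": [
      {
        "loadBalancerResourceId": "[resourceId('Microsoft.Network/loadBalancers', 'myInternalLB')]",
        "probePort": 59999,
        "privateIpAddress": {
          "ipAddress": "10.0.0.7",
          "subnetResourceId": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'myVNet', 'default')]"
        },
        "sqlVirtualMachineInstances": [
          "[resourceId('Microsoft.SqlVirtualMachine/sqlVirtualMachines', 'sqlvm1')]",
          "[resourceId('Microsoft.SqlVirtualMachine/sqlVirtualMachines', 'sqlvm2')]"
        ]
      }
    ]
  }
}
```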

Automated Always On AG with SQL VM RP simplifies configuring Always On availability groups by handling infrastructure and network configuration details. It offers a reliable deployment method with right resource dependency settings and internal retry policies. Try deploying automated Always On availability groups with SQL VM RP today to improve high availability for SQL Server on Azure Virtual Machines.

Start taking advantage of these expanded SQL Server Azure Virtual Machine capabilities enabled by our resource provider today. If you have a question or would like to make a suggestion, you can contact us through UserVoice. We look forward to hearing from you!
Source: Azure

Streamlined IoT device certification with Azure IoT certification service

For over three years, we have helped customers find devices that work with Azure IoT technology through the Azure Certified for IoT program and the Azure IoT device catalog. In that time, our ecosystem has grown to one of the largest in the industry with more than 1,000 devices and starter kits from over 250 partners.

Today, we are taking steps to further grow our device partner ecosystem with the release of the Azure IoT certification service (AICS), a new web-based test automation workflow, which is now generally available. AICS will significantly reduce the operational overhead and engineering costs for hardware manufacturers to get their devices certified for the Azure Certified for IoT program and showcased in the Azure IoT device catalog.

Over the past year, we’ve made significant improvements to the program such as improving the discovery of certified devices in the Azure Certified for IoT device catalog and expanding the program to support Azure IoT Edge devices. The goal of our certification program is simple – to showcase the right set of IoT devices for our customers’ industry specific vertical solutions and simplify IoT device development.

AICS is designed and engineered to help achieve these goals, delivering on four key areas listed below:

Consistency

AICS is a web-based test automation workflow that works on any operating system and web browser. AICS communicates with its own set of Azure IoT Hub instances to automatically validate devices’ bi-directional connectivity to Azure IoT Hub and other IoT Hub primitives.

Previously, hardware manufacturers had to instantiate their own IoT Hub using their Azure subscription in order to get certified. AICS not only eliminates Azure subscription costs for our hardware manufacturers, but also streamlines the certification process through automation. These changes deliver more quality and consistency than the manual processes that were in place before.

Additional tests

The certification program for IoT devices has always validated bi-directional connectivity between device and the IoT Hub cloud service (namely device-to-cloud and cloud-to-device). As IoT devices become more intelligent and support more capabilities, we have now expanded our program to validate the device twins and direct methods IoT Hub primitives. AICS validates these capabilities, and the Azure IoT device catalog showcases them correspondingly, making it easy for device seekers to build IoT solutions on these rich capabilities.

The screenshot below shows customizable test cases. By default, device-to-cloud is the required test and all others are optional. This new requirement allows constrained devices such as microcontrollers to be certified.

The screenshot below shows how tested capabilities are shown on the device description page in the device catalog.

Flexibility

Previously, hardware manufacturers were required to use the Azure IoT device SDK to build an app that establishes connectivity from devices to the cloud managed by Azure IoT Hub services. Based on partners’ feedback, we now also support devices that do not use the Azure IoT device SDK to establish connectivity to Azure IoT Hub, for example devices that use the IoT Hub resource provider REST API to create and manage IoT Hub programmatically, or manufacturers that opt to use an equivalent device SDK to establish connectivity.

In addition, AICS allows hardware manufacturers to configure the necessary parameters for customized test cases, such as the number of telemetry messages sent from the devices.

The screenshot below illustrates an example page that shows the ability to configure each test case.

Simplicity

Finally, we have invested in a user experience that is simple and intuitive for hardware manufacturers. For example, in the device catalog, we have streamlined the process from device registration to running the validations with AICS through a simple wizard-driven flow. Hardware developers can easily troubleshoot failed tests through detailed logs that improve diagnosability.

Because AICS is a web-based workflow, hardware manufacturers are not required to deploy standalone test kits (no .exe, .msi, etc.), which tend to become outdated over time, locally on their devices.

The screenshot below shows each test case run. Log files show the test pass/fail along with raw data sent from device to cloud. The submit button only shows up when all the test cases selected pass. Once the tests are complete, we will review the results and notify the submitter of additional steps to complete the entire certification process.

Next steps

Go to Partner Dashboard to start your submission.

Effective immediately, all new incoming submissions for certification must be validated via AICS. We also highly recommend that existing certified IoT devices re-certify using AICS because doing so allows us to showcase your additional hardware capabilities.

You can learn more about AICS in this demo video.

If you have any questions, please contact the Azure Certified for IoT team at iotcert@microsoft.com.
Source: Azure

Creating a smart grid with technology and people

This blog post was authored by Peter Cooper, Senior Product Manager, Microsoft IoT.

It’s 1882. Thomas Edison has just surpassed his breakthrough invention—the first incandescent lightbulb—by collaborating with J.P. Morgan to open the first industrial-scale power station in the United States.

Flash forward to today: Power generation, distribution, transmission, and consumption now drive business and modern life around the globe. The industry operates on a vast scale, with a complex web of relationships and technology that enables instant, reliable delivery throughout much of the developed world.

And that grid that got its start back in the 19th century? It’s sorely in need of a massive update. Utilities and their partners are searching for new solutions that can meet 21st-century energy challenges: surging demand for electricity, two-way energy flow, increased use of clean energy sources, and stairstep approaches to creating a smart grid to tackle the thorniest challenges first. Here’s a look at the digital transformation of the power and utilities industry that is picking up steam.

The need to make electrical power more sustainable

The current model of power production and delivery won’t sustain fast-paced business and human population growth. Power systems are already coping with spikes, surges, and even blackouts. Who can forget the Northeast blackout of 2003?

Moreover, the existing grid is wasteful, with 285 percent more power loss today than in 1984. Such inefficiency has highly negative consequences for consumers’ reliability needs, for the climate, and for businesses’ bottom lines.

A 150-year-old industry goes high tech

Fortunately, new technologies are emerging to manage demand, reduce waste, and harness new energy sources and producers. Here are just a few:

Smart meters that communicate their condition via wireless networks, providing consumers with real-time data and aiding in faster resolution of power disruption issues.
Connected home systems that use sensor-tagged equipment and AI-powered smart assistants to fine-tune energy use throughout the house, even achieving “zero net” energy use.
State-of-the-art batteries that store energy, for future use or sale back to the grid.
Microgrids that combine solar panels, fuel cells, and battery energy storage to power neighborhoods.
Connected cars that reduce energy use, can be charged systematically, and serve as movable energy storage devices.
Next-generation distribution and transmission infrastructures that will enable two-way power flow.
The smart grid, which combines multiple innovations to enable systematic load balancing, peak leveling programs, and full leverage of sustainable energy sources.

All of these developments—and more—are making it possible to deliver electricity to the right customer at the right time and in the right manner. Power companies now also can accommodate the two-way flow of energy, as grassroots producers, both businesses and individuals, deploy their own small-scale energy production. These capabilities are being amplified by a new IoT platform, Azure Digital Twins, that uses spatial intelligence to model complex relationships between people, places, and devices in the energy value chain. Let’s take a closer look.

A grid made smarter by digital technology

Without question, the legacy grid needs to be modernized with state-of-the-art infrastructure to improve effectiveness and ensure a reliable flow of continuous power. Creating a smart grid is slated to cost between $476 billion and $880 billion, and it will take years to achieve. But new Internet of Things (IoT) digital technology can “smarten” today’s grid faster and at a lower cost. It can also connect all the players—electricity producers, customers, and transmission and distribution companies—providing continuous feedback to help them make more sustainable choices now.

Agder Energi, a hydropower company in Norway, already has used sensor-linked equipment and Microsoft technology, including Microsoft Azure, Power BI, and Azure IoT Hub, to improve energy forecasting, adapt energy production to changing needs, and empower consumers with insights to manage their energy usage. Now, with Azure Digital Twins, Agder Energi is taking those capabilities to a new level. The technology enables Agder Energi to model grid assets and distributed energy resources and optimize them where needed.

Why is this important? Azure IoT enables power companies like Agder Energi to rapidly identify and address sources of waste, right-size production, prioritize investments, and incorporate new energy producers and sources. For example, if a power company finds that a substation is a major source of energy leakage, the company could fast-track upgrades. Or if demand unexpectedly surges, the power company may elect to add more resources, such as wind or solar energy, to ensure continuous electrical delivery.

The rise of new energy “prosumers”

Where will generation companies harness new energy sources? Meet the new prosumers: businesses and individuals who are both consumers and producers of energy.

Businesses may elect to lease land to a wind farm, use solar panels across company buildings, or run fleets of electric vehicles that both use and store energy. Similarly, consumers are increasingly buying solar panels and electric cars to be more sustainable, as well as using smart meters and analytics to monitor and reduce consumption. Both of these groups are likely to store and sell excess energy back to the grid. While in its infancy, the prosumer market is expected to take off in the near future. Mass adoption of autonomous cars could really galvanize this movement.

Allego is a European provider of smart charging solutions and electric vehicle cloud solutions. The company uses Azure IoT to model all the key participants in the charging network, such as regions, charging stations, and vehicles, to simplify the business complexity of planning and executing charging. The solution enables charging stations to more precisely plan energy delivery, prioritize public vehicles such as buses over others, and charge consumer vehicles according to driver preference, among other benefits.

Smart grid technology means new choices

In the very near future, power generation companies will have greater options in how they run their businesses, using IoT-enabled insights to strategically stairstep their way to creating a smart grid and ensure business continuity. Meanwhile, prosumers will be able to align their values and behavior and benefit financially from sustainable choices, encouraging others to do the same.

Learn about Microsoft’s work on sustainable energy management.
Source: Azure