Microsoft’s newest sustainable datacenter region coming to Arizona in 2021

On our journey to become carbon negative by 2030, Microsoft is continually innovating and advancing the efficiency and sustainability of our cloud infrastructure, with a commitment to use 100 percent renewable energy in all of our datacenters and facilities by 2025. Today, we are taking a significant step toward that goal, revealing plans for our newest sustainable datacenter region in Arizona, which will become our West US 3 region.

Companies are not only digitally transforming their operations and products to become more sustainable—they’re also choosing partners with shared goals and values. In developing the new West US 3 region, we have water conservation and replenishment firmly in mind. Today, Microsoft announced an ambitious commitment to be water positive for our direct operations by 2030. We’re tackling our water consumption in two ways: reducing our consumption and replenishing water in the regions where we operate. Since announcing our plans last year to invest in solar energy in Arizona to build more sustainable datacenters, we have been working with the communities of El Mirage and Goodyear on water conservation, education, and sustainability projects to support local priorities and needs.

Sustainable design delivering the full Microsoft cloud for global scale, security and reliability

Our datacenter design and operations will contribute to the sustainability of our Arizona facilities. In Arizona, we’re pursuing Leadership in Energy and Environmental Design (LEED) Gold certification, which will help conserve additional resources including energy and water, generate less waste and support human health. We’re also committed to zero waste-certified operations for this new region, which means a minimum of 90 percent of waste will be diverted away from landfills through reduction, reuse and recycling efforts.

The new datacenter region will deliver enterprise-grade cloud services, all built on a foundation of trust:

Microsoft Azure, an ever-expanding set of cloud services that offers computing, networking, databases, analytics, AI and IoT services.
Microsoft 365, the world’s productivity cloud that delivers best-of-breed productivity apps integrated through cloud services and delivered as part of an open platform for business processes.
Dynamics 365 and Power Platform, the next generation of intelligent business applications that enable organizations to grow, evolve and transform to meet the needs of customers and capture new opportunities.
Compliance, security and privacy: Microsoft offers more than 90 certifications and spends $1 billion every year on cybersecurity to address security at every layer of the cloud.

To support customer needs for high availability and resiliency in their applications, the new region will also include Availability Zones, which are unique physical locations of datacenters with independent power, network, and cooling for additional tolerance to datacenter failures.

Our construction partner Nox Innovations is helping build these sustainable datacenters with Microsoft HoloLens 2, Microsoft Dynamics 365 Remote Assist, and VisualLive, a Microsoft mixed reality partner solution, to visualize building information modeling (BIM) data as holograms and overlay the 3D assets on the physical environment. VisualLive’s solution is powered by Azure Spatial Anchors, a new Azure mixed reality service that maps, persists, and restores 3D experiences in the real world. The hands-free and remote work environment enabled by HoloLens 2 and cloud services supports virtual collaboration that has led to greater efficiency, safety, and accuracy.

Delivering renewable solar energy and replenishing water in Arizona

Our commitment in Arizona includes sustainable datacenter design and operations as well as several local initiatives to support water conservation. First, Microsoft is collaborating with First Solar, an Arizona-headquartered global leader in solar energy, on its Sun Streams 2 photovoltaic (PV) solar power plant, which will offset the new campus’s day-one energy usage with solar energy once the facility is operational in 2021. Clean solar PV energy displaces the water needed in the traditional electricity generation process. First Solar’s low-carbon solar PV technology does not require water to generate electricity and is ideally suited to the growing energy and water needs of arid, water-limited regions. By displacing conventional grid electricity in Arizona, First Solar’s Sun Streams 2 project is expected to save 356 million liters of water annually.

Microsoft’s Arizona datacenters will use zero water for cooling for more than half the year, leveraging a method called adiabatic cooling, which uses outside air instead of water for cooling when temperatures are below 85 degrees Fahrenheit. When temperatures are above 85 degrees, an evaporative cooling system is used, which is similar to “swamp coolers” in residential homes. This system is highly efficient, using less electricity and a fraction of water used by other water-based cooling systems, such as cooling towers.

For the last year, we have also been investing in water conservation to have a longer-lasting impact on replenishing water in Arizona to sustain water levels in Lake Mead, with the goal of supporting the state to meet its Drought Contingency Plan Commitments. Microsoft’s investment in this project has also generated a one-to-one cash match from the Water Funder Initiative that will support the state’s efforts and further expand project impact. The project will benefit the Colorado River Indian Tribes, ultimately resulting in more water in Lake Mead and more efficient water infrastructure.

Lastly, Microsoft and Gila River Water Storage, LLC are recharging and replenishing groundwater levels in the Phoenix Active Management Area with long-term storage credits dedicated to the cities of Goodyear and El Mirage to balance a portion of Microsoft’s future water use, contributing an estimated additional 610,000 cubic meters. Microsoft is also collaborating with The Nature Conservancy to support water conservation in the Verde River Basin, installing a new pipe in the leakiest part of the Eureka Ditch to increase resilience for local farmers.

Supporting local growth, opportunities in Arizona

Through our Datacenter Community Development initiative, we are actively engaged in El Mirage, Goodyear, and across Arizona to advance community priorities in education, workforce development, and community connection. These investments total more than $800,000 in local projects, along with employee volunteer time and community partnerships, to clean up the Gila River, provide WiFi connectivity for 1,000 students across the Navajo Nation, and support the expansion of Mathematics, Engineering, Science Achievement (MESA) to serve more than 1,500 middle and high school students across Arizona. In addition, Microsoft is collaborating with two Maricopa Community Colleges, Estrella Mountain Community College in Avondale and Glendale Community College in Glendale, to develop workforce training that prepares workers for jobs in the IT sector, including work in Microsoft datacenters.

The new datacenter region and related work is expected to create over 100 permanent jobs across a variety of functions, including mechanical engineers, electrical engineers and datacenter technicians, when the facilities are fully operational, and more than 1,000 construction jobs over the initial building phases. Once the datacenters are operating, they’re expected to have an annual economic impact of approximately $20 million across communities in Arizona.
Source: Azure

Azure Container Instances – Docker integration now in Docker Desktop stable release

We’re happy to announce the new stable release of Docker Desktop includes the Azure Container Instances – Docker integration. Install or update to the latest release and get started deploying containers to Azure Container Instances (ACI) today.

Azure Docker integration

The Azure Docker integration enables you to deploy serverless containers to Azure Container Instances (ACI) using the same Docker command-line interface (CLI) commands you use for local development. Use docker run to spin up a single container or docker compose up to deploy multi-container applications defined with a Docker Compose file. You can also view logs, attach a shell, and perform other actions against the containers running in ACI, just as if those containers were running locally. In addition, you can now use Compose to attach Azure File Share volume mounts to your containers in either a local or an ACI context.
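As an illustration of the Compose-based workflow described above, the sketch below shows what a Compose file with an Azure File Share volume mount might look like. All names (`mydata`, `myfileshare`, `mystorageaccount`) are hypothetical placeholders, and deploying it assumes you have created and selected an ACI Docker context (for example, `docker context create aci myacicontext`) before running `docker compose up`:

```yaml
# docker-compose.yml -- deployable to a local context or an ACI context.
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # In an ACI context, this mounts an Azure File Share into the container.
      - mydata:/usr/share/nginx/html

volumes:
  mydata:
    driver: azure_file            # volume driver used by the ACI integration
    driver_opts:
      share_name: myfileshare                  # placeholder share name
      storage_account_name: mystorageaccount   # placeholder storage account
```

When the same file is run against a local Docker context, the volume falls back to ordinary local behavior, which is what makes the single-file, two-environment workflow possible.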

With Azure Container Instances (ACI), you can run your dev/test or production containers in the cloud without needing to set up any infrastructure. ACI caters to developers who need to quickly run containers in the cloud with minimal operational overhead, as there is no infrastructure or platform to manage. ACI integrates with other Azure services for your production workload needs, such as Azure File Share volumes to persist your data and Log Analytics for your monitoring needs. ACI has a pay-as-you-go pricing model, which means you will only be billed for CPU and memory while your containers are running, and not one second more.

Docker extension for Visual Studio Code

In addition to Docker releasing an update to Docker Desktop, Microsoft has released an update to our Docker extension for Visual Studio Code. With the new 1.6 release of the extension, you can now right-click on an image from Azure Container Registry (ACR) or Docker Hub and deploy it directly to Azure Container Instances (ACI).

As you can see in the following animation, the extension first prompts you to select an existing ACI context or create a new one. This context is then set as the active context, and the tools use the docker run command to spin up a container in ACI. Prior to running the container, the extension inspects the image to determine whether any ports should be opened, so your running container can be accessed on the expected port(s).

This new feature is in addition to other Azure Container Instances (ACI) features we have added in past Docker extension releases. This latest release provides a complete toolset for creating, deploying, and diagnosing containers in ACI from within Visual Studio Code.

Try it today

If you haven’t already, be sure to download the Visual Studio Code Docker extension and the stable release of Docker Desktop and get started deploying containers to Azure Container Instances (ACI) using the Docker CLI or Visual Studio Code. A great way of getting started is to use the Azure Container Instances (ACI) quickstart. We encourage you to leave your comments below or submit an issue on the GitHub repo.
Source: Azure

Better outcomes with AI: Frost & Sullivan names Microsoft the leading AI platform for healthcare IT

In early 2020, Frost & Sullivan recognized Microsoft as the “undisputed leader” in global Artificial Intelligence (AI) platforms for the Healthcare IT (HCIT) sector on the Frost Radar™. In a field of more than 200 global industry participants, Frost & Sullivan independently plotted the top 20 companies across various parameters indicative of growth and innovation.

According to Frost & Sullivan, the global AI HCIT market is on a rapid growth trajectory, with sales of AI-enabled HCIT products expected to generate more than $34.83 billion globally by 2025. Government agencies will contribute almost 50.7 percent of the revenue (including public payers), followed by hospital providers (36.3 percent) and physician practices (13 percent). Clinical AI solutions will drive 40 percent of the market revenue, with financial AI solutions contributing the same, and the remaining 20 percent coming from sales of operational AI solutions. Globally, Microsoft earned the top spot because of its industry-leading effort to incorporate next-generation AI infrastructure to drive precision medicine workflows, aid population health analytics, propel evidence-based clinical research, and expedite drug and treatment discovery.

Figure 1: The Frost Radar, "Global AI for Healthcare IT Market", 2020

We’re seeing providers deploy chatbots in their virtual portals to extend 24/7, personalized care to patients, helping them triage a larger volume of inquiries and even extend care services to previously inaccessible remote areas. With the power of predictive analytics, care teams can predict patient volumes, deliver preventative care with timely escalations, and prevent unnecessary readmissions. AI has provided tools for scientists at the forefront of precision medicine, accelerating drug discovery, while aiding public health officials with modeling and predicting the progression of disease. In BioPharma and MedTech, AI is being used to provide real-time insights on equipment use for manufacturing R&D departments, to dispatch field technicians to service costly equipment via predictive maintenance, and to enable healthcare customers to track inventory and medication across supply chains with greater transparency and agility.

The report cites numerous recent innovations from Microsoft, including the Microsoft Cloud for Healthcare offering, announced in 2020. The Microsoft Cloud for Healthcare brings together trusted and integrated capabilities for customers and partners that enrich patient engagement and connect health teams to help improve collaboration, decision-making, and operational efficiencies. It makes it faster and easier to provide more efficient care and help ensure end-to-end security, compliance, and accessibility of health data.

At Microsoft, we are focused on trust and on empowering our healthcare customers—never monetizing customer or patient data. The Microsoft Cloud for Healthcare also offers an infrastructure built on industry leading scale, with over $15 billion invested in cloud infrastructure and over 1 million physical servers across over 60 global regions. Furthermore, Microsoft has the largest partner ecosystem in the market, with global partners equipped to work with health organizations of all sizes.

Healthcare AI at Microsoft

Microsoft’s growing portfolio of healthcare AI offerings also includes specific services such as:

The Microsoft Health Bot enables health organizations to build and deploy AI-powered, compliant conversational healthcare experiences. With built-in medical intelligence, natural language capabilities, and extensibility tools, the Health Bot enables health organizations to build personalized and trusted conversational experiences across digital health portals. Customers such as Premera Blue Cross have leveraged the Microsoft Health Bot to create their own chatbot, Premera Scout, to help customers quickly obtain information on claims, benefits, and other services offered by Premera across their digital portals. In another instance, Walgreens Boots Alliance (WBA) incorporated the Microsoft Healthcare Bot to add a COVID-19 risk assessment capability to their website, helping customers quickly find answers to common questions.
Text Analytics for Health is a feature of Azure Cognitive Services that helps health organizations process and extract insights from unstructured medical data (such as doctors’ notes, medical publications, electronic health records, and clinical trial protocols). This enables researchers, analysts, and medical professionals to unlock scenarios based on entities in health data, such as matching patients to clinical trials and extracting insights from large bodies of clinical literature, as was the case when University College London (UCL) leveraged Text Analytics for Health to build a system that identifies relevant research for reviews as and when it is published.
Azure Cognitive Services offers easy-to-deploy AI tools for speech recognition, computer vision, and language understanding. Nuance, a leading provider of AI-powered clinical documentation and decision-making support for physicians, leveraged the Azure Cognitive Services platform to develop Dragon Medical One, one of the leading ambient clinical intelligence services. The platform allows doctors to enter and search for relevant patient information in electronic health records using dictation, reducing time spent on administrative tasks and redirecting more time toward interacting with the patient. The platform can also mine a patient’s medical history together with newly reported symptoms at an appointment to suggest potential diagnoses for the doctor to consider.

Partners empowering healthcare AI

We’re also proud to see many of our healthcare partners recognized in the report, with whom we have partnered to design and build our portfolio of AI services and who, in turn, leverage our platforms to infuse AI in their solutions. These include, but are not limited to:

Nuance is partnering with Microsoft to deliver ambient clinical intelligence (ACI), paving the way for the exam room of the future. Take a look at our partner spotlight, Microsoft and Nuance partner to deliver ambient clinical intelligence.
GE Healthcare is developing advanced solutions for secure imaging and data exchange built on Azure.
Optum, the Health Services platform of UnitedHealth Group, joined forces with Microsoft to launch ProtectWell, a return-to-workplace protocol that enables employers to bring employees back to work in a safe environment, leveraging clinical and data analytics capabilities as well as the Microsoft Healthcare Bot service for AI-assisted COVID-19 triaging. Take a look at our partner spotlight, UnitedHealth Group and Microsoft join forces to launch ProtectWell.
Allscripts extended their long-term strategic alliance to harness the power of Microsoft’s platform to develop Sunrise, an integrated EHR that provides a clinician-friendly, evidence-based platform with integrated analytics for delivering better health outcomes in hospitals, connecting all aspects of care—including acute, surgical, pharmacy, and laboratory services—to revenue and patient administration systems.
Philips is empowering providers through image-guided, minimally invasive therapies, bringing live imaging and other sources of data into 3D holographic environments controlled by physicians. Take a look at our partner spotlight, Microsoft HoloLens 2: Partner Spotlight with Philips.

We’re honored to have been recognized as a leader in the healthcare space and are proud to work with a growing ecosystem of partners and customers that are building the next generation of healthcare solutions. Together, we’re extending the reach of healthcare services, unlocking new clinical insights, and empowering care teams to drive better outcomes for the communities they serve. Innovation is a journey without end, and we’re committed to building the trusted tools and platforms to help healthcare organizations be future-ready and invent with purpose.

Next steps with Microsoft AI

To learn more about Microsoft AI offerings, explore the following resources:

Microsoft AI for Health page.
Learn more about the Azure AI platform.
Explore even more Azure for Health offerings, from IoT to Mixed Reality.
Read the latest updates on the Microsoft Healthcare blog.
Learn more about the Microsoft Cloud for Healthcare.

Source: Azure

Preparing for what’s next: Building landing zones for successful cloud migrations

As businesses look to the cloud to ensure business resiliency and to spur innovation, we continue to see customer migrations to Azure accelerate. Increasingly, we’ve heard from business leaders preparing to migrate that they want to learn from our best practices and get help thinking through migration, so we started a blog series to share those practices more broadly. In our kick-off blog for this series, we shared that landing zones are a key component of anticipating and mitigating complexities in your migration. In this blog, we will cover what landing zones are and the importance of getting cloud destinations ready in advance of the physical migration, as that generates significant benefit over the long term.

IT and business leaders often ask us how they can both enable their teams to innovate with agility in Azure and remain compliant within organizational governance, security, and efficiency guardrails. Getting this balance right is critical to cloud migration success. One of the most important questions in getting it right is how to set up the destination Azure environments we call landing zones.

At Microsoft, we believe that cloud agility isn’t at odds with setting up the right foundation for migration initiatives—in fact, taking time to do the latter sets organizations up for a faster path to success. Our customers and partners have been using Azure landing zones—a set of architecture guidelines, reference implementations, and code samples based on proven practices—to prepare cloud environments.

“With everybody’s limited budget, especially during the pandemic, we had support from both a financial perspective and with FastTrack for Azure backing. I very quickly realized that we could deliver in a quicker timeframe than initially planned. The landing zone was a great initiative because it focused everybody in terms of: what are the deliverables? What are we looking to achieve? What technologies are we going to use to do that? Microsoft linked in seamlessly with SoftwareOne, and as a customer of both of these companies, it was reassuring for us.” – Gavin Scott, Head of IT, Actavo

What are the key decisions to be made in setting up your cloud destination?

At the onset of migration initiatives, we see customers and partners focus on the key considerations below to define their ideal operating environment in Azure. These considerations are abstracted as operating models, with “central operations” and “enterprise operations” as two options at different ends of the spectrum.

Old roles versus new opportunities: Migrating to the cloud can modernize many workloads as well as how IT operates. Azure can reduce the volume of repetitive maintenance tasks, unlocking opportunities to apply IT staff expertise in new ways. At the same time, Azure does offer options to preserve practices, controls, and structures that are proven to work. A key decision for leaders is where to land on this spectrum.
Change management versus democratized actions: With greater access to self-service deployment and flexibility for decisions, change management and change control can look different in the cloud. While workload teams typically prefer the agility to quickly make changes to workloads and environments, cloud centers of excellence seek to ensure changes are safe, compliant, and operationally efficient. The key decision for leaders here is how much of cloud governance requirements should be automated.
Standardized versus specialized operations: Creating multiple and connected levels of operational controls in Azure to accommodate specialized needs of various workloads is absolutely possible. Central IT, for instance, can ensure basic operational standards for all workloads, while empowering workload teams to set additional guardrails. The key question for leaders is which day-to-day operations will be performed by central IT teams and which by workload teams.
Architecture as-is versus re-imagined: The first inclination for most teams might be to simply replicate on-premises designs and architectures “as-is” in Azure. When a low-complexity, narrowly scoped estate is moving to the cloud, that might be the optimal approach. In time, as migration scopes grow—spanning more applications, databases, and infrastructure components—achieving higher efficiency in Azure becomes even more attractive. A key decision for leaders is which path to take during iterative migration initiatives.

Azure landing zones appropriately guide customers and partners in setting up the desired operating model in Azure. Landing zones ensure that roles, change management, governance, and operations are all considered at the beginning of the journey to achieve the desired balance of agility and governance.

Why are Azure landing zones valuable in implementing your design decisions in the cloud?

Examples from two of our customers on each end of the operating model spectrum illustrate how landing zones guide destination decisions, as well as the implementation path.

The first example is a US-based large manufacturing and distribution company, with operations spanning four continents. This customer aimed to establish “central operations” while retiring a series of data centers that would have otherwise required expensive hardware upgrades. One of the complicating (though not uncommon) factors was each regional subsidiary had distinct governance, security, and operations requirements.

To accelerate this complex migration, with the help of our partners, we started by migrating a single subsidiary, enabling the customer to learn and iterate towards the desired centralized operating model. During the first four weeks, the customer migrated hundreds of low-risk VMs to an Azure landing zone. Within eight weeks, the customer established the final operating model, migrating mission-critical, and sensitive data workloads for their first subsidiary. Other subsidiaries then built on this initial operating model to meet their specific needs. The customer now uses Azure Blueprints and Azure Policy to deploy self-service landing zones to comply with global and local standards. Azure landing zones enabled the customer to successfully mitigate complexity and mold the cloud platform architecture to fit the centralized operating model they were looking for.

The second example comes from one of our customers in Germany preparing to move thousands of servers to Azure. Most of those servers hosted low-complexity, steady-state workloads governed by central operations on-premises. As part of the migration effort, the customer needed to transform and modernize IT operations, including adherence to high security and compliance requirements that were to take effect. In eight weeks, this customer was able to start an Azure environment in alignment with the transformation vision while meeting the new security and compliance requirements. The enterprise-scale flavor of Azure landing zones provided implementation options needed for the destination to meet stringent requirements and enabled the enterprise transformation vision.

For an overview of landing zones and the considerations that go into building your landing zone in Azure, view this Azure landing zones video.

How are Azure landing zones constructed?

To construct Azure landing zones, customers and partners first clarify how they prefer to deploy their landing zones. Next up are decisions on “design area” configuration options. Let’s take a look at a couple of the “design areas” to demonstrate how they contribute to the construction of landing zones.

Deployment options: How to deploy Azure landing zones is an important early design decision. Each implementation option provides slightly different methods to match the skill level of your team and the operating model. User-interface-based options, scripting-based methods, and deployments directly from GitHub are available.
Identity: Best-practice guidance and enabling capabilities such as Azure Active Directory, Azure role-based access control (RBAC), and Azure Policy help establish and preserve the right levels of identity and access across the cloud platform. The best practices, decision guides, and references in Azure landing zones help design the foundation with a secure and compliant approach.
Resource organization: Sound governance starts with standards for organizing resources. Naming and tagging standards, subscription design (segmentation of resources), and a management group hierarchy (consistent organization of segments) are needed to reflect operating model preferences. Landing zones provide the guidance to get started.
Business continuity and disaster recovery (BCDR): Reliability and rapid recovery are essential for business continuity. Design areas within landing zones guide customers to set up destination environments with high degrees of protection and faster recovery options.
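To make the governance guardrails in the design areas above concrete, here is a minimal sketch of an Azure Policy definition that denies resource creation when a tag is missing. The `environment` tag name is a hypothetical example; a real landing zone would define its own tagging standard:

```json
{
  "properties": {
    "displayName": "Require an 'environment' tag on resources",
    "mode": "Indexed",
    "policyRule": {
      "if": {
        "field": "tags['environment']",
        "exists": "false"
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```

Assigning a definition like this at a management group scope is one way the "standardized versus specialized" decision gets implemented: central IT sets the baseline policy once, and every subscription segmented under that management group inherits it.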

“The landing zone that serves as a foundation for customers’ identity, security, networking, operations, and governance needs tends to be a linchpin of success for future migrations. Claranet prides itself on getting this right, in addition to helping build an excellent post-migration operational model. Our collaboration with the Azure Migration Program (AMP) team was tremendously helpful to our customers, bringing the best of what we have together with Microsoft’s recommendations and focusing on the landing zone to better prepare for their growing cloud portfolio.”—Mark Turner, Cloud Business Unit Director, Claranet

Getting started with Azure landing zones

To guide our customers and partners in getting cloud destination environments ready with Azure landing zones, the Ready section of the Cloud Adoption Framework (CAF) provides step-by-step, prescriptive guidance. We recommend that customers start with the following three steps within CAF to educate and activate their migration crews:

Begin by determining which cloud operating model reflects the right balance for your agility and governance needs.
Continue onto "design areas" for Azure landing zones for an overview of the configuration options available to achieve your operating model.
Select an Azure landing zone implementation option to match your selected operating model, migration scope, and velocity. Once you’ve identified the best option, deployment instructions and supporting scripts can automatically deploy reference implementations of each Azure landing zone.

Customers truly realize the value of migrations once they have started operating from the cloud. Cloud destinations that enable innovation and agility while ensuring governance and security are key to accelerating that value realization. Azure landing zones are ready to guide customers and partners in setting up cloud destinations and, more importantly, in setting up post-migration success.
Source: Azure

NFS 4.1 support for Azure Files is now in preview

Azure Files is a distributed cloud file system, generally available since 2015, that serves the SMB and REST protocols. Customers love how Azure Files enables them to easily lift and shift their legacy workloads to the cloud without any modifications or changes in technology. SMB works great on both Windows and UNIX operating systems for most use cases. However, because some applications are written for POSIX-compliant file systems, our customers wanted to have the same great experience on a fully POSIX-compatible NFS file system. Today, it’s our pleasure to announce Azure Files support for the NFS v4.1 protocol!

NFS 4.1 support for Azure Files will provide our users with a fully managed NFS file system as a service. This offer is built on a truly distributed resilient storage platform that serves Azure Blobs, Disks, and Queues, to name just a few components of Azure Storage. It is by nature highly available and highly durable. Azure Files also supports full file system access semantics such as strong consistency and advisory byte range locking, and can efficiently serve frequent in-place updates to your data.
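The advisory byte-range locking mentioned above is exercised through the standard POSIX fcntl API, which is what NFS 4.1 exposes to applications. Below is a minimal sketch using a local temp file as a stand-in for a file on an NFS mount (the path is a scratch placeholder; the locking API is identical on any POSIX filesystem, and on NFS the two processes could be on different client machines):

```python
import fcntl
import os
import tempfile

# Scratch file standing in for a file on an NFS 4.1 mount.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 1024)

holder = open(path, "r+b")
# Take an exclusive advisory lock on bytes 0..511.
fcntl.lockf(holder, fcntl.LOCK_EX, 512, 0)

pid = os.fork()
if pid == 0:
    # Child process plays the role of a second client.
    other = open(path, "r+b")
    try:
        # A non-blocking attempt on the locked range must be refused...
        fcntl.lockf(other, fcntl.LOCK_EX | fcntl.LOCK_NB, 512, 0)
        os._exit(1)  # unexpectedly acquired the conflicting lock
    except OSError:
        # ...while a lock on a non-overlapping range (bytes 512..1023) succeeds.
        fcntl.lockf(other, fcntl.LOCK_EX | fcntl.LOCK_NB, 512, 512)
        os._exit(0)

_, status = os.waitpid(pid, 0)
conflict_detected = os.WEXITSTATUS(status) == 0
print("overlapping exclusive lock refused:", conflict_detected)
```

Because the locks are advisory, cooperating processes must all use the same locking calls; the filesystem does not block plain reads and writes from a process that ignores the locks.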

Common use cases

Azure Files NFS v4.1 has a broad range of use cases. Most applications written for Linux file systems can run on NFS. Here is a subset of customer use cases we have seen during the limited preview:

Linux application storage:

Shared storage for applications like SAP, storage for images or videos, Internet of Things (IoT) signals, etc. In this context, one of our preview customers said:

“T-Systems is one of the leading SAP outsourcers. We were looking for a highly performant, highly available, zone-redundant Azure-native solution to provide NFS file systems for our SAP landscape deployments. We were thrilled to see Azure Files exceeding our performance expectations. We also see a huge cost saving and reduced complexity compared to other available cloud solutions.” – Lars Micheel, Head of SAP Solution Delivery and CTO PU SAP.

End user storage:

Shared file storage for end user home directories and home directories for applications like Jupyter Notebooks. Also, some customers used it for lift-and-shift of datacenter NAS data to the cloud in order to reduce the on-premises footprint and expand to more geographic regions with agility. In this context, one of our preview customers said:

“Cloudera is well known for our machine learning capabilities, an industry analyst firm called us a “machine learning – machine” when they named us a leader in a recent report. We needed a high performance NFS file system to match our ML capabilities. Azure Files met all the requirements that Cloudera Machine Learning has for a real filesystem and outperformed all the alternatives. Because it is integrated with the Azure Storage stack, my expectation is that it’s going to be cheaper and far easier to manage than the alternatives as well.”  –  Sean Mackrory, Software Engineer, Cloudera

Container-based applications:

Persistent storage for Docker and Kubernetes environments. We are also launching the preview of NFS support in the CSI driver for Azure Files today.

Databases:

Hosting Oracle databases and taking their backups using Recovery Manager (RMAN). The Azure Files premium tier was purpose-built for database workloads, with first-party services taking dependencies on it.

Management

You get the same familiar share management experience on Azure Files through Azure portal, PowerShell, and CLI:

Create NFS file share with a few clicks in Azure portal

Security

Azure Files uses AES-256 for encryption at rest. You also have the option to encrypt all of your data using keys that you own, managed by Azure Key Vault. Your share can be accessed from within a region, from another region, or from on-premises by configuring secure virtual networks to allow NFS traffic privately between your volume and its destination. Data coming to NFS shares has to originate from a trusted VNet; all access to the NFS share is denied by default unless access is explicitly granted by configuring the right network security rules.

Performance

The NFS protocol is available on Azure Files premium tier. Your performance will scale linearly with the provisioned capacity. You can get up to 100K IOPS and 80 Gibps throughput on a single 100 TiB volume.
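As a rough planning aid, the linear scaling can be sketched by anchoring at the single data point quoted above (100 TiB gives up to 100K IOPS and 80 Gibps). The interpolation below is an assumption for illustration only; the published premium tier formula may include a baseline and burst allowance, so consult the Azure Files documentation for actual limits:

```python
MAX_GIB = 100 * 1024   # a 100 TiB volume, expressed in GiB
MAX_IOPS = 100_000     # limits quoted for a full 100 TiB volume
MAX_GIBPS = 80

def estimated_limits(provisioned_gib):
    # Scale linearly with provisioned capacity, anchored at the single
    # 100 TiB data point above; capped at the volume maximum.
    frac = min(provisioned_gib, MAX_GIB) / MAX_GIB
    return round(MAX_IOPS * frac), round(MAX_GIBPS * frac, 2)
```

For example, under this assumption a 50 TiB provisioned volume would land at roughly half the quoted maximums.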

Backup

Backing up your data on NFS shares can be orchestrated using familiar tooling like rsync or products from one of our third-party backup partners. Multiple backup partners, including Commvault, Veeam, and Veritas, were part of our initial preview and have extended their solutions to work with both SMB 3.0 and NFS 4.1 for Azure Files.

Migration

For data migration, you can use standard tools like scp or rsync. Because file storage can be accessed from multiple compute instances concurrently, you can improve copying speeds with parallel uploads. If you want to migrate data from outside of a region, use VNet peering, a VPN, or ExpressRoute to connect to your file system from another Azure region or your on-premises datacenter.
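The parallel-upload idea can be sketched with a simple thread pool that copies many files at once. In a real migration the destination would be the local mount point of the NFS share; an ordinary local directory stands in for it here:

```python
import concurrent.futures
import pathlib
import shutil

def parallel_copy(src_dir, dst_dir, workers=8):
    """Copy every file under src_dir into dst_dir using a thread pool,
    preserving the relative directory layout."""
    src, dst = pathlib.Path(src_dir), pathlib.Path(dst_dir)
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p):
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)   # copy data and metadata
        return target

    # Overlapping many copies hides per-file network latency.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(copy_one, files))
```

The same effect can be achieved on the command line by running several rsync invocations over disjoint subdirectories.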

Pricing

This offer is charged at premium tier pricing. You can provision shares as small as 100 GiB and increase your capacity in 1 GiB increments. See premium tier pricing on the Azure Files pricing page.
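The provisioning rules above (100 GiB minimum, whole 1 GiB increments) reduce to a small clamp-and-round, sketched here as a helper for capacity planning:

```python
import math

def provisioned_size_gib(requested_gib):
    # Apply the quoted provisioning rules: shares start at a 100 GiB
    # minimum and grow in whole 1 GiB increments (round up).
    return max(100, math.ceil(requested_gib))
```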

Get started

NFS 4.1 support for Azure Files is available in a select set of regions today, and we will continually add more regions to this list in the coming weeks. Get started today by following these simple step-by-step instructions!

Next steps

We would love to hear your feedback as we continue to heavily invest in adding more features and improving the performance of the NFS v4.1 offer. For direct feedback and inquiries, please email us at azurefilesnfs@microsoft.com.
Source: Azure

Azure NetApp Files cross region replication and new enhancements in preview

As businesses continue to adapt to the realities of the current environment, operational resilience has never been more important. As a result, a growing number of customers have accelerated a move to the cloud, using Microsoft Azure NetApp Files to power critical pieces of their IT infrastructure, like Virtual Desktop Infrastructure, SAP applications, and mission-critical databases.

Today, we release the preview of Azure NetApp Files cross region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way, protecting your data from unforeseeable regional failures. We’re also introducing important new enhancements to Azure NetApp Files to provide you with more data security, operational agility, and cost-saving flexibility.

Azure NetApp Files cross region replication

Azure NetApp Files cross region replication leverages NetApp SnapMirror® technology; therefore, only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across regions, saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Recovery Point Objective (RPO).
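The changed-block idea can be sketched by comparing per-block hashes of two volume images: only blocks whose hashes differ need to cross the wire. This is an illustration of the concept only, not NetApp's actual SnapMirror protocol (which tracks changed blocks via snapshots rather than rescanning):

```python
import hashlib

BLOCK = 4096  # illustrative block size

def block_hashes(data):
    # Hash each fixed-size block of the image.
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old, new):
    # Indices of blocks that differ (or are new) between two images;
    # only these would need to be shipped to the secondary region.
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]
```

Shipping only the changed blocks (plus compression) is what keeps both the transfer cost and the replication window small.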

Over the next few months of Azure NetApp Files cross region replication preview you can expect:

Multiple replication frequency options: you can replicate an Azure NetApp Files NFS or SMB volume across regions with a replication frequency of every 10 minutes, every hour, or once a day.
Read from secondary: you can read from the secondary volume during active replication.
Failover on-demand: you can failover to the secondary volume at a time of your choice. After a failover, you can also resynchronize the primary volume from the secondary volume at a time of your choice.
Monitoring and alerting: you can monitor the health of volume replication and the health of the secondary volume through Azure NetApp Files metrics and receive alerts through Azure Monitor.
Automation: you can automate the configuration and management of Azure NetApp Files volume replication through standard Azure REST APIs, SDKs, command-line tools, and ARM templates.

Supported region pairs

Azure NetApp Files cross region replication is available in popular regions across the US, Canada, EMEA, and Asia at the start of public preview. The Azure NetApp Files documentation will keep you up to date with the latest supported region pairs.

Getting started

Join the preview waitlist now. Once your subscription is enabled for the preview, you can find the feature from the portal (Figure 1) and within a few clicks, you'll be able to configure your first Azure NetApp Files cross region replication (Figure 2).

Figure 1: You can add cross region replication by selecting "Add data replication" from Azure NetApp Files volume management view.

Figure 2: Cross region replication is successfully configured for an Azure NetApp Files volume.

Learn more about Azure NetApp Files cross region replication through the Azure NetApp Files documentation.

Learn more about our pricing

During preview, Azure NetApp Files cross region replication will be offered at full price. Pricing information will be available on the Azure NetApp Files pricing page. You can learn more about the Azure NetApp Files cross region replication cost model through the Azure NetApp Files documentation.

Volume snapshot policy

Azure NetApp Files allows you to create point-in-time snapshots of your volumes. Starting now, you can create a snapshot policy to have Azure NetApp Files automatically create volume snapshots at a frequency of your choice. You can schedule the snapshots to be taken in hourly, daily, weekly or monthly cycles. You can also specify the maximum number of snapshots to keep as part of the snapshot policy. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is currently in preview. You can register for the feature preview by following the volume snapshot policy documentation.
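The "maximum number of snapshots to keep" rule described above amounts to retaining the newest N snapshots and deleting the rest. A minimal sketch of that retention logic (an illustration of the policy's behavior, not the service's implementation):

```python
def prune_snapshots(timestamps, max_keep):
    """Split snapshot timestamps into those a policy with
    max_keep snapshots would retain (the newest) and those it
    would delete. Works on ISO-format strings or datetimes."""
    ordered = sorted(timestamps, reverse=True)  # newest first
    return ordered[:max_keep], ordered[max_keep:]
```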

Dynamic volume tier change

Cloud promises flexibility in IT spending. You can now change the service level of an existing Azure NetApp Files volume by moving the volume to another capacity pool that uses the service level you want for the volume. This in-place service-level change for the volume does not require that you migrate data. It also does not impact the data plane access to the volume. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is currently in public preview. You can register for the feature preview by following the dynamic volume tier change documentation.

Simultaneous dual-protocol (NFS v3 and SMB) access

You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFS v3 and SMB) access with support for LDAP user mapping. This feature enables use cases where you may have a Linux-based workload that generates and stores data in an Azure NetApp Files volume. At the same time, your staff needs to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access feature removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost, and operational time. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the simultaneous dual-protocol access documentation.

NFS v4.1 Kerberos encryption in transit

Azure NetApp Files now supports NFS client encryption in Kerberos modes (krb5, krb5i, and krb5p) with AES-256 encryption, providing you with additional data security. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the NFS v4.1 Kerberos encryption documentation.

Azure Government regions

Lastly, we’re pleased to announce the general availability of Azure NetApp Files in Azure Government regions, starting with US Gov Virginia, and soon in US Gov Texas, and US Gov Arizona. Take a look at the latest Azure NetApp Files regional availability and region roadmap.

Get it, use it, and tell us about it

As with other previews, the public preview features should not be used for production workloads until they reach general availability.

We look forward to hearing your feedback on these new capabilities. You can email us feedback at ANFFeedback@microsoft.com. As always, we love to hear all of your ideas and suggestions about Azure NetApp Files, which you can post at Azure NetApp Files feedback forum.
Source: Azure

Build a scalable security practice with Azure Lighthouse and Azure Sentinel

The Microsoft Azure Lighthouse product group is excited to launch a blog series covering areas in Azure Lighthouse where we are investing to make our service provider partners and enterprise customers successful with Azure. Our first blog in this series covers a top area of consideration for companies worldwide: security, with a focus on how Azure Lighthouse can be used alongside Microsoft’s Azure Sentinel service to build an efficient and scalable security practice.

Today, organizations of all sizes are looking to reduce cost and complexity and gain efficiencies in their security operations. Because cloud security solutions help meet these requirements by providing flexibility, simplicity, pay-for-use pricing, automatic scalability, and protection across heterogeneous environments, more and more companies are embracing them.

While achieving efficiencies is the need of the hour, organizations also face a shortage of security experts in the market. Here is where there is tremendous potential for service providers to fill the gap by building and offering security services on top of cloud security solutions. Before diving deeper, let me start with a brief introduction to Azure Lighthouse and Azure Sentinel.

Azure Lighthouse helps service providers and large enterprises manage the environments of multiple customers or individual subsidiaries at scale, from within a single centralized control plane. Since its launch at Inspire, Azure Lighthouse has seen wide adoption from both service providers and enterprises, with millions of Azure resources being managed at scale across heterogeneous environments.

Azure Sentinel is a cloud-native security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solution from Microsoft. It enables collection of security data at scale across your entire enterprise, including Azure services, Microsoft 365 services, and hybrid environments such as other clouds, firewalls, and partner security tools. Azure Sentinel also uses built-in AI and advanced querying capabilities to detect, investigate, respond to, and mitigate threats efficiently.

We will now look at how you can use both these services together to architect a scalable security practice.

To build a security practice that scales across multiple customer environments for a service provider, or that helps an organization centrally monitor and manage security operations across its individual subsidiaries, we recommend a distributed deployment and centralized management model. In this model, you deploy Azure Sentinel workspaces within the tenant that belongs to each customer or subsidiary (data stays local to that environment) and manage them centrally from the service provider’s tenant or from a central security operations center (SOC) tenant within the organization.

You can then leverage Azure Lighthouse’s capabilities to manage and perform security operations from the central managing tenant on the Azure Sentinel workspaces located in the managed tenant. To learn more about this model and its applicability for your scenario, read Extend Azure Sentinel across workspaces and tenants.

To deploy and configure these workspaces at scale, both Azure Sentinel and Azure Lighthouse offer powerful automation capabilities that you can use effectively with CI/CD pipelines across tenants. Here is what ITC Secure, a managed security services provider and Microsoft partner based in London, has to say:

“With Azure Lighthouse’s ability to get delegated access to a customer’s environment and the powerful automation capabilities of both Azure Lighthouse and Azure Sentinel, we are now able to leverage a common set of automations to deploy Azure Sentinel. In real terms, this enables us to configure Azure Sentinel with existing content like queries and analytical rules. This has resulted in significant reductions in customer onboarding times, reducing delivery times from months to a few weeks and even a few hours in certain scenarios. This has enabled us to scale our onboarding processes and practices significantly and delivers faster ROI for our customers. Azure Lighthouse has also provided greater transparency and visibility for our customers, where they can clearly see work delivered. We run queries and apply workbooks across our customer’s subscriptions, deploy playbooks in our customer’s tenants, all from a central pane of glass, further adding to the overall speed of delivery of our service.” —Arno Robbertse, Chief Executive, ITC Secure

Threat hunting and investigation through cross-tenant queries

Running queries to search for threats, and as a next step investigating them, is an essential part of a SOC analyst’s job. With Azure Lighthouse, you can deploy Log Analytics queries or hunting queries in the central managing tenant (preserving IP for a service provider) and run those queries across the managed tenants using the union operator and workspace expression.
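The union operator and `workspace()` expression combine like so: `union workspace("A").Table, workspace("B").Table`. A small helper that assembles such a cross-workspace Kusto query string from a list of delegated workspace IDs (the workspace names below are placeholders):

```python
def cross_workspace_query(table, workspace_ids):
    # Reference the same table in each delegated workspace via the
    # workspace() expression, then stitch the results with union.
    refs = [f'workspace("{w}").{table}' for w in workspace_ids]
    return "union " + ", ".join(refs)
```

A central managing tenant could run the resulting query once and get hits from every customer workspace in a single result set.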

Visualizing and monitoring data across customer environments

Another technology that works well across tenants is Azure Monitor Workbooks, Azure Sentinel’s dashboarding technology. You can choose to deploy workbooks in the managing tenant or managed tenant per your requirements. For workbooks deployed in the managing tenant, you can add a multi-workspace selector within a workbook (in case it doesn’t have one already built into it), to visualize and monitor data and essentially get data insights across multiple workspaces and across multiple customers/subsidiaries if needed.

Automated responses through playbooks

Security Playbooks can be used for automatic mitigation when an alert is triggered. The playbooks can be deployed either in the managing tenant or the individual managed tenant, with the response procedures configured based on which tenant's users will need to take action in response to a security threat.

Xcellent, a managed services provider and Microsoft partner based in the Netherlands, has benefited from access to a central security solution powered by Azure Sentinel and Azure Lighthouse to monitor the different Microsoft 365 components across customer tenants. Response management and querying against their customer base has also become more efficient, dropping Xcellent’s standard response time to less than 45 minutes and allowing the team to create a more proactive security solution for their customers.

Cross-tenant incident management

Multiple workspace incident view facilitates centralized incident monitoring and management across multiple Azure Sentinel workspaces and across Azure Active Directory (Azure AD) tenants using Azure Lighthouse. This centralized incident view lets you manage incidents directly or drill down transparently to the incident details in the context of the originating workspace.

Resources to get you started

Azure Lighthouse extends Azure Sentinel’s powerful security capabilities to help you centrally monitor and manage security operations from a single interface and efficiently scale your security operations across multiple Azure tenants and customers.

The following resources will help you get started:

Take a look at our detailed documentation and guidance for using Azure Lighthouse with Azure Sentinel.
For latest resources and updates on Azure Sentinel, join us at the Azure Sentinel Tech Community.
You can provide feedback or request new features for Azure Lighthouse in our feedback forum.
Check out Azure PartnerZone for latest content, news, and resources for partners.

Source: Azure

Prepare and certify your devices for IoT Plug and Play

Developing solutions with Azure IoT has never been faster, easier, or more secure. However, the tight coupling and integration between IoT device software and the software that matches it in the cloud can make it challenging to add different devices without spending hours writing device code.

IoT Plug and Play can solve this by enabling a seamless device-to-cloud integration experience. IoT Plug and Play from Microsoft is an open approach using Digital Twin Definition Language (based on JavaScript Object Notation for Linked Data (JSON-LD)) that allows IoT devices to declare their capabilities to cloud solutions. It enables hardware partners to build devices that can easily integrate with cloud solutions based on Azure IoT Central, as well as third-party solutions built on top of Azure IoT Hub or Azure Digital Twins.
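A DTDL capability model is JSON-LD, so it can be sketched as plain data. The interface below is a minimal, hypothetical example: the `dtmi` identifier and capability names are invented for illustration, but the `@context`, `@type`, and `contents` structure follows DTDL version 2:

```python
import json

# A minimal, hypothetical DTDL v2 interface for a thermostat-style
# device declaring one telemetry stream and one writable property.
device_model = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:example:Thermostat;1",
    "@type": "Interface",
    "displayName": "Example Thermostat",
    "contents": [
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
        {"@type": "Property", "name": "targetTemperature",
         "schema": "double", "writable": True},
    ],
}

print(json.dumps(device_model, indent=2))
```

When a device presents a model like this at connection time, an IoT Plug and Play-enabled solution can surface its telemetry and properties without any device-specific code.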

As such, we are pleased to announce that the IoT Plug and Play device certification program is now available for companies to certify and drive awareness of their devices tailored for solutions, while also reducing time to market. In this blog post, we will explore the common ecosystem challenges and business motivations for using IoT Plug and Play, as well as why companies are choosing to pursue IoT Plug and Play certification and the requirements and process involved.

Addressing ecosystem challenges and business needs with IoT Plug and Play

Across our ecosystem of partners and customers, we continue to see opportunities to simplify IoT. Companies are using IoT devices to help them find valuable insights ranging from how customers are using their products, to how they can optimize operations and reduce energy consumption. Yet there are also challenges to enabling these scenarios across energy, agriculture, retail, healthcare, and other industries as integrating IoT devices into cloud solutions can often be a time-consuming process.

Windows solved a similar industry problem with Plug and Play, which at its core, was a capability model that devices could declare and present to Windows when they were connected. This capability model made it possible for thousands of different devices to connect to Windows and be used without any software having to be installed manually on Windows.

IoT Plug and Play—which was announced during Microsoft Build in May 2019—similarly addresses the ecosystem need to declare an open model language through an open approach. IoT Plug and Play is currently available in preview and offers numerous advantages for device builders, solution builders, and customers alike when it comes to reducing solution development time, cost, and complexity. By democratizing device integration, IoT Plug and Play helps remove entry barriers and opens new IoT device use cases. Since IoT Plug and Play-enabled solutions can understand the device model to start using devices without customization, the same interaction model can be used in any industry. For instance, cameras used on the factory floor for inspection can also be used in retail scenarios.

The IoT Plug and Play certification process validates that devices meet core capabilities and are enabled for secure device provisioning. The use of IoT Plug and Play certified devices is recommended in all IoT solutions, even those that do not currently leverage all the capabilities, as migration of IoT Plug and Play-enabled devices is a simple process.

IoT Plug and Play saves partners time and money

IoT Plug and Play-capable devices can become a major business differentiator for device and solution builders. Microsoft partner, myDevices, is already leveraging IoT Plug and Play in their commercial IoT solutions. According to Adrian Sanchez del Campo, Vice President of Engineering, “The main value in IoT Plug and Play is the ease of developing a device that will be used in a connected fashion. It's the easiest way to connect any hardware to the cloud, and it allows for any company to easily define telemetry and properties of a device without writing any embedded code.”

Sanchez del Campo also says it saves time and money. For devices that monitor or serve as a gateway at the edge, IoT Plug and Play enables myDevices to cut their development cycle by half or more, accelerating proofs of concept while also reducing development costs.

Olivier Pauzet, Vice President Product, IoT Solutions, from Sierra Wireless agrees that IoT Plug and Play is a definite time and money saver. “IoT Plug and Play comes on top of the existing partnership and joint value brought by Sierra Wireless’s Octave all-in-one-edge-to-cloud solution and Azure IoT services,” says Pauzet. “For customers using Digital Twins or IoT Central, being able to leverage IoT Plug and Play on both Octave and Azure will expand capabilities while making solution development even faster and easier.”

In addition to faster time to market, IoT Plug and Play also provides benefits for simplifying solution development. “As a full edge-to-cloud solution provider, Sierra Wireless sees benefits in making customer devices reported through Octave cloud connectors compatible with IoT Plug and Play applications,” says Pauzet. “Making it even simpler for customers and system integrators to build reliable, secure, and flexible end-to-end solutions is a key benefit for the whole ecosystem.”

Benefits of IoT Plug and Play device certification from Microsoft

Achieving IoT Plug and Play certification offers multiple advantages, but at its core, the benefits revolve around device builders having confidence that their tailored devices will be more discoverable, be more readily promoted to a broader audience, and have a reduced time to market.

Once a device is IoT Plug and Play-certified, it can easily be used in any IoT Plug and Play-enabled solution which increases the market opportunity for device builders. IoT Plug and Play-certified devices are also surfaced to a worldwide audience, helping solution builders discover devices with the capabilities they need at a previously unreachable scale.

It also provides device builders with the opportunity to easily partner with other providers who have adopted the same open approach to create true end-to-end solutions. Plus, devices can be deployed in various solutions without a direct relationship between the device builder and solution builder, increasing your addressable market.

Device builders gain additional audience exposure and potential co-sell opportunities by getting IoT Plug and Play-certified devices featured and promoted in the Certified for Azure IoT device catalog. The catalog provides expanded opportunities to reach solution developers and device buyers, who can search for compatible devices.

Finally, IoT Plug and Play-certified devices appeal to solution builders because they enable time to value by simplifying and reducing the solution development cycle. IoT Plug and Play also gives extensibility to IoT Plug and Play-enabled solutions by enabling the seamless addition of more devices.

Achieving IoT Plug and Play certification

To achieve IoT Plug and Play certification from Microsoft, devices must meet the following requirements:

Defined device models and compliance with the Digital Twin Definition Language (DTDL) version 2.
Support Device Provisioning Services (DPS).
Physical device review.

The certification process comprises three phases: develop, certify, and publish. Develop phase activities include modeling and developing the code, storing the device models, and then iterating on and testing the code. The outcome is finalized device code that is ready to go through the IoT Plug and Play certification phase.

Certify phase activities require Microsoft Partner Network membership and onboarding to the Azure Certified Device submission portal. To kick off the certification process, developers must submit their IoT Plug and Play device model to the portal, along with relevant marketing details. Once complete, developers can connect and test in the certification portal, which takes the device through an automated set of validation tests.

Upon IoT Plug and Play certification, the device becomes eligible for publication to the Certified for Azure IoT device catalog. Publish phase activities include submitting the test results, device metadata, and Get Started Guide, along with the desired publish date, to Microsoft. Microsoft will work with the device builder to coordinate additional physical device review after the device is published.

Get started on IoT Plug and Play certification

Now is the right time to get ahead of the coming groundswell for IoT Plug and Play certification and begin maximizing your business potential. Begin the certification process by watching this video on how to certify IoT Plug and Play devices. For questions, reach out to us via email at IoT Certification.

For those considering device certification beyond IoT Plug and Play, stay tuned for future enhancements that will be announced soon. In the meantime, be sure to explore Azure IoT resources, including technical guidance, how-to guides, Microsoft Tech Community, and more.

Additional resources include:

IoT Plug and Play preview blog.
IoT Plug and Play documentation.
Certification tools:

Command line.
Azure Certified Device submission portal.

Azure Certified for IoT device catalog.
IoT Show for IoT Plug and Play.
IoT Plug and Play certification tutorial.

Source: Azure

Empowering remote learning with Azure Cognitive Services

This blog post was co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.

As schools and organizations around the world prepare for a new school year, remote learning tools have never been more critical. Educational technology, and especially AI, has a huge opportunity to facilitate new ways for educators and students to connect and learn.

Today, we are excited to announce the general availability of Immersive Reader, and shine a light on how new improvements to Azure Cognitive Services can help developers build AI apps for remote education that empower everyone.

Make content more accessible with Immersive Reader, now generally available

Immersive Reader is an Azure Cognitive Service within the Azure AI platform that helps readers read and comprehend text. Through today’s general availability, developers and partners can add Immersive Reader right into their products, enabling students of all abilities to translate text into over 70 languages, read text aloud, focus attention through highlighting and other design elements, and more.

Immersive Reader has become a critical resource for distance learning, with more than 23 million people every month using the tool to improve their reading and writing comprehension. Between February and May 2020, when many schools moved to a distance learning model, we saw a 560 percent increase in Immersive Reader usage. As the education community embarks on a new school year in the Fall, we expect to see continued momentum for Immersive Reader as a tool for educators, parents, and students.

With the general availability of Immersive Reader, we are also rolling out the following enhancements:

Immersive Reader SDK 1.1: Updates include support to have a page read aloud automatically, pre-translating content, and more. Learn about SDK updates.
New Neural Text-to-Speech (TTS) languages: Immersive Reader is adding 15 new Neural Text to Speech voices, enabling students to have content read aloud in even more languages. Learn about the new Neural Text to Speech languages.
New Translator languages: Translator is adding five new languages that will also be available in Immersive Reader—Odia, Kurdish (Northern), Kurdish (Central), Pashto, and Dari. Learn about the latest Translator languages.

Today, we’re adding new partners who are integrating Immersive Reader to make content more accessible: Code.org and SAFARI Montage.

Code.org is a nonprofit dedicated to expanding access to computer science in schools. To ensure that students of all backgrounds and abilities can access their resources and course content, Code.org is integrating Immersive Reader into their platform.

“We’re thrilled to partner with Microsoft to bring Immersive Reader to the Code.org community. The inclusive capabilities of Immersive Reader to improve reading fluency and comprehension in learners of varied backgrounds, abilities, and learning styles directly aligns with our mission to ensure every student in every school has the opportunity to learn computer science.” – Hadi Partovi, Founder and CEO of Code.org

SAFARI Montage, a leading learning object repository, is integrating Immersive Reader to make it possible for students of any language background or accessibility needs to engage with content, and enable families who don’t speak the language of instruction to be more involved in their students’ learning journeys.  

"Immersive Reader is a crucial support for CPS students and families. During remote learning, particularly for our younger learners, student learning is often supported by parents, guardians, or other caregivers. Since Immersive Reader can be used to translate the student-facing instructions in our digital curriculum, families can support student learning in over 80 languages, making digital learning far more equitable and accessible than ever before! In addition, read-aloud and readability supports are game-changers for diverse learners" – Giovanni Benincasa, UX Manager, Department of Curriculum, Instruction, and Digital Learning, Chicago Public Schools  

With Immersive Reader, all it takes is a single API call to help users boost literacy. To start exploring how to integrate Immersive Reader into your app or service, check out these resources: 

Software Development Kit (SDK): Immersive Reader SDK. 
Documentation: Immersive Reader documentation. 
Getting started videos: Immersive Reader videos.
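Before making that single API call, an app needs an Azure AD token for the Immersive Reader resource. The sketch below prepares (but does not send) such a token request using the common client-credentials pattern; the endpoint, resource URI, and placeholder credentials are assumptions to verify against the Immersive Reader documentation.

```python
# Hedged sketch: preparing the Azure AD client-credentials request
# whose resulting token the Immersive Reader SDK's launch call expects.
# All credential values are placeholders; nothing is sent here.

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return the token endpoint URL and form body for an AAD request."""
    url = f"https://login.windows.net/{tenant_id}/oauth2/token"
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://cognitiveservices.azure.com/",
    }
    return url, form

url, form = build_token_request("my-tenant", "my-client", "my-secret")
```

The returned token, together with the Immersive Reader resource subdomain and the content to display, is what the client-side SDK consumes to launch the reading experience.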

To see the growing list of Immersive Reader partners and learn more, check out our partners page and Immersive Reader education blog.

Bring online courses to life with speech-enabled apps

With the shift to remote learning, another challenge that educators may face is continuing to drive student engagement.

Text to Speech, a Speech service feature that converts text into lifelike audio, can facilitate new ways for students to interact with content. In addition to powering features like Read Aloud in Immersive Reader and the Microsoft Edge browser, Text to Speech enables developers to build apps that speak naturally in more than 110 voices across more than 45 languages and variants.

With the Audio Content Creation tool, users can more easily bring audiobooks to life and fine-tune audio characteristics like voice style, rate, pitch, and pronunciation to fit their scenarios—no code required. Voices can even be customized for specific characters or personas; the Custom Neural Voice capability makes it possible to build one-of-a-kind voices, starting with 30 minutes of audio. Duolingo, for example, is using the Custom Neural Voice capability to create unique voices to represent different characters in its language courses.
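Under the hood, tuning voice style, rate, and pitch comes down to SSML markup, which can also be generated in code. The sketch below builds such a document; the voice name, style, and namespace values are illustrative examples to check against the Speech service SSML reference, not an exhaustive or authoritative list.

```python
# Hedged sketch: generating SSML that adjusts voice style, rate, and
# pitch, similar to what the no-code Audio Content Creation tool
# produces. Voice name and style below are illustrative assumptions.

def build_ssml(text: str, voice: str = "en-US-JennyNeural",
               style: str = "cheerful", rate: str = "+10%",
               pitch: str = "+2st") -> str:
    """Wrap text in SSML with express-as style and prosody settings."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<mstts:express-as style="{style}">'
        f'<prosody rate="{rate}" pitch="{pitch}">{text}</prosody>'
        '</mstts:express-as></voice></speak>'
    )

ssml = build_ssml("Welcome to today's lesson.")
```

Passing a string like this to a speech synthesizer, rather than plain text, is what gives an app per-passage control over how the audio sounds.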

To learn more about how to start creating speech-enabled apps for remote learning, check out the technical Text to Speech blog and other resources:

Demos: Text to Speech. 
Documentation: Text to Speech.
Software Development Kit (SDK): Speech SDK.

Improve productivity and accessibility with transcription and voice commands 

AI can also be a useful tool for more seamless note-taking, making it possible for students and teachers to type with their voice. Transcribe in Word uses Speech to Text in Azure Cognitive Services to automatically transcribe your conversations. Now with speaker diarization, you can get a transcript that identifies who said what, when. 
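Diarized output can then be rendered into a readable record of who said what, when. The sketch below is a simplified illustration of that post-processing step; the input shape (speaker, start time, text) is an assumption for the example, not the actual service response format.

```python
# Illustrative post-processing of diarized speech-to-text results:
# given (speaker, start-time-in-seconds, text) tuples such as a
# transcription service might yield, render a timestamped transcript.
# The input structure here is a simplification for the sketch.

def format_transcript(segments: list[tuple[str, float, str]]) -> str:
    """Render diarized segments as '[mm:ss] Speaker: text' lines."""
    lines = []
    for speaker, start, text in segments:
        minutes, seconds = divmod(int(start), 60)
        lines.append(f"[{minutes:02d}:{seconds:02d}] {speaker}: {text}")
    return "\n".join(lines)

transcript = format_transcript([
    ("Teacher", 0.0, "Who can summarize chapter three?"),
    ("Student 1", 4.5, "It covers the water cycle."),
])
```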

In addition, adding voice enables more seamless experiences in Microsoft 365. Students who have difficulty writing things down can use AI-powered tools in Office not just for dictation but also for commands such as adding, formatting, editing, and organizing text. Word uses Language Understanding, an Azure Cognitive Service that enables you to add custom natural language understanding to your apps, to make it possible to capture ideas easily. To learn more about Language Understanding and how it is powering voice commands, check out our Language Understanding blog.
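The pattern behind voice commands is intent dispatch: a language-understanding model maps an utterance to an intent plus entities, and the app maps the intent to an action. The dispatcher below is a hypothetical sketch of that last step; the intent names and actions are invented for illustration, not Word's actual command set.

```python
# Hypothetical sketch: dispatching recognized intents to editor
# actions, in the spirit of Language Understanding-backed voice
# commands. Intent names ("AddText", "DeleteLast", "Uppercase")
# and the list-of-lines document model are invented for illustration.

def apply_command(intent: str, entities: dict, document: list[str]) -> list[str]:
    """Mutate a simple line-based document according to one intent."""
    if intent == "AddText":
        document.append(entities.get("text", ""))
    elif intent == "DeleteLast" and document:
        document.pop()
    elif intent == "Uppercase" and document:
        document[-1] = document[-1].upper()
    return document

doc: list[str] = []
doc = apply_command("AddText", {"text": "meeting notes"}, doc)
doc = apply_command("Uppercase", {}, doc)
```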

For more details on how AI is powering experiences in Microsoft 365, read the Microsoft 365 blog.

Get started today

We can’t wait to see what you’ll build. Get started today with Azure Cognitive Services and an Azure free account.
Source: Azure

Microsoft and TSMC announce Joint Innovation Lab to accelerate silicon design on Azure

Right now, every industry and every customer is going through a massive transformation, and cloud computing is often a central enabler of this. The silicon industry is also experiencing change as a critical part of the fast-growing cloud computing ecosystem—an ecosystem where silicon and chip design workloads must meet a rising bar for performance and complexity while keeping costs in check. In Azure, we focus on the needs of the semiconductor design industry, so we can solve problems and provide solutions for our customers. With our partners at TSMC, we share the belief that using the cloud for silicon design will be a competitive advantage for those that embrace it.

Through our deep partnership, we have worked closely to implement an Azure-based architecture for TSMC’s Virtual Design Environment, refined cloud resource selection and storage architectures for specific workloads, and demonstrated cost versus performance optimizations for scalable workloads. This requires both new virtual machine (VM) types best suited to EDA (electronic design automation) workloads and a cloud-optimized design solution that fully utilizes EDA parallelism. Working with TSMC and its EDA ecosystem partners, we have jointly achieved multiple breakthroughs in both areas.

Microsoft and TSMC support cloud adoption in the silicon industry

To continue this momentum, Microsoft is launching the Joint Innovation Lab with TSMC: a collaboration platform for integrating cloud and EDA innovations that will help give the semiconductor industry the performance and cost effectiveness needed to accelerate time-to-market, optimize development costs, and unleash product innovation.

“Nurturing ecosystem collaboration has been the core of TSMC Open Innovation Platform® (OIP), and this Joint Innovation Lab with Microsoft is one big step forward elevating cross-industry partnership to the next level. TSMC has been one of the earliest drivers of cloud to speed up design enablement for customers since 2018. Through our collaboration with Cloud Alliance members, we can lower entry barriers of Cloud adoption for our common customers and help customers conduct IC design securely in the Cloud and achieve faster time-to-market. Microsoft has been a great partner, and its Silicon on Azure team shares a similar vision with us.” – Dr. Cliff Hou, Senior Vice President of Technology Development at TSMC

Security is foundational for Azure, and we are one of the first cloud service providers certified by TSMC. In addition to our investments of over $1 billion a year in cybersecurity and more than 3,500 engineers dedicated to security, we continue to focus on the needs of the semiconductor industry to protect its intellectual property. Building on this security foundation, the Joint Innovation Lab aims to host in-depth collaborations among ecosystem partners to drive new solutions and transform IC design in the cloud:

Next-generation VM types: We are working to optimize new VM types across CPU performance, core count, memory-to-core ratio, and local storage, combined with the most effective storage options, targeting EDA workloads for the highly complex IC designs enabled by the most advanced process technologies.
Cloud-optimized EDA solutions: We are working to create cloud-optimized design solutions that combine tools and methodologies to fully utilize EDA parallelism. With the cloud’s massive computational power lifting in-house compute limitations, a brand-new category of EDA optimization opens up to fully explore parallelism opportunities.
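The parallelism idea can be sketched simply: independent pieces of a design job fan out across workers, much as EDA workloads can scale across cloud cores. In the toy example below, the job function and the process-corner names are placeholders invented for illustration, not a real EDA tool or flow.

```python
# Hedged sketch of EDA-style fan-out: independent analyses (stand-in
# "process corners" here) run concurrently across a pool of workers.
# analyze_corner is a toy placeholder; real CPU-bound EDA jobs would
# use a process pool or separate cloud VMs rather than threads.
from concurrent.futures import ThreadPoolExecutor

def analyze_corner(corner: str) -> tuple[str, bool]:
    # Placeholder for an independent sign-off run on one corner;
    # pretend timing closed successfully.
    return corner, True

corners = ["ss_0p72v_125c", "ff_0p88v_m40c", "tt_0p80v_25c"]

with ThreadPoolExecutor(max_workers=len(corners)) as pool:
    results = dict(pool.map(analyze_corner, corners))
```

Because each corner is independent, adding workers (or cloud cores) shortens wall-clock time roughly in proportion, which is the opportunity the cloud's burstable capacity opens up.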

TSMC and Microsoft have worked together since 2018, when TSMC announced its OIP Cloud Alliance and OIP Virtual Design Environment (OIP VDE), to enable cloud-powered silicon design approaches on Azure that have demonstrated significant design-cycle improvements. Using the vast computational power of Azure services and TSMC’s silicon expertise, we have been able to optimize production runs around the world. It is not uncommon for EDA design jobs to take months due to in-house compute limitations. With access to Azure’s burstable resources to scale to tens of thousands of cores quickly, silicon designers can now achieve much faster time to market with improved efficiency while handling surge demand.

The Silicon on Azure team, led by Mujtaba Hamid, has always taken a broad industry approach to silicon design on Azure, and both our teams are committed to sharing what we learn with the semiconductor design community. Most recently, our collaboration led to two technical whitepapers available through TSMC-Online, outlining optimal cloud usage to speed up mission-critical timing sign-off jobs in a big way, while achieving cost optimization at the same time.

I am very excited about the Joint Lab effort and look forward to jointly engaging with all our partners. Stay tuned for more details and announcements coming out of this collaboration.

For further details on running silicon workloads on Azure, visit our Azure high-performance computing (HPC) for silicon page. To contact the team, please email Azure for Silicon. 
Source: Azure