Azure Blob Storage lifecycle management generally available

Data sets have unique lifecycles. Some data is accessed often early in the lifecycle, but the need for access drops drastically as the data ages. Some data remains idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation while other data sets are actively read and modified throughout their lifetimes.

Today we are excited to share the general availability of Blob Storage lifecycle management so that you can automate blob tiering and retention with custom defined rules. This feature is available in all Azure public regions.

Lifecycle management

Azure Blob Storage lifecycle management offers a rich, rule-based policy which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle.

Lifecycle management policy helps you:

Transition blobs to a cooler storage tier such as hot to cool, hot to archive, or cool to archive in order to optimize for performance and cost
Delete blobs at the end of their lifecycles
Define up to 100 rules
Run rules automatically once a day
Apply rules to containers or a specific subset of blobs, up to 10 prefixes per rule

To learn more, visit our documentation, “Managing the Azure Blob storage lifecycle.”

Example

Consider a data set that is accessed frequently during the first month, is needed only occasionally for the next two months, is rarely accessed afterwards, and is required to be expired after seven years. In this scenario, hot storage is the best tier to use initially, cool storage is appropriate for occasional access, and archive storage is the best tier after several months and before it is deleted seven years later.

The following sample policy manages the lifecycle for such data. It applies to block blobs in container “foo”:

Tier blobs to cool storage 30 days after last modification
Tier blobs to archive storage 90 days after last modification
Delete blobs 2,555 days (seven years) after last modification
Delete blob snapshots 90 days after snapshot creation

{
  "rules": [
    {
      "name": "ruleFoo",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "foo" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          },
          "snapshot": {
            "delete": { "daysAfterCreationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
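If you assemble policies programmatically before submitting them through your deployment tooling, the structure above is plain JSON and straightforward to generate. A minimal Python sketch (the `make_rule` helper is illustrative, not part of any Azure SDK):

```python
import json

def make_rule(name, prefixes, cool_days, archive_days, delete_days,
              snapshot_delete_days):
    """Build one lifecycle rule matching the JSON structure shown above."""
    return {
        "name": name,
        "enabled": True,
        "type": "Lifecycle",
        "definition": {
            "filters": {
                "blobTypes": ["blockBlob"],
                "prefixMatch": prefixes,
            },
            "actions": {
                "baseBlob": {
                    "tierToCool": {"daysAfterModificationGreaterThan": cool_days},
                    "tierToArchive": {"daysAfterModificationGreaterThan": archive_days},
                    "delete": {"daysAfterModificationGreaterThan": delete_days},
                },
                "snapshot": {
                    "delete": {"daysAfterCreationGreaterThan": snapshot_delete_days}
                },
            },
        },
    }

# Reproduce the sample policy for container "foo".
policy = {"rules": [make_rule("ruleFoo", ["foo"], 30, 90, 2555, 90)]}
print(json.dumps(policy, indent=2))
```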

Pricing

Lifecycle management is free of charge. Customers are charged the regular operation cost for the “List Blobs” and “Set Blob Tier” API calls initiated by this feature. To learn more about pricing visit the Block Blob pricing page.
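To get a feel for those operation charges, here is a back-of-the-envelope sketch. The per-10,000-operation prices below are placeholder assumptions, not actual Azure rates; check the Block Blob pricing page for real numbers:

```python
# Rough cost sketch for lifecycle-management operation charges.
# Both prices are ASSUMED for illustration, not current Azure pricing.
LIST_PRICE_PER_10K = 0.05      # assumed: "List Blobs" operations, per 10,000
SET_TIER_PRICE_PER_10K = 0.10  # assumed: "Set Blob Tier" operations, per 10,000

def monthly_operation_cost(list_ops, set_tier_ops):
    """Estimate a month's charges for policy-initiated operations."""
    return (list_ops / 10_000) * LIST_PRICE_PER_10K + \
           (set_tier_ops / 10_000) * SET_TIER_PRICE_PER_10K

# e.g. one daily listing of 1M blobs, plus 50,000 tier transitions in the month
cost = monthly_operation_cost(list_ops=30 * 1_000_000, set_tier_ops=50_000)
print(f"${cost:.2f}")
```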

Next steps

We are confident that Azure Blob Storage lifecycle management policy will simplify your cloud storage management and cost optimization strategy. We look forward to hearing your feedback on this feature and suggestions for future improvements through email at DLMFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at Azure Storage feedback forum.
Source: Azure

Happy birthday to managed Open Source RDBMS services in Azure!

March 20, 2019 marked the first anniversary of general availability for our managed Open Source relational database management system (RDBMS) services, including Azure Database for PostgreSQL and Azure Database for MySQL. A great year of learning and improvements lies behind us, and we are looking forward to an exciting future!

Thank you to all our customers, who have trusted Azure to host their Open Source Software (OSS) applications with MySQL and PostgreSQL databases. We are very grateful for your support and for pushing us to build the best managed services in the cloud!

It’s amazing to see the variety of mission-critical applications that customers run on top of our services. From line-of-business applications and real-time event processing to Internet of Things applications, we see all possible patterns running across our different OSS RDBMS offerings. Check out some great success stories by reading our case studies! It’s humbling to see the trust our customers put in the platform! We love the challenges posed by this variety of use cases, and we are always hungry to learn and provide even better support.

We wouldn’t have reached this point without ongoing feedback and feature requests from our customers. There have been asks for functionality such as read replicas, greater performance, extended regional coverage, additional RDBMS engines like MariaDB, and more. In response, over the year since our services became generally available, we have delivered features and functionality to address these asks. Just check out some of the announcements we have made over the past year:

Latest updates to Azure Database for MySQL – July 2018
Latest updates to Azure Database for PostgreSQL – July 2018
Latest updates to Open Source Database Services for Azure – September 2018 (Ignite)
Announcing the general availability of Azure Database for MariaDB – December 2018
Read Replicas for Azure Database for PostgreSQL in public preview – January 2019
Scaling out read workloads in Azure Database for MySQL – March 2019
Service update announcements for:

Azure Database for MySQL
Azure Database for PostgreSQL
Azure Database for MariaDB

We also want to enable customers to focus on using these features when developing their applications. To that end, we are constantly enhancing our compliance certification portfolio to address a broader set of standards. This gives customers peace of mind, knowing that our services are safe and secure. We have also introduced features such as Threat Protection (MySQL, PostgreSQL) and Intelligent Performance (PostgreSQL) to the OSS RDBMS services, so there are two fewer things to worry about!

Open Source is all about the community and the ecosystem built around the Open Source products delivered by the community. We want to bring this goodness to our platform and support it so that customers can leverage the benefits when using our managed services. For example, we have recently announced support for GraphQL with Hasura and TimescaleDB! However, we want to be more than a consumer and make significant contributions to the community. Our first major contribution was the release of the Open Source Azure Data Studio with support for PostgreSQL.

While we are proud to highlight these developments, we also understand that we are still at the outset of the journey. We have a lot of work to do and many challenges to overcome, but we are continuing to move ahead at full steam. We are thrilled to have Citus Data joining the team, and you can expect to see a lot of focus on enabling improved performance, greater scale, and more built-in intelligence. Find more information about this acquisition by visiting the blog post, “Microsoft and Citus Data: Providing the best PostgreSQL service in the cloud.”

Next steps

In the interim, be sure to take advantage of the following helpful resources.

Azure Database for PostgreSQL

Performance best practices for using Azure Database for PostgreSQL – Connection Pooling
Performance troubleshooting using new Azure Database for PostgreSQL features
Performance updates and tuning best practices for using Azure Database for PostgreSQL
Best practices for alerting on metrics with Azure Database for PostgreSQL monitoring
Securely monitoring your Azure Database for PostgreSQL Query Store

Azure Database for MySQL

Best practices for alerting on metrics with Azure Database for MySQL monitoring

Azure Database for MariaDB

Best practices for alerting on metrics with Azure Database for MariaDB monitoring

We look forward to continued feedback and feature requests from our customers. More than ever, we are committed to ensuring that our OSS RDBMS services are top-notch leaders in the cloud! Stay tuned, as we have a lot more in the pipeline!
Source: Azure

Analysis of network connection data with Azure Monitor for virtual machines

Azure Monitor for virtual machines (VMs) collects network connection data that you can use to analyze the dependencies and network traffic of your VMs. You can analyze the number of live and failed connections, bytes sent and received, and the connection dependencies of your VMs down to the process level. If malicious connections are detected, the data includes information about those IP addresses and their threat level. The newly released VMBoundPort data set enables analysis of open ports and their connections for security analysis.

To begin analyzing this data, you will need to onboard to Azure Monitor for VMs.

Workbooks

If you would like to start your analysis with a prebuilt, editable report, you can try out some of the Workbooks we ship with Azure Monitor for VMs. Once onboarded, navigate to Azure Monitor and select Virtual Machines (preview) from the Insights menu section. From here, you can navigate to the Performance or Map tab and select View Workbook to open the Workbook gallery, which includes the following Workbooks that analyze our network data:

Connections overview
Failed connections
TCP traffic
Traffic comparison
Active ports
Open ports

These editable reports let you analyze your connection data for a single VM, groups of VMs, and virtual machine scale sets.

Log Analytics

If you want to use Log Analytics to analyze the data, you can navigate to Azure Monitor and select Logs to begin querying the data. The logs view will show the name of the workspace that has been selected and the schema within that workspace. Under the ServiceMap data type you will find two tables:

VMBoundPort
VMConnection

You can copy and paste the queries below into the Log Analytics query box to run them. Please note, you will need to edit a few of the examples below to provide the name of a computer that you want to query.

Common queries

Review the count of ports open on your VMs, which is useful when assessing the configuration and security vulnerabilities of your VMs.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize by Computer, Machine, Port, Protocol
| summarize OpenPorts=count() by Computer, Machine
| order by OpenPorts desc

List the bound ports on your VMs, which is useful when assessing the configuration and security vulnerabilities of your VMs.

VMBoundPort
| distinct Computer, Port, ProcessName

Analyze network activity by port to determine how your application or service is configured.

VMBoundPort
| where Ip != "127.0.0.1"
| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
| project-away TimeGenerated
| order by Machine, Computer, Port, Ip, ProcessName

Bytes sent and received trends for your VMs.

VMConnection
| summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated, 1h), Computer
| order by Computer desc
//| limit 5000
| render timechart

If you have a lot of computers in your workspace, you may want to uncomment the limit statement in the example above. You can use the chart tools to view either bytes sent or received, and to filter down to specific computers.

Connection failures over time, to determine if the failure rate is stable or changing.

VMConnection
| where Computer == <replace this with a computer name, e.g. 'acme-demo'>
| extend bythehour = datetime_part("hour", TimeGenerated)
| project bythehour, LinksFailed
| summarize failCount = sum(LinksFailed) by bythehour
| sort by bythehour asc
| render timechart

Link status trends, to analyze the behavior and connection status of a machine.

VMConnection
| where Computer == <replace this with a computer name, e.g. 'acme-demo'>
| summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
| render timechart

Getting started with log queries in Azure Monitor for VMs

To learn more about Azure Monitor for VMs, please read our overview, “What is Azure Monitor for VMs (preview).” If you are already using Azure Monitor for VMs, you can find additional example queries in our documentation for querying data with Log Analytics.
Source: Azure

Azure Stack IaaS – part six

Pay for what you use

In the virtualization days I used to pad all my requests for virtual machines (VMs) to get the largest size possible. Since decisions and requests took time, I would ask for more than I required just so I wouldn’t face delays if I needed more capacity. This resulted in a lot of waste, and in a term I heard often: VM sprawl.

The behavior is different with Infrastructure-as-a-Service (IaaS) VMs in the cloud. A fundamental quality of a cloud is that it provides an elastic pool of resources to use when needed. Since you only pay for what you use, you don’t need to over-provision. Instead, you can optimize capacity based on demand. Let me show you some of the ways you can do this for your IaaS VMs running in Azure and Azure Stack.

Resize

It’s hard to know exactly how big your VM should be. There are so many dimensions to consider, such as CPU, memory, disks, and network. Instead of trying to predict what your VM needs for the next year or even month, why not take a guess, let it run, and then adjust the size once you have some historical data?

Azure and Azure Stack make it easy for you to resize your VM from the portal. Pick the new size and you’re done. No need to call the infrastructure team and beg for more capacity. No need to overspend on a huge VM that isn’t even used.
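The "let it run, then adjust" approach boils down to picking the smallest size that covers your observed peaks. A hypothetical Python sketch (the size table is illustrative only; consult the Azure and Azure Stack VM size documentation for the real catalog):

```python
# Illustrative sketch of picking the smallest VM size that covers observed
# peak demand. The sizes and specs below are EXAMPLES, not the full catalog.
SIZES = [
    # (name, vCPUs, memory GiB), ordered smallest first
    ("Standard_D2_v3", 2, 8),
    ("Standard_D4_v3", 4, 16),
    ("Standard_D8_v3", 8, 32),
    ("Standard_D16_v3", 16, 64),
]

def right_size(peak_vcpus_used, peak_mem_gib):
    """Return the smallest size whose specs cover observed peak usage."""
    for name, cpus, mem in SIZES:
        if cpus >= peak_vcpus_used and mem >= peak_mem_gib:
            return name
    return SIZES[-1][0]  # demand exceeds the table: take the largest

print(right_size(3, 10))  # peaks at 3 vCPUs / 10 GiB -> Standard_D4_v3
```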

Learn more:

Resize an Azure Virtual Machine
Azure Virtual Machine sizes
Azure Stack Virtual Machine sizes

Scale out

Another dimension of scale is to make multiple copies of identical VMs that work together as a unit. When you need more, create additional VMs. When you need less, remove some of the VMs. Azure has a feature for this called Virtual Machine Scale Sets (VMSS), which is also available in Azure Stack. You can create a VMSS with a wizard. Fill out the details of how the VM should be configured, including which extensions to use and which software to load onto your VM. Azure takes care of wiring the network, placing the VMs behind a load balancer, creating the VMs, and running the in-guest configuration.

Once you have created the VMSS, you can scale it out or in. Azure automates everything for you. You control it like IaaS, but scale it like PaaS. It was never this easy in the virtualization days.
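Conceptually, a scale-out rule is just a capacity function over a load metric. A toy sketch of that decision logic (the thresholds and step sizes here are made up for illustration, not Azure autoscale defaults):

```python
# Toy sketch of the scale-out/scale-in decision an autoscale rule makes for
# a scale set. All thresholds and limits below are ASSUMED values.
def next_capacity(current, avg_cpu_percent, scale_out_at=75, scale_in_at=25,
                  step=1, minimum=2, maximum=10):
    """Return the desired instance count given average CPU across the set."""
    if avg_cpu_percent > scale_out_at:
        return min(current + step, maximum)   # hot: add an instance
    if avg_cpu_percent < scale_in_at:
        return max(current - step, minimum)   # idle: remove an instance
    return current  # within the comfort band: leave capacity unchanged

print(next_capacity(4, 90))  # hot: grows to 5
print(next_capacity(4, 10))  # idle: shrinks to 3
```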

Learn more:

Azure Virtual Machine Scale Sets
Virtual Machine Scale Sets in Azure Stack

Add, remove, and resize disk

Just like virtual machines in the cloud, storage is pay per use. Both Azure and Azure Stack make it easy for you to manage the disks running on that storage so you only need to use what your application requires. Adding, removing, and resizing data disks is a self-service action so you can right-size your VM’s storage based on your current needs.

Learn more:

Add a disk to an Azure Virtual Machine
Remove a disk on an Azure Virtual Machine
Resize a disk of an Azure Virtual Machine

Usage based pricing

Just like Azure, Azure Stack prices are based on how much you use. Since you take on the hardware and operating costs, Azure Stack service fees are typically lower than Azure prices. Your Azure Stack usage will show up as line items in your Azure bill. If you run your Azure Stack in a network which is disconnected from the Internet, Azure Stack offers a yearly capacity model.

Pay-per-use really benefits Azure Stack customers. For example, one organization runs a machine learning model once a month. It takes about one week for the computation. During this time, they use all the capacity of their Azure Stack, but for the other three weeks of the month, they run light, temporary workloads on the system. A later blog will cover how automation and infrastructure-as-code allow you to quickly set this up and tear it down, so you use only what the app needs in the time window it’s needed. Right-sizing and pay-per-use save you a lot of money.
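The arithmetic behind that example is simple. A sketch with assumed hourly rates (the dollar figures are illustrative only; real Azure Stack service fees depend on your agreement):

```python
# Back-of-the-envelope comparison for the scenario above: heavy compute one
# week per month, light usage the rest. Both rates are MADE-UP examples.
FULL_RATE = 10.0   # assumed $/hour when all capacity is busy
LIGHT_RATE = 1.0   # assumed $/hour for the light residual workloads

heavy_hours = 7 * 24   # one week of computation
light_hours = 21 * 24  # remaining three weeks of the month

pay_per_use = heavy_hours * FULL_RATE + light_hours * LIGHT_RATE
provisioned = (heavy_hours + light_hours) * FULL_RATE  # paying for peak 24/7

print(f"pay-per-use: ${pay_per_use:.0f} vs provisioned-for-peak: ${provisioned:.0f}")
```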

Learn more:

Microsoft Azure Stack packaging and pricing

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Protect your stuff
Fundamentals of IaaS
Do it yourself
It takes a team
If you do it often, automate it
Build on the success of others
Journey to PaaS

Source: Azure

Azure Sphere ecosystem accelerates innovation

The Internet of Things (IoT) promises to help businesses cut costs and create new revenue streams, but it also brings an unsettling amount of risk. No one wants a fridge that gets shut down by ransomware, a toy that spies on children, or a production line that’s brought to a halt through an entry point in a single hacked sensor.

So how can device builders bring a high level of security to the billions of network-connected devices expected to be deployed in the next decade?

It starts with building security into your IoT solution from the silicon up. In this piece, I will discuss the holistic device security of Azure Sphere, as well as how the expansion of the Azure Sphere ecosystem is helping to accelerate the process of taking secure solutions to market. For additional partner-delivered insights around Azure Sphere, view the Azure Sphere Ecosystem Expansion Webinar.

A new standard for security

Small, lightweight microcontrollers (or MCUs) are the most common class of computer, powering everything from appliances to industrial equipment. Organizations have learned that security for their MCU-powered devices is critical to their near-term sales and to the long-term success of their brands (one successful attack can drive customers away from the affected brand for years). Yet predicting which devices can endure attacks is difficult.

Through years of experience, Microsoft has learned that to be highly secured, a connected device must possess seven specific properties:

Hardware-based root of trust: The device must have a unique, unforgeable identity that is inseparable from the hardware.
Small trusted computing base: Most of the device's software should be outside a small trusted computing base, reducing the attack surface for security resources such as private keys.
Defense in depth: Multiple layers of defense mean that even if one layer of security is breached, the device is still protected.
Compartmentalization: Hardware-enforced barriers between software components prevent a breach in one from propagating to others.
Certificate-based authentication: The device uses signed certificates to prove device identity and authenticity.
Renewable security: Updated software is installed automatically and devices that enter risky states are always brought into a secure state.
Failure reporting: All device failures, which could be evidence of attacks, are reported to the manufacturer.

These properties work together to keep devices protected and secured in today's dynamic threat landscape. Omitting even one of these seven properties can leave devices open to attack, creating situations where responding to security events is difficult and costly. The seven properties also act as a practical framework for evaluating IoT device security.
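Because the seven properties act as a practical evaluation framework, a device review can start as a simple check of a design against the list. An illustrative sketch, not an official assessment tool:

```python
# The seven properties as a simple evaluation checklist -- an illustrative
# sketch only, not an official Azure Sphere assessment tool.
SEVEN_PROPERTIES = [
    "hardware-based root of trust",
    "small trusted computing base",
    "defense in depth",
    "compartmentalization",
    "certificate-based authentication",
    "renewable security",
    "failure reporting",
]

def missing_properties(device_properties):
    """Return which of the seven properties a device design omits."""
    present = {p.lower() for p in device_properties}
    return [p for p in SEVEN_PROPERTIES if p not in present]

# A design with only cert auth and update support still omits five properties.
gaps = missing_properties(["certificate-based authentication",
                           "renewable security"])
print(len(gaps))  # -> 5
```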

How Azure Sphere helps you build secure devices

Azure Sphere, Microsoft’s end-to-end solution for creating highly secured, connected devices, delivers these seven properties, making it easy and affordable for device manufacturers to create devices that are innately secure and prepared to meet evolving security threats. Azure Sphere introduces a new class of MCU that includes built-in Microsoft security technology and connectivity, with the headroom to support dynamic experiences at the intelligent edge.

Multiple levels of security are baked into the chip itself. The secured Azure Sphere OS runs on top of the hardware layer, only allowing authorized software to run. The Azure Sphere Security Service continually verifies the device's identity and authenticity and keeps its software up to date. Azure Sphere has been designed for security and affordability at scale, even for low-cost devices. 

Opportunities for ecosystem expansion

In today’s world, device manufacturing partners view security as a necessity for creating connected experiences. The end-to-end security of Azure Sphere creates a potential for significant innovation in IoT. With a turnkey solution that helps prevent, detect, and respond to threats, device manufacturers don’t need to invest in additional infrastructure or staff to secure these devices. Instead, they can focus their efforts on rethinking business models, product experiences, how they serve customers, and how they predict customer needs.

To accelerate innovation, we’re working to expand our partner ecosystem. Ecosystem expansion offers many advantages. It reduces the overall complexity of the final product and speeds time to market. It frees up device builders to expand technical capabilities to meet the needs of customers. Plus, it enables more responsive innovation of feature sets for module partners and customization of modules for a diverse ecosystem. Below we’ve highlighted some partners who are a key part of the Azure Sphere ecosystem.

Seeed Studio, a Microsoft partner that specializes in hardware prototyping, design, and manufacturing for IoT solutions, has been selling their MT3620 Development Board since April 2018. They also sell complementary hardware that enables rapid, solder-free prototyping using their Grove system of modular sensors, actuators, and displays. In September 2018, they released the Seeed Grove starter kit, which contains an expansion shield and a selection of sensors. Beyond prototyping hardware, they also plan to launch more vertical solutions based on Azure Sphere for the IoT market. In March, Seeed also introduced the MT3620 Mini Dev Board, a lite version of Seeed’s previous Azure Sphere MT3620 Development Kit. Seeed developed this board to meet the needs of developers who want smaller sizes, greater scalability, and lower costs.

AI-Link has released the first Azure Sphere module that is ready for mass production. AI-Link is the top IoT module developer and manufacturer in the market today and shipped more than 90 million units in 2018.

Avnet, an IoT solution aggregator and Azure Sphere chip distributor, unveiled their Azure Sphere module and starter kit in January 2019. Avnet will also build a library of general and application-specific Azure Sphere reference designs to accelerate customer adoption and time to market for Azure Sphere devices and solutions.

Universal Scientific Industrial (Shanghai) Co., Ltd. (USI) recently unveiled their Azure Sphere combo module, uniquely designed for IoT applications, with multi-functional design-in support through a standard SDK. Customers can easily migrate from a discrete MCU solution to build devices based on this module, gaining secured connectivity to the cloud and a shorter design cycle.

Learn more about the Azure Sphere ecosystem

To learn more, view the on-demand Azure Sphere Ecosystem Expansion webinar. You’ll hear from each of our partners as they discuss the Azure Sphere opportunity from their own perspective, as well as how you can take full advantage of Azure Sphere ecosystem expansion efforts.

For in-person opportunities to gain actionable insights, deepen partnerships, and unlock the transformative potential of intelligent edge and intelligent cloud IoT solutions, sign up for an in-person IoT in Action event coming to a city near you.
Source: Azure

High-Throughput with Azure Blob Storage

I am happy to announce that High-Throughput Block Blob (HTBB) is globally enabled in Azure Blob Storage. HTBB provides significantly improved and instantaneous write throughput when ingesting larger block blobs, up to the storage account limits for a single blob. We have also removed the guesswork in naming your objects, enabling you to focus on building the most scalable applications and not worry about the vagaries of cloud storage.

HTBB demo of 12.5GB/s single blob throughput at Microsoft Ignite

I demonstrated the significantly improved write performance at Microsoft Ignite 2018. The demo application orchestrated the upload of 50,000 32 MiB blocks (1,600,000 MiB in total) from RAM to a single blob using Put Block operations. When all blocks were uploaded, it sent the block list to create the blob using the Put Block List operation. The upload was orchestrated across four D64v3 worker virtual machines (VMs), each writing 25 percent of the blocks. The total upload took around 120 seconds, which is about 12.5GB/s. Check out the demo in the video below to learn more.

GB+ throughput using a single virtual machine

To illustrate the possible performance using just a single VM, I created a D32v3 VM running Linux in West US 2. I stored the files to upload on a local RAM disk so that local storage performance would not affect the results. I created the files using the head command with input from /dev/urandom to fill them with random data. Finally, I used AzCopy v10 (v10.0.4) to upload the files to a standard storage account in the same region. I ran each test five times and report the average upload time in the table below.

Data set        Time to upload    Throughput
1,000 x 10MB    10 seconds        1.0 GB/s
100 x 100MB     8 seconds         1.2 GB/s
10 x 1GB        8 seconds         1.2 GB/s
1 x 10GB        8 seconds         1.2 GB/s
1 x 100GB       58 seconds        1.7 GB/s
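The throughput column follows directly from data size and upload time. A quick sketch recomputing it (decimal GB, matching the table):

```python
# Recompute the table's throughput column from data set size and upload time.
def throughput_gb_per_s(count, size_mb, seconds):
    """Total megabytes divided by upload time, expressed in GB/s (decimal)."""
    return (count * size_mb) / 1000 / seconds

print(round(throughput_gb_per_s(1000, 10, 10), 1))    # 1,000 x 10MB in 10s -> 1.0
print(round(throughput_gb_per_s(100, 100, 8), 1))     # 100 x 100MB in 8s -> 1.2
print(round(throughput_gb_per_s(1, 100_000, 58), 1))  # 1 x 100GB in 58s -> 1.7
```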

HTBB everywhere

HTBB is active on all your existing storage accounts, and does not require opt-in. It also comes without any extra cost. HTBB doesn’t introduce any new APIs and is automatically active when using Put Block or Put Blob operations over a certain size. The following table lists the minimum required Put Blob or Put Block size to activate HTBB.

Storage Account type               Minimum size for HTBB
StorageV2 (General purpose v2)     >4MB
Storage (General purpose v1)       >4MB
Blob Storage                       >4MB
BlockBlobStorage (Premium)         >256KB
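In practice this means choosing a block size above the threshold for your account type when splitting a blob into Put Block calls. A small sketch (the 8 MiB block size is just an example choice, not a recommendation):

```python
# Sketch: pick a block size above the HTBB activation threshold and see how
# many Put Block operations a given blob needs. Threshold from the table above.
HTBB_MIN_BYTES = 4 * 1024 * 1024  # >4MB for the standard account types

def blocks_needed(blob_bytes, block_bytes):
    """Number of Put Block operations to upload blob_bytes in block_bytes chunks."""
    return -(-blob_bytes // block_bytes)  # ceiling division

block_size = 8 * 1024 * 1024  # 8 MiB: comfortably above the 4MB threshold
blob_size = 10 * 1024 ** 3    # a 10 GiB blob
print(blocks_needed(blob_size, block_size))  # -> 1280
```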

Azure Tools and Services supporting HTBB

There is a broad set of tools and services that already support HTBB, including:

AzCopy v10 preview
Azure Data Lake Storage Gen2
Data Box
Azure Data Factory

Conclusion

We’re excited about the throughput improvements and application simplifications High-Throughput Block Blob brings to Azure Blob Storage! It is now available in all Azure regions and automatically active on your existing storage accounts at no extra cost. We look forward to hearing your feedback. To learn more about Blob Storage, please visit our product page.
Source: Azure

What’s new in Azure IoT Central – March 2019

In IoT Central, our aim is to simplify IoT. We want to make sure your IoT data drives meaningful actions and visualizations. In this post, I will share new features now available in Azure IoT Central including embedded Microsoft Flow, updates to the Azure IoT Central connector, Azure Monitor action groups, multiple dashboards, and localization support. We also recently expanded Jobs functionality in IoT Central, so you can check out the announcement blog post to learn more.

Microsoft Flow is now embedded in IoT Central

You can now build workflows using your favorite connectors directly within IoT Central. For example, you can build a temperature alert rule that triggers a workflow to send push notifications and SMS all in one place within IoT Central. You can also test and share the workflow, see the run history, and manage all workflows attached to that rule.

Try it out in your IoT Central app by visiting Device Templates in Rules, adding a new action, and picking the Microsoft Flow tile.

Updated Azure IoT Central connector: Send a command and get device actions

With the updated Azure IoT Central connector, you can now build workflows in Microsoft Flow and Azure Logic Apps that send commands to an IoT device and get device information such as its name, properties, and settings values. For example, you can build a workflow that tells an IoT device to reboot from a mobile app, and that displays the device’s temperature setting and location property in the app.

Try it out in Microsoft Flow or Azure Logic Apps by using the Send a command and Get device actions in your workflow.

Integration with Azure Monitor action groups

Azure Monitor action groups are reusable groups of actions that can be attached to multiple rules at once. Instead of creating separate actions for each rule and entering the recipient’s email address, SMS number, and webhook URL each time, you can choose an action group that contains all three from a drop-down and receive notifications on all three channels. The same action group can be attached to multiple rules and is reusable across Azure Monitor alerts.

Try it out in your IoT Central app by visiting Device Templates in Rules, adding a new action, and then pick the Azure Monitor action groups tile.

Multiple dashboards

Users can now create multiple personal dashboards in their IoT Central app, so you can build customized dashboards to better organize your devices and data. The default application dashboard is still available to all users, but each user of the app can create personalized dashboards and switch between them.

Localization support

As of today, IoT Central supports 17 languages! You can select your preferred language in the settings section in the top navigation, and this will apply when you use any app in IoT Central. Each user can have their own preferred language, and you can change it at any time.

With these new features, you can more conveniently build workflows as actions, reuse groups of actions, organize your visualizations across multiple dashboards, and work with IoT Central in your preferred language. Stay tuned for more developments in IoT Central. Until next time!

Next steps

Have ideas or suggestions for new features? Post them on UserVoice.
To explore the full set of features and capabilities and start your free trial, visit the IoT Central website.
Check out our documentation including tutorials to connect your first device.
To give us feedback about your experience with Azure IoT Central, take this survey.
To learn more about the Azure IoT portfolio including the latest news, visit the Microsoft Azure IoT page.

Source: Azure

Blob storage interface on Data Box is now generally available

The blob storage interface on the Data Box has been in preview since September 2018 and we are happy to announce that it's now generally available. This is in addition to the server message block (SMB) and network file system (NFS) interface already generally available on the Data Box.

The blob storage interface allows you to copy data into the Data Box via REST. In essence, this interface makes the Data Box appear like an Azure storage account. Applications that write to Azure blob storage can be configured to work with the Azure Data Box in exactly the same way. 

This enables very interesting scenarios, especially for big data workloads. Migrating large HDFS stores to Azure as part of an Apache Hadoop® migration is a popular ask. Using the blob storage interface of the Data Box, you can now easily point common copy tools like DistCp directly at the Data Box and access it as though it were another HDFS file system! Since most Hadoop installations come pre-loaded with the Azure Storage driver, you will most likely not have to make changes to your existing infrastructure to use this capability. Another key benefit of migrating via the blob storage interface is that you can choose to preserve metadata. For more details on migrating HDFS workloads, please review the Using Azure Data Box to migrate from an on-premises HDFS store documentation.

Blob storage on the Data Box enables partner solutions using native Azure blob storage to write directly to the Data Box. With this capability, partners like Veeam, Rubrik, and DefendX were able to utilize the Data Box to assist customers moving data to Azure.

For a full list of supported partners please visit the Data Box partner page.

For more details on using blob storage with Data Box, please see our official documentation for Azure Data Box Blob Storage requirements and a tutorial on copying data via Azure Data Box Blob Storage REST APIs.
Source: Azure

Announcing Azure Stack HCI: A new member of the Azure Stack family

It has been inspiring to watch how customers use Azure Stack to innovate and drive digital transformation across cloud boundaries. In her blog post today, Julia White shares examples of how customers are using Azure Stack to innovate on-premises using Azure services. Azure Stack shipped in 2017, and it is the only solution in the market today for customers to run cloud applications using consistent IaaS and PaaS services across public cloud, on-premises, and in disconnected environments. While customers love the fact that they can run cloud applications on-premises with Azure Stack, we understand that most customers also run important parts of their organization on traditional virtualized applications. Now we have a new option to deliver cloud efficiency and innovation for these workloads as well.

Today, I am pleased to announce Azure Stack HCI solutions are available for customers who want to run virtualized applications on modern hyperconverged infrastructure (HCI) to lower costs and improve performance. Azure Stack HCI solutions feature the same software-defined compute, storage, and networking software as Azure Stack, and can integrate with Azure for hybrid capabilities such as cloud-based backup, site recovery, monitoring, and more.

Adopting hybrid cloud is a journey and it is important to have a strategy that takes into account different workloads, skillsets, and tools. Microsoft is the only leading cloud vendor that delivers a comprehensive set of hybrid cloud solutions, so customers can use the right tool for the job without compromise. 

Choose the right option for each workload

Azure Stack HCI: Use existing skills, gain hyperconverged efficiency, and connect to Azure

Azure Stack HCI solutions are designed to run virtualized applications on-premises in a familiar way, with simplified access to Azure for hybrid cloud scenarios. This is a perfect solution for IT to leverage existing skills to run virtualized applications on new hyperconverged infrastructure while taking advantage of cloud services and building cloud skills.

Customers that deploy Azure Stack HCI solutions get amazing price/performance with Hyper-V and Storage Spaces Direct running on the most current industry-standard x86 hardware. Azure Stack HCI solutions include support for the latest hardware technologies like NVMe drives, persistent memory, and remote direct memory access (RDMA) networking.

IT admins can also use Windows Admin Center for simplified integration with Azure hybrid services to seamlessly connect to Azure for:

Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).
Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.
Cloud Witness, to use Azure as the lightweight tie breaker for cluster quorum.
Azure Backup for offsite data protection and to protect against ransomware.
Azure Update Management for update assessment and update deployments for Windows VMs running in Azure and on-premises.
Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site VPN.
Azure Security Center for threat detection and monitoring for VMs running in Azure and on-premises (coming soon).

Buy from your choice of hardware partners

Azure Stack HCI solutions are available today from 15 partners offering Microsoft-validated hardware systems to ensure optimal performance and reliability. Your preferred Microsoft partner gets you up and running without lengthy design and build time and offers a single point of contact for implementation and support services. 

Visit our website to find more than 70 Azure Stack HCI solutions currently available from these Microsoft partners: ASUS, Axellio, bluechip, DataON, Dell EMC, Fujitsu, HPE, Hitachi, Huawei, Lenovo, NEC, primeLine Solutions, QCT, SecureGUARD, and Supermicro.

Learn more

We know that a great hybrid cloud strategy is one that meets you where you are, delivering cloud benefits to all workloads wherever they reside. Check out these resources to learn more about Azure Stack HCI and our other Microsoft hybrid offerings:

Register for our Hybrid Cloud Virtual Event on March 28, 2019.
Learn more at our Azure Stack HCI solutions website.
Listen to Microsoft experts Jeff Woolsey and Vijay Tewari discuss the new Azure Stack HCI solutions.

FAQ

What do Azure Stack and Azure Stack HCI solutions have in common?

Azure Stack HCI solutions feature the same Hyper-V based software-defined compute, storage, and networking technologies as Azure Stack. Both offerings meet rigorous testing and validation criteria to ensure reliability and compatibility with the underlying hardware platform.

How are they different?

With Azure Stack, you can run Azure IaaS and PaaS services on-premises to consistently build and run cloud applications anywhere.

Azure Stack HCI is a better solution to run virtualized workloads in a familiar way – but with hyperconverged efficiency – and connect to Azure for hybrid scenarios such as cloud backup, cloud-based monitoring, etc.

Why is Microsoft bringing its HCI offering to the Azure Stack family?

Microsoft’s hyperconverged technology is already the foundation of Azure Stack.

Many Microsoft customers have complex IT environments and our goal is to provide solutions that meet them where they are with the right technology for the right business need. Azure Stack HCI is an evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services.

Will I be able to upgrade from Azure Stack HCI to Azure Stack?

No, but customers can migrate their workloads from Azure Stack HCI to Azure Stack or Azure.

How do I buy Azure Stack HCI solutions?

Follow these steps:

Buy a Microsoft-validated hardware system from your preferred hardware partner.
Install Windows Server 2019 Datacenter edition and Windows Admin Center for management and the ability to connect to Azure for cloud services.
Optionally, use your Azure account to attach management and security services to your workloads.

How does the cost of Azure Stack HCI compare to Azure Stack?

This depends on many factors.

Azure Stack is sold as a fully integrated system including services and support. It can be purchased as a system you manage, or as a fully managed service from our partners. In addition to the base system, the Azure services that run on Azure Stack or Azure are sold on a pay-as-you-use basis.

Azure Stack HCI solutions follow the traditional model. Validated hardware can be purchased from Azure Stack HCI partners and software (Windows Server 2019 Datacenter edition with software-defined datacenter capabilities and Windows Admin Center) can be purchased from various existing channels. For Azure services that you can use with Windows Admin Center, you pay with an Azure subscription.

We recommend working with your Microsoft partner or account team for pricing details.

What is the future roadmap for Azure Stack HCI solutions?

We’re excited to hear customer feedback and will take that into account as we prioritize future investments.
Source: Azure

Azure Data Box Family Meets Customers at the Edge

Today I am pleased to announce the general availability of Azure Data Box Edge and the Azure Data Box Gateway. You can get these products today in the Azure Portal.

Compute at the edge

We’ve heard your need to bring Azure compute power closer to you – a trend increasingly referred to as edge computing. Data Box Edge answers that call and is an on-premises anchor point for Azure. Data Box Edge can be racked alongside your existing enterprise hardware or live in non-traditional environments from factory floors to retail aisles. With Data Box Edge, there's no hardware to buy; you sign up and pay as you go just like any other Azure service, and the hardware is included.

This 1U rack-mountable appliance from Microsoft brings you the following:

Local Compute – Run containerized applications at your location. Use these to interact with your local systems or to pre-process your data before it transfers to Azure.
Network Storage Gateway – Automatically transfer data between the local appliance and your Azure Storage account. Data Box Edge caches the hottest data locally and speaks file and object protocols to your on-premises applications.
Azure Machine Learning utilizing an Intel Arria 10 FPGA – Use the on-board Field Programmable Gate Array (FPGA) to accelerate inferencing of your data, then transfer it to the cloud to re-train and improve your models. Learn more about the Azure Machine Learning announcement.
Cloud managed – Easily order your device and manage these capabilities for your fleet from the cloud using the Azure Portal.

Since announcing Preview at Ignite 2018 just a few months ago, it has been amazing to see how our customers across different industries are using Data Box Edge to unlock some innovative scenarios:


Sunrise Technology, a wholly owned division of The Kroger Co., plans to use Data Box Edge to enhance the Retail as a Service (RaaS) platform for Kroger and the retail industry to enable the features announced at NRF 2019: Retail's Big Show, including personalized, never-before-seen shopping experiences like at-shelf product recommendations, guided shopping and more. The live video analytics on Data Box Edge can help store employees identify and address out-of-stocks quickly and enhance their productivity. Such smart experiences will help retailers provide their customers with more personalized, rewarding experiences.

Esri, a leader in location intelligence, is exploring how Data Box Edge can help those responding to disasters in disconnected environments. Data Box Edge will allow teams in the field to collect imagery captured from the air or ground and turn it into actionable information that provides updated maps. The teams in the field can use updated maps to coordinate response efforts even when completely disconnected from the command center. This is critical in improving the response effectiveness in situations like wildfires and hurricanes.

Data Box Gateway – Hardware not required

Data Box Edge comes with a built-in storage gateway. If you don’t need the Data Box Edge hardware or edge compute, then the Data Box Gateway is also available as a standalone virtual appliance that can be deployed anywhere within your infrastructure.

You can provision it in your hypervisor, using either Hyper-V or VMware, and manage it through the Azure Portal. Server message block (SMB) or network file system (NFS) shares will be set up on your local network. Data landing on these shares will automatically upload to your Azure Storage account, supporting Block Blob, Page Blob, or Azure Files. We’ll handle the network retries and optimize network bandwidth for you. Multiple network interfaces mean the appliance can either sit on your local network or in a DMZ, giving your systems access to Azure Storage without having to open network connections to Azure.
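From an application's point of view, the gateway share is just a local path: anything written there is uploaded to the backing storage account by the appliance itself. A minimal sketch, assuming a hypothetical mount point (/mnt/databox-gateway) where the SMB or NFS share has already been mounted:

```python
import shutil
from pathlib import Path

# Assumption: the gateway's SMB/NFS share is mounted locally, e.g. at
# /mnt/databox-gateway on Linux or a mapped drive letter on Windows.
SHARE = Path("/mnt/databox-gateway")

def stage_for_upload(local_file: str, share_root: Path = SHARE) -> Path:
    """Copy a file onto the gateway share; the appliance uploads anything
    landing here to the backing Azure Storage account on its own schedule."""
    dest = share_root / Path(local_file).name
    shutil.copy2(local_file, dest)  # copy2 preserves timestamps
    return dest
```

No retry or bandwidth logic appears in the application because, as noted above, the gateway handles network retries and bandwidth optimization itself.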

Whether you use the storage gateway inside of Data Box Edge or deploy the Data Box Gateway virtual appliance, the storage gateway capabilities are the same.


More solutions from the Data Box family

In addition to Data Box Edge and Data Box Gateway, we also offer three sizes of Data Box for offline data transfer:

Data Box – a ruggedized 100 TB transport appliance
Data Box Disk – a smaller, more nimble transport option with individual 8 TB disks and up to 40 TB per order
Data Box Heavy Preview – a bigger version of Data Box that can scale to 1 PB

All Data Box offline transport products are available to order through the Azure Portal. We ship them to you, you fill them up, and you ship them back to our data center for upload and processing. To make Data Box useful for even more customers, we’re enabling partners to write directly to Data Box with little change to their software via our new REST API feature, Blob Storage on Data Box, which has just reached general availability!

Get started

Thank you for partnering with us on our journey to bring Azure to the edge. We are excited to see how you use these new products to harness the power of edge computing for your business. Here’s how you can get started:

Order Data Box Edge or the Data Box Gateway today via the Azure Portal.
Review server hardware specs on the Data Box Edge datasheet.
Learn more about our family of Azure Data Box products.

Source: Azure