Announcing StorSimple Data Transformation preview

Customers have persistently asked to do more with the data that moves to the cloud via StorSimple: for example, to process it with an Azure service such as Azure Media Services, Azure HDInsight, or Azure Machine Learning. Until now, there was no scalable way to do this. That changes today!

We are excited to announce the private preview of StorSimple Data Transformation, a managed service that converts your data of interest from the StorSimple format to Azure blobs. This in turn opens the data to the rich ecosystem of Azure services.

Using StorSimple Data Transformation, files that you put into your StorSimple on-premises volumes will be converted to Azure blobs. Each file will reside as a blob in your storage account. Given that StorSimple is often used for storing media content, we have also provided the option of converting your files directly into media assets in your Media Services account. Now, with a few clicks, you can set up jobs that move your data in a transparent and seamless way. You can initiate these jobs from the Azure portal, or you can even integrate the APIs in your code (for example, in web jobs or worker roles).
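
Once a job completes, the output is ordinary blob storage, so the rest of the Azure ecosystem can consume it directly. As a minimal sketch (assuming the legacy Azure Storage SDK for Python; the account, key, and container names are placeholders, and the transformation job itself is set up in the portal or via the preview APIs), this is how downstream code might enumerate the transformed blobs:

```python
# Sketch: enumerate blobs produced by a Data Transformation job and hand them
# to downstream processing. Account, key, and container names are placeholders.
from azure.storage.blob import BlockBlobService  # pip install azure-storage

service = BlockBlobService(account_name="mystorageaccount",
                           account_key="<storage-account-key>")
CONTAINER = "storsimple-output"  # hypothetical container the job writes to

# Each transformed file resides as an individual blob in the container.
for blob in service.list_blobs(CONTAINER):
    print(blob.name, blob.properties.content_length)
    # Download locally, or point HDInsight / Media Services / Machine Learning
    # jobs at the blob URL for further processing.
    # service.get_blob_to_path(CONTAINER, blob.name, "/tmp/" + blob.name)
```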

Here is a block diagram of how StorSimple Data Transformation enables this solution:

Our customers are planning on using this service in interesting ways:

A call center customer uses StorSimple to store the audio recordings of all calls. Using StorSimple Data Transformation, they use Azure Media Services to convert calls to text and analyze them for silence duration, sentiment, and operator performance.
A customer who uses StorSimple for storing video surveillance feeds plans to use this service for face recognition and redaction with Azure Media Services.
A customer who stores log files on StorSimple wants to use compute in the cloud to analyze these logs.

If you are a StorSimple customer who wants to put your data to work in Azure, sign up for the preview. If you are a Media Services or HDInsight customer looking to fulfill your on-premises storage needs while using these solutions in Azure, enroll today!

Source: Azure

Introducing: Time picker in Application Insights Analytics

A time picker has been added to Analytics in Azure Application Insights so that you can easily set a time range for your queries. The default range is Last 24 hours, but you can easily change that: Just select a different preset range, or simply add your own “where timestamp…” clause to the query (so if your queries already include a time filter, they’ll work exactly as before).

Why do we need a time picker?

With mountains of data at hand, Analytics users often want to focus on a specific time frame. The new time picker allows you to zoom in and explore interesting events without manually editing your query. With the coming upgrade of the data retention period to a full 90 days, applying a time filter to narrow down the number of records returned will help identify events’ root causes, as well as investigate their consequences. Time filters are also a good way to make your queries more efficient and get faster results. This is why we’ve decided to introduce the new time picker, bringing time filtering to the front and making it as easy as possible.

What does the time picker do?

The time picker, located next to the familiar Go button, allows you to select a time range to apply to your query, instead of manually typing it again and again. The selected time range applies to all the queries in the current tab. The default range is Last 24 hours, but you can easily use a different range: simply select any of the other presets (last 10 minutes, last hour, 2 days, or 7 days). After executing the query, you can see the applied time range on the results bar.

If none of the presets matches your needs, select custom time range to learn how to add your own time filter to queries.

But wait, what about queries that already include a time filter?

No worries. Specifying an explicit time filter in queries remains a very powerful feature. For these cases we added the Set in query mode, selected automatically for queries that include a time filter.
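
If you drive Analytics programmatically, the same principle applies: the query carries its own time filter, so Set in query mode takes effect. Below is a hedged sketch that runs such a query over the Application Insights REST query API from Python; the application ID and API key are placeholders, and the endpoint or API version may differ for your setup:

```python
# Sketch: run an Analytics query that carries its own time filter, so the
# time picker's "Set in query" mode applies. App ID and key are placeholders.
import requests

APP_ID = "<application-id>"   # hypothetical
API_KEY = "<api-key>"         # hypothetical

# Explicit time filter: only requests from the last 7 days.
query = "requests | where timestamp > ago(7d) | summarize count() by bin(timestamp, 1h)"

resp = requests.get(
    f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query",
    headers={"x-api-key": API_KEY},
    params={"query": query},
)
resp.raise_for_status()
print(resp.json())
```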

 

What’s next?

This is of course just the beginning. As more users get acquainted with the time picker, it will evolve to support many more use cases, such as selecting the previous month (the whole calendar month preceding the current one) or specifying exact start and end dates and times.

As always, feel free to send us your questions or feedback by using one of the following channels:
•  Suggest ideas and vote on our UserVoice page
•  Join the conversation in the Application Insights Community forum
•  Try Application Insights Analytics
Source: Azure

Announcing general availability of preview features and new APIs in Azure Search

Today we are announcing the general availability (GA) of several features as well as a new REST API version and .NET SDK that support the new GA features. All GA features and APIs are covered by the standard Azure service level agreement (SLA) and can be used in production workloads.

New GA APIs

Blob Indexer allows you to parse and index text from common file formats such as Office documents, PDF, and HTML. NOTE: CSV and JSON support is still in preview.
Table Indexer enables you to ingest data from Azure Table storage.
Custom Analyzers enable you to take control over the lexical analysis performed on the content of your documents and query terms. For example, custom analyzers enable support for phonetic search. For more information, read more on custom analyzers.
Analyze API allows you to test built-in and custom analyzers to see how an analyzer breaks text into tokens (see the sketch after this list).
ETag Support allows you to safely update index definitions, indexers, and data sources concurrently.
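
As an illustration of the Analyze API, here is a minimal sketch that posts text to an index's analyze endpoint using the GA REST version; the service name, index name, and admin key are placeholders, and the analyzer name assumes the built-in standard Lucene analyzer:

```python
# Sketch: see how an analyzer tokenizes text via the Analyze API.
# Service name, index name, and admin key are placeholders.
import requests

SERVICE = "my-search-service"   # hypothetical service name
INDEX = "hotels"                # hypothetical index name
ADMIN_KEY = "<admin-api-key>"

resp = requests.post(
    f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/analyze",
    params={"api-version": "2016-09-01"},
    headers={"api-key": ADMIN_KEY, "Content-Type": "application/json"},
    json={"text": "The quick brown fox", "analyzer": "standard.lucene"},
)
resp.raise_for_status()
for token in resp.json()["tokens"]:
    print(token["token"], token["startOffset"], token["endOffset"])
```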

New REST API Version (2016-09-01) and .NET SDK

The following REST API and .NET SDKs are GA:

Service REST API version (2016-09-01), which includes all of the GA features noted in the previous section.
A .NET SDK equivalent of the Service REST API (2016-09-01).
A .NET SDK for management operations (Microsoft.Azure.Management.Search), which is the first .NET SDK for Search service and api-key management.

The existing preview API (2015-02-28-Preview) is still active and continues to support existing preview features such as moreLikeThis and CSV and JSON support in the Blob Indexer. We look forward to your continued feedback on these and other new features as we work to bring more features to GA.

Learn More

Learn more about the new REST API version, including steps to upgrade from an older version of the REST API to the GA version. Download the GA .NET SDK.
Source: Azure

Transitioning your StorSimple Virtual Array to the new Azure portal

Today we are announcing the availability of the StorSimple Virtual Device Series in the new Azure portal. This release features significant improvements in the user experience. Our customers can now use the new Azure portal to manage the StorSimple Virtual Array configured as a NAS (SMB) or a SAN (iSCSI) in a remote office/branch office.

If you are using the StorSimple Virtual Device Series, you will be seamlessly transitioned to the new Azure portal with no downtime. We'll reach out to you via email with the specific dates of the transition. After the transition is complete, you will no longer be able to manage your transitioned virtual array from the classic Azure portal.

If you are using StorSimple Physical Device Series, you can continue to manage your devices via the classic Azure portal.

Learn how to use the new Azure portal in just a few steps as detailed below.

Navigate the new Azure portal

Everything about the StorSimple Virtual Device Series experience in the new Azure portal is designed to be easy. In the new Azure portal, you will find your service as StorSimple Device Manager.

The Quick start gives a concise summary of how to set up a new virtual array. It is now available as an option in the left pane of your StorSimple Device Manager blade.

The StorSimple Device Manager service summary blade is redesigned for simplicity. Use Overview in the left pane to navigate to your service summary.

Click on Devices in your summary blade to navigate to all the devices registered to your service. For specific monitoring requirements, you can even customize your dashboards.

Click on a device to go to the Device summary blade. Use the commands in the top menu bar to provision a new share, take a backup, fail over, deactivate, or delete a device. You can also right-click and use the context menu to perform the same operations.

The Jobs, Alerts, Backup catalog, and Device configuration blades are all redesigned for ease of access.

For more information, go to the StorSimple product documentation. Visit the StorSimple MSDN forum to find answers, ask questions, and connect with the StorSimple community. Your feedback is important to us, so send your feedback or feature requests using StorSimple UserVoice. And don't worry: if you need any assistance, Microsoft Support is there to help you along the way!
Source: Azure

From “A PC on every desktop” to “Deep Learning in every software”

Deep learning is behind many recent breakthroughs in Artificial Intelligence, including speech recognition, language understanding, and computer vision. At Microsoft, it is changing the customer experience in many of our applications and services, including Cortana, Bing, Office 365, SwiftKey, Skype Translate, Dynamics 365, and HoloLens. Deep learning-based language translation in Skype was recently named one of the 7 greatest software innovations of the year by Popular Science, and the technology helped us achieve human-level parity in conversational speech recognition. Deep learning is now a core feature of development platforms such as the Microsoft Cognitive Toolkit, Cortana Intelligence Suite, Microsoft Cognitive Services, Azure Machine Learning, Bot Framework, and the Azure Bot Service. I believe that the applications of this technology are so far-reaching that “Deep Learning in every software” will be a reality within this decade.

We're working very hard to empower developers with AI and deep learning so that they can make smarter products and solve some of the most challenging computing tasks. By vigorously improving our algorithms and infrastructure, collaborating closely with partners like NVIDIA, OpenAI, and others, and harnessing the power of GPU-accelerated systems, we're making Microsoft Azure the fastest, most versatile AI platform: a truly intelligent cloud.

Production-Ready Deep Learning Toolkit for Anyone

The Microsoft Cognitive Toolkit (formerly CNTK) is our open-source, cross-platform toolkit for learning and evaluating deep neural networks. The Cognitive Toolkit expresses arbitrary neural networks by composing simple building blocks into complex computational networks, supporting all relevant network types and applications. With state-of-the-art accuracy and efficiency, it scales to multi-GPU/multi-server environments. According to both internal and external benchmarks, the Cognitive Toolkit continues to outperform other deep learning frameworks in most tests, and unsurprisingly, the latest version is faster than previous releases, especially when working on very large data sets and on Pascal GPUs from NVIDIA. That's true for single-GPU performance, but what really matters is that the Cognitive Toolkit can already scale up to a massive number of GPUs. In the latest release, we've extended the Cognitive Toolkit to natively support Python in addition to C++. Furthermore, the Cognitive Toolkit now allows developers to use reinforcement learning to train their models. Finally, the Cognitive Toolkit isn't bound to the cloud in any way: you can train models in the cloud but run them on premises or with other hosting providers. Our goal is to empower anyone to take advantage of this powerful technology.
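
To give a flavor of the new Python support, here is a minimal sketch that trains a tiny binary classifier with the Cognitive Toolkit's Python API; exact function names vary slightly across Toolkit releases, and the data here is a toy stand-in:

```python
# Sketch: a minimal binary classifier with the Cognitive Toolkit Python API.
import numpy as np
import cntk as C

x = C.input_variable(2)   # two input features
y = C.input_variable(1)   # binary label
model = C.layers.Dense(1, activation=C.sigmoid)(x)

loss = C.binary_cross_entropy(model, y)
lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
trainer = C.Trainer(model, (loss, loss), [C.sgd(model.parameters, lr)])

# Toy data: the label is 1 when the first feature exceeds the second.
features = np.random.rand(256, 2).astype(np.float32)
labels = (features[:, 0] > features[:, 1]).astype(np.float32).reshape(-1, 1)

for _ in range(100):
    trainer.train_minibatch({x: features, y: labels})
print("final loss:", trainer.previous_minibatch_loss_average)
```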

To quickly get up to speed on the Toolkit, we've published Azure Notebooks with numerous tutorials, and we've also assembled a DNN Model Gallery with dozens of code samples, recipes, and tutorials across scenarios working with a variety of datasets: images, numeric, speech, and text.

What Others Are Saying

In the “Benchmarking State-of-the-Art Deep Learning Software Tools” paper published in September 2016, academic researchers have run a comparative study of the state-of-the-art GPU-accelerated deep learning software tools, including Caffe, Cognitive Toolkit (CNTK), TensorFlow, and Torch. They’ve benchmarked the running performance of these tools with three popular types of neural networks on two CPU platforms and three GPU platforms. Our Cognitive Toolkit outperformed other deep learning toolkits on nearly every workload.

Furthermore, NVIDIA recently ran a benchmark comparing the popular deep learning toolkits on its latest hardware. The results show that the Cognitive Toolkit trains and evaluates deep learning algorithms faster than other available toolkits, scaling efficiently in a range of environments, from a CPU, to GPUs, to multiple machines, while maintaining accuracy. Specifically, it's 1.7 times faster than our previous release and 3x faster than TensorFlow on Pascal GPUs (as presented at the SuperComputing'16 conference).

End users of deep learning software tools can use these benchmarking results as a guide to selecting appropriate hardware platforms and software tools, while for developers of such tools, the in-depth analysis points out possible directions for further performance optimization.

Real-world Deep Learning Workloads

We at Microsoft use Deep Learning and the Cognitive Toolkit in many of our internal services, ranging from digital agents to core infrastructure in Azure.

1. Agents (Cortana): Cortana is a digital agent that knows who you are and knows your work and life preferences across all your devices. Cortana has more than 133 million users and has intelligently answered more than 12 billion questions. From speech recognition to computer vision, these capabilities in Cortana are powered by deep learning and the Cognitive Toolkit. We have recently made a major breakthrough in speech recognition, creating a technology that recognizes the words in a conversation with the same or fewer errors than professional transcriptionists. The researchers reported a word error rate (WER) of 5.9 percent, down from 6.3 percent: the lowest error rate ever recorded against the industry-standard Switchboard speech recognition task. Reaching human parity using deep learning is a truly historic achievement.

Our approach to image recognition also placed first in several major categories of the ImageNet and Microsoft Common Objects in Context challenges. The DNNs built with our tools won first place in all three categories we competed in: classification, localization, and detection. The system won by a strong margin because we were able to accurately train extremely deep neural nets of 152 layers (far more than in the past) using a new “residual learning” principle. Residual learning reformulates the learning procedure and redirects the information flow in deep neural networks. That helped solve the accuracy problem that has traditionally dogged attempts to build extremely deep neural networks.

2. Applications: Our applications, from Office 365, Outlook, PowerPoint, and Word to Dynamics 365, can use deep learning to provide new customer experiences. One excellent example of a deep learning application is the bot used by Microsoft Customer Support and Services. Using deep neural nets and the Cognitive Toolkit, it can intelligently understand the problems a customer is asking about and recommend the best solution to resolve them. The bot provides a quick self-service experience for many common customer problems and helps our technical staff focus on the harder and more challenging customer issues.

Another example of an application using deep learning is the Connected Drone application built for powerline inspection by one of our customers, eSmart Systems (to see the Connected Drone in action, please watch this video). eSmart Systems began developing the Connected Drone out of a strong conviction that drones combined with cloud intelligence could bring great efficiencies to the power industry. The objective of the Connected Drone is to support and automate the inspection and monitoring of power grid infrastructure, replacing the expensive, risky, and extremely time-consuming inspections currently performed by ground crews and helicopters. To do this, they use deep learning to analyze video feeds streamed from the drones. Their analytics software recognizes individual objects, such as insulators on power poles, and directly links the new information with the component registry, so that inspectors can quickly become aware of potential problems. eSmart applies a range of deep learning technologies to analyze data from the Connected Drone, from the very deep Faster R-CNN to Single Shot MultiBox Detectors and more.

3. Cloud Services (Cortana Intelligence Suite): On Azure, we offer a suite for machine learning and advanced analytics called the Cortana Intelligence Suite, which includes Cognitive Services (Vision, Speech, Language, Knowledge, Search, and more), Bot Framework, Azure Machine Learning, Azure Data Lake, Azure SQL Data Warehouse, and Power BI. You can use these services along with the Cognitive Toolkit or any other deep learning framework of your choice to deploy intelligent applications. For instance, you can now massively parallelize scoring using a pre-trained DNN machine learning model on an HDInsight Apache Spark cluster in Azure. We are seeing a growing number of scenarios that involve scoring pre-trained DNNs on a large number of images, such as our customer Liebherr, which runs DNNs to visually recognize objects inside a refrigerator. Developers can implement such a processing architecture with just a few steps (see instructions here).
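
The scoring pattern itself is simple: load the pre-trained model once per worker, then map the scoring function over a distributed collection of inputs. A hedged PySpark sketch follows; load_model and score_image are hypothetical stand-ins for your framework's model-loading and inference calls, and the paths are placeholders:

```python
# Sketch: massively parallel scoring of images with a pre-trained DNN on Spark.
# load_model and score_image are hypothetical stand-ins for your framework's
# own calls; the wasb:// paths are placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="dnn-batch-scoring")

# One image path per line in the manifest.
image_paths = sc.textFile("wasb:///images/manifest.txt")

def score_partition(paths):
    model = load_model("/models/pretrained.dnn")  # load once per partition
    for path in paths:
        yield (path, score_image(model, path))

image_paths.mapPartitions(score_partition).saveAsTextFile("wasb:///images/scores")
```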

A typical large-scale image scoring scenario may require very high I/O throughput and/or large file storage capacity, for which Azure Data Lake Store (ADLS) provides high-performance, scalable analytical storage. Furthermore, ADLS applies the data schema on read, which means the user need not worry about the schema until the data is needed. From the user's perspective, ADLS functions like any other HDFS storage account through the supplied HDFS connector. Training can take place on an Azure N-Series NC24 GPU-enabled virtual machine, or using recipes from Azure Batch Shipyard, which allows training of DNNs with bare-metal GPU hardware acceleration in the public cloud using as many as four NVIDIA Tesla K80 GPUs. For scoring, one can use an HDInsight Spark cluster or Azure Data Lake Analytics to massively parallelize the scoring of a large collection of images with the rxExec function in Microsoft R Server (MRS) by distributing the workload across the worker nodes. The scoring workload is orchestrated from a single instance of MRS, and each worker node can read and write data to ADLS independently, in parallel.

SQL Server, our premier database engine, is “becoming deep” as well. This is now possible for the first time with R and machine learning built into SQL Server. By pushing deep learning models inside SQL Server, our customers get throughput, parallelism, security, reliability, compliance certifications, and manageability, all in one. It's a big win for data scientists and developers: you don't have to separately build the management layer for operational deployment of ML models. Furthermore, just as data in databases can be shared across multiple applications, you can now share the deep learning models. Models and intelligence become “yet another type of data,” managed by SQL Server 2016. With these capabilities, developers can now build a new breed of applications that marry the latest transaction-processing advancements in databases with deep learning.
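
As a sketch of what running models next to the data looks like, SQL Server 2016's sp_execute_external_script executes R inside the database engine; here it is invoked from Python via pyodbc, with the connection string and table as placeholders (the external scripts feature must be enabled on the server):

```python
# Sketch: run R in-database with SQL Server 2016's external script support.
# Connection string and table name are placeholders.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 13 for SQL Server};"
                      "SERVER=myserver;DATABASE=sales;UID=user;PWD=<password>")
cursor = conn.cursor()

# The R script runs inside the database engine, next to the data. Here it
# simply echoes the input rows; a real model would add a score column.
cursor.execute("""
    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'OutputDataSet <- InputDataSet',
        @input_data_1 = N'SELECT TOP 10 * FROM dbo.Sales'
""")
for row in cursor.fetchall():
    print(row)
```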

4. Infrastructure (Azure): Deep learning requires a new breed of high-performance infrastructure able to support the computationally intensive nature of deep learning training. Azure now enables these scenarios with its N-Series virtual machines, powered by NVIDIA Tesla K80 GPUs that are best in class for single- and double-precision workloads in the public cloud today. These GPUs are exposed via a hardware pass-through mechanism called Discrete Device Assignment that allows us to provide near bare-metal performance. Additionally, as data grows for these workloads, data scientists need to distribute training not just across multiple GPUs in a single server, but across GPUs in many nodes. To enable distributed learning across tens or hundreds of GPUs, Azure has invested in high-end networking infrastructure for the N-Series using a Mellanox InfiniBand fabric, which allows for high-bandwidth communication between VMs with less than 2 microseconds of latency. This networking capability allows libraries such as Microsoft's own Cognitive Toolkit (CNTK) to use MPI for communication between nodes and efficiently train with a larger number of layers and great performance.

We are also working with NVIDIA on a best-in-class roadmap for Azure, with the current N-Series as the first iteration. These virtual machines are currently in preview, and we recently announced general availability of the offering starting December 1.

It is easy to get started with deep learning on Azure. The Data Science Virtual Machine (DSVM) is available in the Azure Marketplace and comes pre-loaded with a range of deep learning frameworks and tools for Linux and Windows. To easily run many training jobs in parallel or launch a distributed job across more than one server, Azure Batch “Shipyard” templates are available for the top frameworks. Shipyard takes care of configuring the GPU and InfiniBand drivers, and uses Docker containers to set up your software environment.

Lastly, our team of engineers and researchers has created a system that uses a reprogrammable computer chip, called a field-programmable gate array (FPGA), to accelerate Bing and Azure. Utilizing FPGA chips, we can now write deep learning algorithms directly onto the hardware, instead of using potentially less efficient software as the middleman. What's more, an FPGA can be reprogrammed at a moment's notice to respond to new advances in AI and deep learning, or to meet another type of unexpected need in a datacenter. Traditionally, engineers might wait two years or longer for hardware with different specifications to be designed and deployed. This is a moonshot project that has succeeded, and we are now bringing it to our customers.

Join Us in Shaping the Future of AI

Our focus on innovation in Deep Learning is across the entire stack of infrastructure, development tools, PaaS services and end user applications. Here are a few of the benefits our products bring:

Greater versatility: The Cognitive Toolkit lets customers use one framework to train models on premises with the NVIDIA DGX-1 or with NVIDIA GPU-based systems, and then run those models in the cloud on Azure. This scalable, hybrid approach lets enterprises rapidly prototype and deploy intelligent features.
Faster performance: When compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on NVIDIA GPUs available in Azure N-Series servers and on premises. For example, NVIDIA DGX-1 with Pascal and NVLink interconnect technology is 170x faster than CPU servers with the Cognitive Toolkit.
Wider availability: Azure N-Series virtual machines powered by NVIDIA GPUs are currently in preview to Azure customers, and will be generally available in December. Azure GPUs can be used to accelerate both training and model evaluation. With thousands of customers already part of the preview, businesses of all sizes are already running workloads on Tesla GPUs in Azure N-Series VMs.
Native integration with the entire data stack: We strongly believe in pushing intelligence close to where the data lives. While a few years ago running deep learning inside a database engine or a big data engine might have seemed like science fiction, it has now become real. You can run deep learning models on massive amounts of data, e.g., images, videos, speech, and text, and you can do it in bulk. This is the sort of capability brought to you by Azure Data Lake, HDInsight, and SQL Server. You can now join the results of deep learning with any other type of data you have and do incredibly powerful analytics and intelligence over it (which we now call “Big Cognition”). It's not just extracting one piece of cognitive information at a time, but rather joining and integrating all the extracted cognitive data with other types of data, so you can create seemingly magical “know-it-all” cognitive applications.

Let me invite all developers to come and join us in this exciting journey into AI applications.

@josephsirosh
Source: Azure

Digital Transformation with SAP HANA on Azure Large Instances

At Ignite 2016, Jason Zander announced a plethora of Azure services and features. One that has people excited is the announcement that SAP HANA on Azure (Large Instances) is generally available. For those who might have missed it in the blitz, here are the key things you should know:

Transform, migrate, innovate at your own pace: Azure has a purpose-built approach to provide organizations cloud benefits for their SAP estate, both traditional deployments as well as SAP HANA OLAP and OLTP production deployments. Read more about the SAP certifications. You will always find something new.

No Compromises: The approach marries the benefits of bare-metal SKUs that are unencumbered by virtualization, able to scale and provide superior, consistent performance, with the availability, business continuity, and development and operational agility of the cloud.

The proof is in the pudding. After supporting the largest-scale SAP HANA deployments of up to 3 TB in October, we are now announcing general availability of an even larger scale, 4 TB scale-up and 32 TB scale-out, on December 1, 2016, proving that we will continue at this blistering pace. We are the first hyper-scale cloud vendor delivering Intel Broadwell-based solutions, scaling to 192 threads. For more information, read more about scale.

SAP HANA Large Instances offer an availability SLA of 99.99% for an HA pair, the highest among all hyper-scale public cloud vendors. These instances provide built-in infrastructure support for backup and restore, high availability (HA) and disaster recovery (DR) scenarios. Additionally, these instances have integrated support with partners, including SUSE Linux Enterprise, Red Hat Enterprise Linux and SAP, so you can confidently bring your production workloads to Azure.

Customer Confidence: We were pleasantly surprised at how aggressively customers like Coats plc are taking advantage of this and realizing results:

By moving SAP HANA to Azure we have been able to speed up planning cycles and accelerate delivery of finished goods to our customers.  We have also activated real time reporting to monitor and improve process productivity across our global supply chain.

Richard Cammish, Global CIO, Coats plc

The potential for using data in smarter ways to operate more efficiently, save money, and satisfy customers is immense.  Azure gives us integrated tools that let us fully interrogate and exploit our data.

Harold Groothedde, Chief Technology Officer, Coats plc

SAP Partnership: After seeing Satya Nadella and Bill McDermott on stage at SAPPHIRE 2016, one needs no more proof that this is a decades-long strategic partnership that is critical to enterprises. But more proof arrived within 60 days: SAP chose Azure as the cloud to run its fastest growing and most exciting SAP HANA-based SaaS platform, the SuccessFactors HCM Suite.

Digital Transformation with Azure and SAP HANA: It is important not to lose sight of why all of this is so crucial in the first place. In a world defined by Uber, Netflix, and online retail, it is clear to CEOs that their survival depends on digital transformation. And for the 200,000 organizations that manage their LOB applications with SAP, the ability to transform is gated by traversing two journeys, with the cloud and SAP HANA as the destinations. And since the two are inextricably related, they need a single strategic partner: a premier public cloud vendor with a long-term relationship with SAP that understands how to work with enterprises.

After talking to a few CIOs and service managers, it was clear to me that they are faced with a series of conflicting demands, where neither choice made at the cost of the other is palatable. The diagram below is my consolidated view of that dilemma.

Watch this 8-minute video that provides more detail on how Microsoft Azure approaches this problem in a unique way, so that you don't have to make compromises, and why you should contact your account team to set up a design workshop. If you are not sure who to contact, visit request information and someone from Microsoft will reach out. To learn more technical details, visit Getting Started with SAP on Azure.

Source: Azure

Managing StorSimple virtual arrays in the new Azure portal

We're happy to announce that management of the StorSimple Virtual Device Series is now available in the new Azure portal. You can use the StorSimple extension in the new portal to create Azure Resource Manager-based StorSimple Device Managers to manage your virtual arrays.

What's new

Enhanced user experience and improved navigation
Optimized and multiple workflows for efficient task completion
Integrated Support and Diagnostics experiences
Support for built-in Azure roles and the ability to manage access through custom roles

How to get started

You can create a new StorSimple Device Manager in the Azure portal to manage your virtual arrays by navigating to + NEW –> Storage –> StorSimple Virtual Device Series. You can register one or more virtual arrays to this newly created StorSimple Device Manager by navigating to the specific Manager –> Resource menu –> Quick start to download and register a new virtual array.

Additionally, by navigating to Browse –> 'Filter' on StorSimple Device Managers, you will be able to:

View and manage all StorSimple Device Managers created in the new portal.
View all StorSimple Device Managers created in the classic portal. However, you will continue to manage these resources through the classic portal until we migrate them to the new portal. More information on migration to the new portal is covered later in this article.

Managing your StorSimple virtual arrays in the new Azure portal

The enhanced user experience makes it easy to manage your virtual arrays within the new Azure portal. The resource menu contains all the options to manage, monitor, and troubleshoot your virtual arrays. Some of the frequently performed operations on the virtual array are easily accessible through the top-level command bar.

The StorSimple service summary blade provides aggregated information across the virtual arrays in a particular resource. This blade is designed to give you a quick summary of usage, alerts, and so on, and to serve as a starting point for deep dives into further details, both from the tiles on the blade and from the resource menu on the left.

Additionally, you can diagnose and potentially resolve common issues with your virtual arrays through the troubleshooting content that is available right within the Azure portal. You can also log a support ticket to request assistance from Microsoft Support. To learn more about how to manage your StorSimple virtual arrays in the portal, please refer to the product documentation.

Migration of StorSimple Virtual Device Series resources from the classic portal

Your existing StorSimple Virtual Device Series resources in the classic portal will be migrated to the new Azure portal in the coming weeks. We will reach out to you with the date as well as the details of the migration. Stay tuned! Please note this migration will be seamless, and there will be no downtime for your virtual arrays. Once the migration is complete:

All your StorSimple Virtual Device resources in the classic portal will be managed through the new Azure portal.
StorSimple Virtual Device Series management will no longer be available in the classic portal.
The StorSimple Physical Device Series will continue to be managed via the classic portal. You will be able to view your StorSimple Physical Device Series resources in the new portal, but you will continue to manage them from the classic portal. We will keep you posted about the transition of the physical device series to the new Azure portal.
For more information on the new portal, refer to the blog post that compares and contrasts the user experience in the new Azure portal and the classic portal.
Source: Azure

Microsoft Azure Storage Explorer: November update and summer recap

One year ago we released the very first version of Microsoft Azure Storage Explorer. At the beginning we only supported blobs on Mac OS and Windows. Since then, we've added the ability to interact with queues, tables, and file shares. We started shipping for Linux, and we've kept adding features to support the capabilities of Storage Accounts.

In this post, we first want to thank our users for your amazing support! We appreciate all the feedback we get: your praise encourages us, your frustrations give us problems to solve, and your suggestions help steer us in the right direction. The developers behind Storage Explorer and I have been using this feedback to implement features based on what you liked, what needed improvement, and what you felt was missing in the product.

Today, we'll elaborate on these features, including what's new in the November update (0.8.6) and what we've shipped since our last post.

November release downloads: [Windows] [Mac OS] [Linux]

New in November:

Quick access to resources
Tabs
Improved upload/download speeds and performance
High contrast theme support
Return of scoped search
"Create new folder" for blobs
Edit blob and file properties
Fix for screen freeze bug

Major features from July-October:

Grouping by subscriptions and local resources
Ability to sign off from accounts
Rename for blobs and files
Rename for blob containers, queues, tables, and file shares
Deep search
Improved table query experience
Ability to save table queries
CORS Settings
Managing blob leases
Direct links for sharing
Configuration of proxy settings
UX improvements

November features

For this release we focused on the features that most help with productivity when working across multiple Storage Accounts and services. With this in mind we implemented quick access to resources, the ability to open multiple services in tabs, and vastly improved the upload and download speeds of blobs.

Quick Access

The top of the tree view now contains a "Quick Access" section, which displays resources you want to access frequently. You can add any Storage Accounts, blob containers, queues, tables, or file shares to the Quick Access list. To add resources to this list, right-click on the resource you want to access and select "Add to Quick Access".

Tabs

This has long been requested in feedback, so we're pleased to share that you can now have multiple tabs! You can open any blob container, queue, table, or file share in a tab by double-clicking it. Single-clicking a resource will open it in a temporary tab, the contents of which change depending on which service you have single-clicked in the left-hand tree view. You can make the temporary tab permanent by clicking on the tab name. This emulates patterns set by Visual Studio Code.

Upload/download performance improvements

On the performance front, we've made major improvements to the upload and download speeds of blobs. The new speeds are approximately 10x faster than our previous releases. This improvement primarily impacts large files such as VHDs, but also benefits the upload and download of multiple files.

Folders and property editing

Before this release, you could only see the properties of a specific file or blob. With this release, you have the ability to modify the values of editable properties, such as cache control or content type. You can right-click on a blob or file to see and edit its properties.

We've also added support for creating empty "virtual" folders in blob containers. Now you can create folders before uploading any blobs to them, rather than only being able to create them in the "Upload blob" dialog.

Usability and reliability

Last but not least, we worked on features and bug fixes to improve overall usability and reliability. First, we've brought back the ability to search within a Storage Account or service. We know a lot of you missed this feature, so now you have two ways of searching your resources:

Global search: Use the search box to search for any Storage Accounts or services
Scoped search: Use the magnifying glass to search within that node of the tree view

We also improved usability by adding support for themes in Storage Explorer. There are four themes available: light (default), dark, and two high-contrast themes. You can change the theme by going to the Edit menu and selecting "Themes."

Lastly, we fixed a screen freeze issue that had been impacting Storage Explorer when starting the app or using the Windows + arrow keys to move it around the screen. Based on our testing we believe this issue is fully fixed, but if you run into it please do let us know.

Summer features

After completing support for the full set of Storage services, we pivoted to improving the experience of connecting to your Storage Accounts and managing their content. This allowed us to open up our backlog to work on the major features we shipped in November.

Account management

One of the main areas we wanted to improve was the display of Storage Accounts in the left-hand tree view. The tree now shows Storage Accounts grouped by subscription, as well as a separate section for non-subscription resources. This "(Local and Attached)" section lists the local development storage (on Windows) and any Storage Accounts you've attached via either account name and key or SAS URI. It also contains a "SAS-Attached Services" node, which displays all services (such as blob containers) that you've added with SAS.
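
If you want to hand someone a SAS URI to attach this way, you can generate one programmatically. A minimal sketch with the legacy Azure Storage SDK for Python, with the account, key, and container names as placeholders:

```python
# Sketch: generate a read/list container SAS URI that can be attached in
# Storage Explorer. Account, key, and container names are placeholders.
from datetime import datetime, timedelta
from azure.storage.blob import BlockBlobService, ContainerPermissions

service = BlockBlobService(account_name="mystorageaccount",
                           account_key="<storage-account-key>")

sas_token = service.generate_container_shared_access_signature(
    "mycontainer",
    permission=ContainerPermissions(read=True, list=True),
    expiry=datetime.utcnow() + timedelta(days=7),
)
print("https://mystorageaccount.blob.core.windows.net/mycontainer?" + sas_token)
```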

If you're behind a firewall, you've likely had issues with signing into Storage Explorer. To help mitigate this, we've added the ability to specify proxy settings. To modify Storage Explorer proxy settings, select the "Configure proxy settings…" icon in the left-side toolbar.

Lastly, we've also modified the experience when you first sign in so that all the subscriptions you have under the Azure account are displayed. You can modify this behavior in the account settings pane, either by filtering subscriptions under an account or by selecting the "Remove" button to completely sign off from an account.

Copying and renaming

In the summer months we also added the ability to copy and rename blob containers, queues, tables, and file shares. You can also copy and rename blobs, blob folders, files, and file directories.

To copy and rename, we first create a copy of all the selected resources. In the case of a rename, we then delete the originals once the copy operation has completed successfully.

It's possible to copy within an account as well as from one storage account to another, regardless of how you're connected to it. The copy is done on the server side, so it's a fast operation that does not require disk space on your machine.
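
Under the hood this maps to the Storage service's server-side copy operation. A sketch of the same rename pattern (copy, wait, delete) with the legacy Python SDK, with all names as placeholders:

```python
# Sketch: "rename" a blob the way Storage Explorer does: a server-side copy
# followed by deleting the original. All names are placeholders.
import time
from azure.storage.blob import BlockBlobService

service = BlockBlobService(account_name="mystorageaccount",
                           account_key="<storage-account-key>")

src_url = service.make_blob_url("mycontainer", "old-name.vhd")
service.copy_blob("mycontainer", "new-name.vhd", src_url)

# The copy runs on the service side; wait for it, then remove the original.
while (service.get_blob_properties("mycontainer", "new-name.vhd")
              .properties.copy.status == "pending"):
    time.sleep(1)
service.delete_blob("mycontainer", "old-name.vhd")
```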

CORS, leases, and sharing

We've also improved the way you manage access to and rules for your storage resources. At the storage account level, you can now add, edit, and delete CORS rules for each of the services. You can do this by right-clicking on the node for blob containers, queues, tables, or file shares, and selecting the "Configure CORS Settings…" option.

You can also control the actions you can take on blobs by creating and breaking leases for blobs and blob containers. Blobs with leases will be marked by a "lock" icon beside the blob, while blob containers with leases will have the word "(Locked)" displayed next to the blob container name. To manage leases, you can right-click on the resource for which you want to break or acquire a lease.
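
The lock icons correspond to the Blob service's standard lease operations, which you can also drive from code. A minimal sketch with the legacy Python SDK, with names as placeholders:

```python
# Sketch: acquire and break a blob lease; a leased blob shows a lock icon in
# Storage Explorer. Container and blob names are placeholders.
from azure.storage.blob import BlockBlobService

service = BlockBlobService(account_name="mystorageaccount",
                           account_key="<storage-account-key>")

# Acquire an infinite lease; subsequent writes must present the lease ID.
lease_id = service.acquire_blob_lease("mycontainer", "data.csv")

# ...later, free the blob for other writers by breaking the lease.
service.break_blob_lease("mycontainer", "data.csv")
```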

We also added the ability to share direct links to the resources in your subscription. This allows another person (who also has access to your subscription) to click a link that opens Storage Explorer and navigates to the specific resource you shared. To share a direct link, right-click on the Storage Account, blob container, queue, table, or file share you want the other person to access and select "Get Direct Link…".

Writing and saving queries

Lastly, we made significant improvements to the table querying functionality. The new query builder interface allows you to easily query your tables without having to know OData. With the query builder you can create AND/OR statements and group them together to search on any field in your table. You can still switch to the OData mode by selecting the "Text Editor" button at the top of the query toolbar.
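
The OData text the builder produces is the same filter syntax the Table service accepts programmatically, so queries you prototype in Storage Explorer carry over to code. A sketch with the legacy Azure Storage SDK for Python; the account, table, and filter values are placeholders:

```python
# Sketch: issue the same OData filter the query builder produces, via the
# Table service SDK. Account, table, and filter values are placeholders.
from azure.storage.table import TableService

table_service = TableService(account_name="mystorageaccount",
                             account_key="<storage-account-key>")

odata_filter = "PartitionKey eq 'sensors' and Temperature gt 75"
for entity in table_service.query_entities("readings", filter=odata_filter):
    print(entity.RowKey, entity.Temperature)
```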

Additionally, you have the ability to save and load any queries you have created, regardless of whether you use the builder or the editor to construct your queries.

Summary

Although we've delivered a lot of big features, we know there are still gaps. Blob snapshots, stats and counts about the contents of your services, and support for Azure Stack are among the features for which we've heard a lot of requests. If you notice anything missing from that list, or have any other comments, issues, or suggestions, you can send us feedback directly from Storage Explorer.

Thanks for making our first year a fantastic one!

– The Storage Explorer Team
Source: Azure

Leaderboard to recognize the most valuable Database Systems contributors on MSDN

Microsoft has long relied on members of the developer and IT Professional community to help other community members in need. This is especially true of SQL Server and Azure SQL Database, where our community members help by answering over 2,000 new questions every month on the MSDN forums alone. In order to recognize and honor community members who are most active on forums, we are going to publish monthly leaderboards. This is a pilot effort for Database Systems, including SQL Server, Azure SQL Database, Azure SQL Data Warehouse, and SQL Server VMs on Azure questions on MSDN.

Members get points for providing accepted answers fast. The following will be the points hierarchy (in decreasing order of points):
The Leaderboard will be published once a month, and answers given between the beginning and end of the respective calendar month will be scored. For example, answers given between 00:01 September 1, 2016 and 23:59 September 30, 2016 are considered for the September 2016 ranking. The leaderboard is shared among all members of the community, including Microsoft employees and affiliates. The All Database Systems leaderboard is based on all forums related to Microsoft's Database Systems products and services. The Cloud Database leaderboard is based on forums related to Azure SQL Database, Azure SQL Data Warehouse, and SQL on Azure VM.
 
The leaderboard will soon operate from its own webpage. We thought it would be a good idea to publish the October 2016 Leaderboard as a pilot, while the website gets ready. 

Congratulations to the members who made it to the October 2016 leaderboard!

For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com

FAQs

1. What if there are multiple answers? What if there are multiple answers accepted?
All answers will get the same points.
 
2. What if I give the answer in a particular month and it gets accepted in another month?
You get points for the unaccepted answer in the month you gave it, and the additional acceptance points in the month it was accepted.
 
3. How do I find my own score on the leaderboard?
The leaderboard will get its own website soon where all members of the community can find their monthly scores. For the pilot, we plan to publish the names of the Top 10 contributors.

4. Which questions count within these forums?
All questions within the forums related to SQL Server, Azure SQL Database, Azure SQL Data Warehouse, and SQL Server VMs on Azure will have equal weighting.
 
5. MSDN already publishes a leaderboard. How is this different?
We find that Database Systems related questions require a specific and unique skillset compared to all other MSDN forums. This is an effort by the product team to recognize people who make use of this special skill set for the greater good of the community.
Source: Azure

Azure Site Recovery now supports Windows Server 2016

On September 26, 2016, at the Ignite conference in Atlanta, Microsoft launched the newest release of the server operating system: Windows Server 2016. It is a cloud-ready OS that can be used to run traditional applications and datacenter infrastructure, and at the same time delivers innovation to help customers transition workloads to a more secure, efficient, and agile cloud model.

Azure Site Recovery sits at the intersection of this OS and improved features for your disaster recovery needs. We are excited to announce Azure Site Recovery's support for Windows Server 2016. Customers can now use Azure Site Recovery to replicate, protect, or migrate their Hyper-V virtual machines hosted on Windows Server 2016 to Azure or to a secondary site.

This week, we are announcing Azure Site Recovery's support for protection and replication of virtual machines deployed on Hyper-V Server 2016 in the following configuration. We will continue our journey to enhance support for this cloud OS platform in the coming months.

In addition to support for recovering workloads hosted on Windows Server 2016 to a secondary datacenter or to Azure, Azure Site Recovery has some significant features that customers can take advantage of:

A guided Getting Started experience which removes the complexity of setting up DR and makes it easier to protect and replicate your workloads.
Recovery Plans and Azure Automation to enable a one-click orchestration of your DR plans.
Ability to perform DR drills (test failovers) to confirm readiness, which guarantees zero data loss.
Replicate your data once, and use it to perform disaster recovery, migrate workloads, or create DevTest environments in Azure.
Coexistence of classic and ARM deployment models: Azure originally provided only the classic deployment model. With the ARM-based deployment model, you can deploy, manage, and monitor all the services for your solution as a group, rather than handling these services individually. You can choose either deployment model for your failover VMs in Azure Site Recovery.
RPO and RTO objectives: All DR actions are assured to be accurate and consistent, and are designed to help you meet your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) goals.

Organizations can now use Windows Server 2016 combined with the enhanced capabilities of Azure Site Recovery to tackle operational and security challenges and achieve cloud-integrated disaster recovery or cloud migration. We envision that this will help you optimize your IT resources to strategize and innovate solutions that drive business success.

You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery. You can use the powerful replication capabilities of Site Recovery at no charge for 31 days for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the ASR UserVoice to let us know what features you want.
Source: Azure