Announcing Azure Analysis Services general availability

Today at the Data Amp event, we are announcing the general availability of Microsoft Azure Analysis Services, the latest addition to our data platform in the cloud. Based on the proven analytics engine in SQL Server Analysis Services, Azure Analysis Services is an enterprise-grade OLAP engine and BI modeling platform, offered as a fully managed platform-as-a-service (PaaS). Azure Analysis Services enables developers and BI professionals to create BI semantic models that can power highly interactive and rich analytical experiences in BI tools and custom applications.

Why Azure Analysis Services?

The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required – finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics – before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

Integrated with the Azure data platform

Azure Analysis Services is the latest addition to the Azure data platform. It integrates with many Azure data services enabling customers to build sophisticated analytics solutions.

Azure Analysis Services can consume data from Azure SQL Database and Azure SQL Data Warehouse. Customers can build enterprise data warehouse solutions in Azure using a hub-and-spoke model, with the SQL data warehouse at the center and multiple BI models around it targeting different business groups or subject areas.
With more and more customers adopting Azure Data Lake and HDInsight, Azure Analysis Services will soon offer the ability to build BI models on top of these big data platforms, enabling a similar hub-and-spoke model as with Azure SQL Data Warehouse.
In addition to the above, Azure Analysis Services can also consume data from on-premises data stores such as SQL Server, Oracle, and Teradata. We are working on adding support for several more data sources, both cloud and on-premises.
Azure Data Factory is a data integration service that orchestrates the movement and transformation of data, a core capability in any enterprise BI/analytics solution. Azure Analysis Services can be integrated into any Azure Data Factory pipeline by including an activity that loads data into the model. Azure Automation and Azure Functions can also be used for doing lightweight orchestration of models using custom code.
Power BI and Excel are industry-leading data exploration and visualization tools for business users. Both can connect to Azure Analysis Services models and offer a rich interactive experience. In addition, third-party BI tools such as Tableau are also supported.
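Under the hood, an orchestrated refresh step like the Data Factory activity or Automation runbook mentioned above boils down to submitting a TMSL (Tabular Model Scripting Language) refresh script to the server. The following is a rough sketch, in Python for brevity, of building that script; the database name is a placeholder, and how you submit the script (a pipeline activity, a runbook, or Functions code) is up to you:

```python
import json

def tmsl_refresh_command(database, refresh_type="full"):
    """Build a TMSL 'refresh' script that an orchestration step
    could submit to the Analysis Services server to reload data
    for the given model database."""
    return json.dumps({
        "refresh": {
            "type": refresh_type,
            "objects": [{"database": database}]
        }
    }, indent=2)

# "SalesModel" is a hypothetical model database name.
print(tmsl_refresh_command("SalesModel"))
```

A `full` refresh reloads all data; TMSL also supports lighter refresh types (such as `calculate` or `dataOnly`) when a complete reload is unnecessary.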

How are customers using Azure Analysis Services?

Since we launched the public preview of Azure Analysis Services last October, thousands of developers have been using it to build BI solutions. We want to thank all our preview customers for trying out the product and giving us valuable feedback. Based on this feedback, we have made several quality, reliability, and performance improvements to the service. In addition, we introduced Scale Up & Down and Backup & Restore to allow customers to better manage their BI solutions. We also introduced the B1, B2, and S0 tiers to offer customers more pricing flexibility.

Following are some customers and partners that have built compelling BI solutions using Azure Analysis Services.

Milliman is one of the world's largest providers of actuarial and related products and services. They built a revolutionary, industry-first financial modeling product called Integrate, using Azure to run highly complex and mission-critical computing tasks in the cloud.

“Once the complex data movement and transformation processing is complete, the resulting data is used to populate a BI semantic model within Azure Analysis Services, that is easy to use and understand. Power BI allows users to quickly create and share data through interactive dashboards and reports, providing a rich immersive experience for users to visualize and analyze data in one place, simply and intuitively. The combination of Power BI and Azure Analysis Services enables users of varying skills and backgrounds to be able to deliver to the ever-growing BI demands needed to run their business and collaborate on mission critical information on any device.”

Paul Maher, Principal and CTO, Milliman Life Technology Solutions

“Another great use case for Azure Analysis Services is leveraging its powerful modeling capabilities to bring together numerous disparate corporate data sources. An initiative at Milliman is currently in design leveraging various Finance data sets in order to create a broader scope and more granular access to critical business information. Providing a cohesive and simple-to-access data source for all levels of users gives the business leaders a new tool – whether they use Excel or Power BI for their business analytics.”

Andreas Braendle, CIO, Milliman

Contidis is a company in Angola that is building the new Candando supermarket chain. They created a comprehensive BI solution using Power BI and Azure Analysis Services to help their employees deliver better customer service, uncover fraud, spot inventory errors, and analyze the effectiveness of store promotions.

“Since we implemented our Power BI solution with Azure Analysis Services and Azure SQL Data Warehouse, we’ve realized a big improvement in business insight and efficiency. Our continued growth is due to many factors, and Power BI with Azure Analysis Services is one of them.”

Renato Correia, Head of IT and Innovation, Contidis

DevScope is a Microsoft worldwide partner who is helping customers build solutions using Azure Analysis Services.

“One of the great advantages of using Azure Analysis Services and Power BI is that it gives us the flexibility to start small and scale up only as fast as we need to, paying only for the services we use. We also have a very dynamic security model with Azure Analysis Services and Azure Active Directory and, in addition to providing row-level security, we use Analysis Services to monitor report usage and send automated alerts if someone accesses a report or data record that they shouldn’t.”

Rui Romano, BI Team Manager, DevScope

Azure Analysis Services is now generally available in 14 regions across the globe: Southeast Asia, Australia Southeast, Brazil South, Canada Central, North Europe, West Europe, West India, Japan East, UK South, East US 2, North Central US, South Central US, West US, and West Central US. We will continue to add regions based on customer demand, including government and national clouds.

Please use the following resources to learn more about Azure Analysis Services, get your questions answered, and give us feedback and suggestions about the product.

Overview
Documentation
Pricing
MSDN forum
Ideas & suggestions

Join us at the Data Insights Summit (June 12-13, 2017) or at one of the user group meetings where you can hear directly from our engineers and product managers.
Source: Azure

Microsoft Cognitive Services – General availability for Face API, Computer Vision API and Content Moderator

This post was authored by the Cognitive Services Team​.

Microsoft Cognitive Services enables developers to create the next generation of applications that can see, hear, speak, understand, and interpret users' needs using natural methods of communication. We have made it easier to add these intelligent features to your applications.

Today, at the first ever Microsoft Data Amp online event, we’re excited to announce the general availability of Face API, Computer Vision API and Content Moderator API from Microsoft Cognitive Services.

Face API detects human faces and compares similar ones, organizes people into groups according to visual similarity, and identifies previously tagged people and their emotions in images.
Computer Vision API gives you the tools to understand the contents of any image. It creates tags that identify objects, beings such as celebrities, or actions in an image, and crafts coherent sentences to describe it. You can now detect landmarks and handwriting in images. Handwriting detection remains in preview.
Content Moderator provides machine assisted moderation of text and images, augmented with human review tools. Video moderation is available in preview as part of Azure Media Services.

Let’s take a closer look at what these APIs can do for you.

Bring vision to your app

Previously, users of Face API could obtain attributes such as age, gender, facial landmark points, and head pose. Now, it's also possible to obtain emotions in the same Face API call. This addresses user scenarios in which both age and emotions are requested simultaneously. Learn more about Face API in our guides.
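As an illustrative sketch (in Python for brevity; the region and the attribute list are placeholders you would adjust), requesting age, gender, and emotion in one detect call is a matter of listing them in the `returnFaceAttributes` query parameter:

```python
import urllib.parse

# "westus" is an example region; use the region of your subscription.
FACE_DETECT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def build_detect_url(attributes=("age", "gender", "emotion")):
    """Build a Face API detect URL that returns several face
    attributes, including emotion, in a single call."""
    query = urllib.parse.urlencode(
        {"returnFaceAttributes": ",".join(attributes)})
    return FACE_DETECT + "?" + query

print(build_detect_url())
```

You would then POST the image bytes to this URL with your `Ocp-Apim-Subscription-Key` header, exactly as in the C# samples later in this post.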

Recognizing landmarks

We’ve added more richness to Computer Vision API by integrating landmark recognition. Landmark models, as well as Celebrity Recognition, are examples of Domain Specific Models. Our landmark recognition model recognizes 9,000 natural and man-made landmarks from around the world. Domain Specific Models is a continuously evolving feature within Computer Vision API.

Let’s say I want my app to recognize this picture I took while traveling:

You might have an idea of where this picture was taken, but how could a machine easily know?

In C#, we can leverage these capabilities by making a simple REST API call, as shown below. (Samples in other languages are linked at the bottom of this post.)

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

namespace CSHttpClientSample
{
    static class Program
    {
        static void Main()
        {
            Console.Write("Enter image file path: ");
            string imageFilePath = Console.ReadLine();

            MakeAnalysisRequest(imageFilePath);

            Console.WriteLine("\n\nHit ENTER to exit…\n");
            Console.ReadLine();
        }

        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }

        static async void MakeAnalysisRequest(string imageFilePath)
        {
            var client = new HttpClient();

            // Request headers. Replace the second parameter with a valid subscription key.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "putyourkeyhere");

            // Request parameters. Change "landmarks" to "celebrities" in both
            // requestParameters and uri to use the Celebrities model instead.
            string requestParameters = "model=landmarks";
            string uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/models/landmarks/analyze?" + requestParameters;
            Console.WriteLine(uri);

            HttpResponseMessage response;

            // Request body. Try this sample with a locally stored JPEG image.
            byte[] byteData = GetImageAsByteArray(imageFilePath);

            using (var content = new ByteArrayContent(byteData))
            {
                // This example uses content type "application/octet-stream".
                // The other content types you can use are "application/json" and "multipart/form-data".
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(uri, content);
                string contentString = await response.Content.ReadAsStringAsync();
                Console.WriteLine("Response:\n");
                Console.WriteLine(contentString);
            }
        }
    }
}

The successful response, returned in JSON, would be the following:

```json
{
  "requestId": "b15f13a4-77d9-4fab-a701-7ad65bcdcaed",
  "metadata": {
    "width": 1024,
    "height": 680,
    "format": "Jpeg"
  },
  "result": {
    "landmarks": [
      {
        "name": "Colosseum",
        "confidence": 0.9448209
      }
    ]
  }
}
```
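Pulling the recognized landmark out of that response is straightforward. A minimal Python sketch, assuming the JSON shape shown above:

```python
import json

def top_landmark(response_body):
    """Return the (name, confidence) pair of the highest-confidence
    landmark in a Domain Specific Models response, or None if the
    service recognized nothing."""
    landmarks = json.loads(response_body)["result"]["landmarks"]
    if not landmarks:
        return None
    best = max(landmarks, key=lambda lm: lm["confidence"])
    return best["name"], best["confidence"]
```

For the sample response above, this returns `("Colosseum", 0.9448209)`.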

Recognizing handwriting

Handwriting OCR is also available in preview in Computer Vision API. This feature detects text in a handwritten image and extracts the recognized characters into a machine-usable character stream.
It detects and extracts handwritten text from notes, letters, essays, whiteboards, forms, and more, and it works with different surfaces and backgrounds such as white paper, sticky notes, and whiteboards. No need to transcribe those handwritten notes anymore; you can snap an image instead and use Handwriting OCR to digitize your notes, saving time and effort and cutting paper clutter. You can even run a quick search when you want to pull the notes up again.

You can try this out yourself by uploading your own sample to the interactive demonstration.

Let’s say that I want to recognize the handwriting on this whiteboard:

An inspirational quote I’d like to keep.

In C#, I would use the following:

using System;
using System.IO;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;

namespace CSHttpClientSample
{
    static class Program
    {
        static void Main()
        {
            Console.Write("Enter image file path: ");
            string imageFilePath = Console.ReadLine();

            ReadHandwrittenText(imageFilePath);

            Console.WriteLine("\n\n\nHit ENTER to exit…");
            Console.ReadLine();
        }

        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            using (BinaryReader binaryReader = new BinaryReader(fileStream))
            {
                return binaryReader.ReadBytes((int)fileStream.Length);
            }
        }

        static async void ReadHandwrittenText(string imageFilePath)
        {
            var client = new HttpClient();

            // Request headers – replace this example key with your valid subscription key.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "putyourkeyhere");

            // Request parameters and URI. Set "handwriting" to false for printed text.
            string requestParameter = "handwriting=true";
            string uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText?" + requestParameter;

            HttpResponseMessage response = null;
            IEnumerable<string> responseValues = null;
            string operationLocation = null;

            // Request body. Try this sample with a locally stored JPEG image.
            byte[] byteData = GetImageAsByteArray(imageFilePath);
            var content = new ByteArrayContent(byteData);

            // This example uses content type "application/octet-stream".
            // You can also use "application/json" and specify an image URL.
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            try
            {
                response = await client.PostAsync(uri, content);
                responseValues = response.Headers.GetValues("Operation-Location");
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
                return; // No operation location to poll if the POST failed.
            }

            foreach (var value in responseValues)
            {
                // This value is the URI where you can get the text recognition operation result.
                operationLocation = value;
                Console.WriteLine(operationLocation);
                break;
            }

            try
            {
                // Note: The result may not be immediately available. Handwriting recognition is an
                // asynchronous operation that can take a variable amount of time depending on the
                // length of the text you want to recognize. You may need to wait or retry this GET.
                response = await client.GetAsync(operationLocation);

                // And now you can see the response in JSON:
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }
        }
    }
}

Upon success, the OCR results include the recognized text and bounding boxes for regions, lines, and words, as in the following JSON:

```json
{
  "status": "Succeeded",
  "recognitionResult": {
    "lines": [
      {
        "boundingBox": [542, 724, 1404, 722, 1406, 819, 544, 820],
        "text": "You must be the change",
        "words": [
          { "boundingBox": [535, 725, 678, 721, 698, 841, 555, 845], "text": "You" },
          { "boundingBox": [713, 720, 886, 715, 906, 835, 734, 840], "text": "must" },
          { "boundingBox": [891, 715, 982, 713, 1002, 833, 911, 835], "text": "be" },
          { "boundingBox": [1002, 712, 1129, 708, 1149, 829, 1022, 832], "text": "the" },
          { "boundingBox": [1159, 708, 1427, 700, 1448, 820, 1179, 828], "text": "change" }
        ]
      },
      {
        "boundingBox": [667, 905, 1766, 868, 1771, 976, 672, 1015],
        "text": "you want to see in the world !",
        "words": [
          { "boundingBox": [665, 901, 758, 899, 768, 1015, 675, 1017], "text": "you" },
          { "boundingBox": [752, 900, 941, 896, 951, 1012, 762, 1015], "text": "want" },
          { "boundingBox": [960, 896, 1058, 895, 1068, 1010, 970, 1012], "text": "to" },
          { "boundingBox": [1077, 894, 1227, 892, 1237, 1007, 1087, 1010], "text": "see" },
          { "boundingBox": [1253, 891, 1338, 890, 1348, 1006, 1263, 1007], "text": "in" },
          { "boundingBox": [1344, 890, 1488, 887, 1498, 1003, 1354, 1005], "text": "the" },
          { "boundingBox": [1494, 887, 1755, 883, 1765, 999, 1504, 1003], "text": "world" },
          { "boundingBox": [1735, 883, 1813, 882, 1823, 998, 1745, 999], "text": "!" }
        ]
      }
    ]
  }
}
```
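Once the operation reports `Succeeded`, flattening that result into plain text takes only a few lines. For example, in Python, assuming the response shape shown above:

```python
import json

def extract_text(result_body):
    """Join the recognized lines of a recognizeText operation
    result into a single plain-text string, or return None if
    the operation has not succeeded yet."""
    result = json.loads(result_body)
    if result.get("status") != "Succeeded":
        return None  # still running, or failed
    lines = result["recognitionResult"]["lines"]
    return "\n".join(line["text"] for line in lines)
```

For the whiteboard above, this yields the two recognized lines joined by a newline.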

To easily get started in your preferred language, please refer to the following:

The Face API page and quick-start guides for C#, Java, Python, and many more.
The Computer Vision API page and quick-start guides on C#, Java, Python, and more.
The Content Moderator page, where you can test drive Content Moderator and learn how we enable a complete, configurable content moderation lifecycle.

For more information about our use cases, don’t hesitate to take a look at our customer stories, including a great use of our Vision APIs with GrayMeta.

Happy coding!
Source: Azure

The Agility and Flexibility of Docker including Oracle Database and Development Tools

A company’s important applications are often subjected to random and capricious changes due to forces well beyond the control of IT or management. Events like a corporate merger or even a top programmer on an extended vacation can have an adverse impact on the performance and reliability of critical company infrastructure.
During the second-day keynote at DockerCon 2017 in Austin, TX, Lily Guo and Vivek Saraswat showed a simulation of how to use Docker Enterprise Edition and its application transformation tools to respond to random events that threaten to undermine the stability of a critical company service.
The demo begins as two developers return to work after an extended vacation. They discover that, during their absence, their CEO unexpectedly hired an outside contract programmer to rapidly code and introduce an entire application service that they know nothing about. As they try to build the new service, however, Docker Security Scan detects that the contractor incorporated a deprecated library. This library has a security vulnerability that violates the company’s best-practice standards. As part of Docker Enterprise Edition Advanced, Docker Security Scan automatically keeps track of code contributions and acts as a gatekeeper to flag issues and protect company standards. In this case, the developers find a newer version of the library and build the service successfully.
The next step is to deploy the service. Docker Compose is the way to describe the application's dependencies and secrets access. It is tempting to simply insert the passwords into the Compose file as plain text. However, the better choice is to let Docker Secrets manage sensitive application configuration data and take advantage of Docker EE's ability to manage and enforce RBAC (Role-Based Access Control).
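A minimal Compose sketch of that pattern follows; the service and secret names here are hypothetical, and the secret itself is created out-of-band with `docker secret create` rather than stored in the file:

```yaml
version: "3.1"
services:
  web:
    image: example/web-app        # hypothetical application image
    secrets:
      - db_password               # mounted in the container at /run/secrets/db_password
secrets:
  db_password:
    external: true                # created separately, e.g. docker secret create db_password -
```

The password never appears in the Compose file or in source control; Docker delivers it to the container at runtime, and Docker EE's RBAC governs who may manage the secret.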
It is interesting that the service consists of a Microsoft SQL Server database container interacting with other containers that are running Linux. Docker Enterprise Edition features the ability to run a cluster of microservices in a hybrid Windows and Linux environment. “It just works.”
All of the problems from the beginning of the demo now seem to be resolved, but the CEO rushes in to announce that they have just purchased a company that uses a traditional on-premises application. The merger press announcement will be tomorrow, and there is concern about the scope and cost of updating and moving the application to a modern infrastructure. However, they know they can use the Docker transformation tool, image2docker, to do the hard work of converting the traditional application into modern Docker Enterprise Edition containers, which can be deployed on any infrastructure, including the cloud.
One final step is needed to complete the move from the traditional architecture. As the traditional application relies on the popular and powerful Oracle Database, that database will need to be acquired and adapted. Time to go out to the Docker Store. Lily finds the Oracle DB on Docker Store and integrates it directly into the transformed application, and “it just works.”
The Docker Store is the place where developers can find trusted and scanned commercial content with collaborative support from Docker and the application container image provider. Oracle today announced that its flagship databases and developer tools will be immediately available as Docker containers through the Docker Store marketplace. The first set of certified images includes Oracle Database, Oracle MySQL, Oracle WebLogic Server, Oracle Coherence, Oracle Instant Client, and Oracle Java 8 SE (Server JRE).
The demo ends, having shown how developers can use Docker Enterprise Edition to quickly resolve library compatibility issues and how easy it is to take traditional applications and accomplish the first steps toward adapting them to a modern container infrastructure.

The post The Agility and Flexibility of Docker including Oracle Database and Development Tools appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Guest post: Supercharging container pipelines with Codefresh and Google Cloud

By Raziel Tabib, CEO and Co-founder, Codefresh

[Editor’s note: Today we hear from Codefresh, which makes a Docker-native continuous integration/continuous delivery (CI/CD) platform. Read on to learn how Codefresh’s recent integrations with Kubernetes and Google Container Registry will make it easier for you to build, test and deploy your cloud-native applications to Google Cloud, including Container Engine and Kubernetes.]

Traditional pipelines weren’t designed with containers and cloud services in mind. At Codefresh, we’ve built our platform specifically around Docker and cloud services to simplify the entire pipeline and make it easier to build, test and deploy web apps. We recently partnered with Google Cloud to add two key features into our platform: an embedded registry (powered by Google’s own Container Registry) and one-click deploy to Google Container Engine.

Advantages of an embedded registry
Codefresh’s embedded registry doesn’t replace production registries but rather provides a developer-focused registry for testing and development. The production registry becomes a single source of truth for production grade images, while Codefresh’s embedded registry maintains the images needed for development.

This approach has a couple of other big advantages:

Image quality control is higher since it’s built right into the test flow
Build-assist images (for example, those used with Java and other compiled languages) stay nicely organized in the dev space
Codefresh extends the images with valuable metadata (e.g., test results, commit info, build SHA, logs, issue id, etc.), creating a sandbox-like registry for developers
Build speed is faster since the embedded registry is “closer” to the build machines

The embedded registry also allows developers to call images by tag and extended metadata from the build flow. For example, if you want to test a service based on how it works with different versions of another service, you can reference images based on their git commit ID (build SHA).

To try out the embedded registry, you’ll need to join the beta.

One-click deploy to Kubernetes
We manage the Codefresh production environment with Kubernetes running on Container Engine. Because we use Codefresh to build, test and deploy Codefresh itself, we wanted to make sure there was a simple way to deploy to Kubernetes. To do that, we’re adding Kubernetes deployment images to Codefresh, available both in the UI and Codefresh YAML. The deploy images contain a number of scripts that make pushing new images a simple matter of passing credentials. This makes it easy to automate the deployments, and when paired with branch permissions, makes it easy for anyone authorized to approve and push code to production.

To try this feature in Codefresh, just select the deploy script in the pipeline editor and add the needed build arguments. For more information, check out our documentation on deploying to Kubernetes.

Or add this code to your codefresh.yml:

deploy-to-kubernetes-staging:
  image: codefreshio/kubernetes-deployer:master
  tag: latest
  working-directory: ${{initial-clone}}
  commands:
    - /deploy/bin/deploy.sh ./root
  environment:
    - ENVIRONMENT=${{ENVIRONMENT}}
    - KUBERNETES_USER=${{KUBERNETES_USER}}
    - KUBERNETES_PASSWORD=${{KUBERNETES_PASSWORD}}
    - KUBERNETES_SERVER=${{KUBERNETES_SERVER}}
    - DOCKER_IMAGE_TAG=${{CF_REVISION}}

Migrating to Google Cloud’s Container Engine
For those migrating to Container Engine or another Kubernetes environment, the Codefresh deploy images simplify everything. Pushing to Kubernetes is cloud-agnostic: just point it at your Kubernetes deployment, and you're good to go.

About Codefresh, CI/CD for Docker
Codefresh is CI/CD for Docker, used by open-source projects and businesses. We automatically deploy and scale build and test infrastructure for each Docker image. We also deploy shareable environments for every code branch. Check it out at https://codefresh.io/ and join the embedded registry beta.
Source: Google Cloud Platform