Modern Data Warehousing with Continuous Integration

This post was co-authored by Mohit Chand, Principal SWE Lead, Microsoft IT.

We are going a step beyond the traditional methods of developing a data warehouse by adopting continuous integration (CI) practices, which are more prevalent for API (.NET) based applications. Data warehouse teams have long needed to catch up with modern software engineering practices. With the emergence of Visual Studio Team Services (VSTS) and SQL Server Data Tools (SSDT), spinning up environments on the fly and deploying code across environments with maximum automation has become easy. We adopted these modern practices to boost engineering productivity in our Business Insights (DW) project. With the help of SSDT and VSTS, we were able to align DW deployment with our Agile releases (two-week sprints). In this blog I will lay out a detailed approach to implementing CI for your data warehouse. I will explain the life cycle of a business user story: code branching, pull-request-triggered builds, Azure resource and environment provisioning, schema deployment, seed data generation, daily integration releases with automated tests, and approval-based workflows to promote new code to higher environments.

DevOps

Why DevOps? In the traditional development and operations model, there is always a possibility of confusion and debate when the software doesn't function as expected. Dev claims the software works just fine in their environment and calls it an Ops problem. Ops counters that Dev didn't deliver production-ready software, so it's a Dev problem. How do we solve this? Wouldn't it be better for a single team to take care of development, testing, and operations? We work closely with business and other stakeholders to efficiently deliver better and faster results to customers. DevOps has enabled us to deliver faster, with a better connection to customers, while simultaneously reducing our technical debt and risks.
DW Architecture

This data warehouse uses Azure technologies. Data arrives at the landing zone or staging area from different sources through Azure Data Factory. We use Azure Data Factory (ADF) jobs to massage and transform data into the warehouse. Once ready, the data is available to customers in the form of dimension and fact tables.

Tools/Technologies

This modern data warehouse primarily uses Microsoft technologies to deliver the solution to customers:

SQL Azure (PaaS)
Azure Data Factory
Azure Blob Storage
SQL MDS 2016
Visual Studio Team Services (VSTS)
Agile and Kanban board
Code branching (Git)
Gated check-ins
Automated tests

Build Release Plan

In Agile Scrum, the user story is the unit of implementation. Engineers pick up and deliver user stories in any given sprint.

Code Branching Strategy

With Agile, code branching plays a critical role. There are various ways to do it, including sprint branching, feature branching, and story/bug branching. In our case we adopted user-story-level branching. A contributor creates a branch from the "develop" branch for each story he or she picks up. It is the contributor's responsibility to maintain this isolated branch and merge it back by creating a pull request against the develop branch once the story is complete or ready for code review. A contributor is not allowed to merge code into the mainstream branch directly; a minimum of two code review approvals is required. Story-based branching enables developers to merge code frequently with the mainstream and avoid working for a long time on the same branch, which significantly reduces code integration issues. Another benefit is that developers can work more efficiently by having other developers' code available more frequently; code dependency wait time is reduced, hence fewer blockers for developers.
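The story-level branching flow described above can be sketched with plain git commands (contributor and story names are illustrative; in our team the branch is actually created and published through VSTS, and the repository setup below exists only to make the sketch self-contained):

```shell
# Sketch of story-level branching; names and repo setup are illustrative.
set -e
remote=$(mktemp -d)/vsts-remote.git   # stand-in for the repo hosted in VSTS
work=$(mktemp -d)
git init -q --bare "$remote"
git clone -q "$remote" "$work" 2>/dev/null
cd "$work"
git config user.email dev@example.com
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD:develop            # the shared "develop" branch
git fetch -q origin
# One branch per story, named <contributor name>_<Story or Bug>
git checkout -q -b mohit_Story1234 origin/develop
# Publish the branch so it is visible to the rest of the team
git push -q -u origin mohit_Story1234
```

The `-u` flag sets the upstream, so subsequent pushes of review fixes go to the published story branch until the pull request is merged.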
Using VSTS, the contributor creates a new branch. The nomenclature followed is <contributor name>_<Story or Bug>. After creating the branch, the contributor publishes it to make it visible to everyone else on the team. Once the branch is set up, the contributor is all set to start writing code to implement the functionality.

Code Review and Code Merge

Once the code is complete, the developer checks in the code and creates a pull request using the VSTS portal. To ensure a higher level of code quality, it's imperative to have a gated check-in process in place. Each developer has to ensure the build is not broken when they check in code. The code needs to be reviewed and approved by at least two peers before it is merged with the mainstream; without two code review approvals, the merge is not possible. The pull request is submitted by the developer with appropriate comments and linked work items.

Build

As soon as the pull request is created by the contributor, the CI build fires automatically. An email notification is sent to all the reviewers, informing them about the new pull request. The reviewers can now start their code reviews. Depending on the quality of the code, a reviewer approves it, raises questions, or rejects it. Once all reviewers are done with their code reviews, the lead developer merges the code with the mainstream.

Test (Automation)

Ensuring high code quality across various environments can be challenging in a DW project because data might vary from environment to environment. We ensure that every new piece of code we write has automated test cases before creating a pull request. This not only prevents bug leaks to production, but also ensures a higher quality of deliverables. The diagram below depicts the overall test case execution results.
As part of the deployment, we execute all of our test cases to ensure the integrity and quality of the end product.

Release & Deploy

Once the code is successfully merged with the mainstream code, a new build fires automatically. The integration environment gets deployed once a day with the latest code. The diagram below depicts the three environments we manage for the data warehouse: "Integration", "End User", and "Production". The integration environment is a continuous integration and deployment environment, which is provisioned and de-provisioned dynamically and managed as infrastructure as code. A scheduled process runs the following steps in sequence to integrate the check-ins happening daily:

Build the new bits by getting the latest from the develop branch (includes integrated code scanning)
Create a new Azure resource group and procure the SQL instance
Copy the "seed data" to the newly created SQL instance
Execute any schema renaming
Deploy the DACPAC to apply new schema changes
Scale up the databases to execute the steps faster
Copy code bits to the build server
Deploy additional SQL entities
Run data sync jobs
Execute test assemblies
Deploy Azure Data Factories
Decommission the environment

Seed data to enable automated testing

Automated testing in a DW depends heavily on the availability of accurate data; without data, numerous scenarios cannot be tested. To solve this problem we use a production copy of data as "seed data" during deployment. The diagram below depicts how we populate the seed data in our Daily Integration Environment (DIT). In cases where the data is very large, a miniature DB containing a subset of production data can be used instead of copying the entire replica.

Step 1: Represents the production data with multiple schemas, which we use to segregate data in our DW environments (e.g., staging, transformed data).
Step 2: Represents the data getting copied to an Azure geo-replica (the disaster recovery copy).
Step 3: During release deployment, we copy the geo-replica to the newly procured DIT SQL Server instance.
Step 4: Represents the availability of a production-equivalent copy of data. In addition, the DACPAC deployment happens to add the newly added schema, and later our test automation suites run to test the quality of our end product.

Deploying the release to higher environments

Promoting the release from one environment to another is set up through an approval workflow; direct deployment is not allowed. In this scenario it is not possible to deploy directly to production without approval from pre-assigned stakeholders. The approval workflow depicts the environment promotion. Once the required approvers approve, the release gets promoted to the next environment automatically.

Monitor

Following DevOps, the same team monitors the pre-production and production environments for any failures. We adopted the DRI (directly responsible individual) model: the DRI proactively monitors the system's health and all notifications. Issues, if any, are fixed on priority to ensure continued availability of the application. We use out-of-the-box ADF monitoring and notifications along with a couple of custom monitoring tools. We also have multiple data quality checks implemented as automated reports that run daily in the production environment and report data anomalies, which can either be fixed as bugs in our processes or be traced back to the source systems quickly to be fixed there.

It's true that setting up CI for a data warehouse isn't simple; however, it's worth every penny. We did face test case failures when adding new code during sprints, but the team has learned from those instances, and now we make sure to update the existing test cases whenever new code is added. We are continuously adding functional, build, and environment verification test cases to constantly increase the quality of the product.
CI has enabled us to be truly agile and confident in our end product. We are able to prevent common bug leaks to production by having an automated test suite. We have been able to eliminate the need for a separate test environment and are aiming to deploy directly to production in the coming quarters. We strongly believe it's possible!
Source: Azure

Event Hubs Dedicated Offering

The Event Hubs team is introducing a new offering for dedicated single-tenant deployments for our most demanding customers. This same offering powers Halo 5 on Xbox One, Skype for Business, and Microsoft Office client application telemetry pipelines. At full scale Azure Event Hubs can ingress over two million events per second or up to 2 GB per second of telemetry with fully durable storage and sub-second latency.

Benefits

The core of this offering is the same engine that powers the Event Hubs Standard tier, but it is provided as a single-tenant, dedicated cluster with the following benefits:

Single tenant hosting with no noise from other tenants – your resources are “isolated”
Message size increases to 1MB as compared to 256KB for Standard and Basic plans
Scalable between 1 and 8 capacity units – providing up to 2 million ingress events per second
Fixed monthly price includes costs for ingress events, throughput units, and Archive
Guaranteed capacity to meet your burst needs
Repeatable performance every time
Zero maintenance

Event Hubs Dedicated Capacity will meet your highest-scale telemetry and streaming demands. It offers all the features of the Event Hubs Standard plan in a single-tenant runtime, so your streams will never be affected by bursts in traffic volume, and it provides tried-and-true performance.

Unlike other tiers of Event Hubs, Dedicated Capacity is an all-inclusive fixed monthly price where features, such as extended retention and archive, are provided for no additional fee. Overall, you will find more flexibility around the limits you would see in the other Event Hubs plans. Maximum message size is increased to 1MB and restrictions on the number of brokered connections you can have are significantly eased. Whether your preference is to send many small messages or fewer large messages, both benefit from the flexibility of dedicated capacity.

Find Out More

This platform is now offered to the public through an Enterprise Agreement in varying size configurations as Capacity Units (CU). Each capacity unit provides approximately 200 Throughput Units of capacity. You can scale your dedicated capacity up or down throughout the month to meet your needs by adding or removing capacity units.

Event Hubs dedicated capacity is like having your own Azure region for Event Hubs. It is a fully managed Platform as a Service, where all maintenance such as OS and software patching is taken care of for you by the Event Hubs team.

For estimated pricing, please contact your Microsoft sales representative or Microsoft Support for additional details about Event Hubs Dedicated Capacity. You can also view the Event Hubs pricing table for a feature comparison with the Standard and Basic plans. Dedicated Capacity will handle the streaming data you have today and keep you ready for tomorrow.
Source: Azure

Manage App Service, SQL Database, and more – Azure Management Libraries for Java

One Java statement to create a Web App. One statement to create a SQL Server and another statement to create a SQL Database. One statement to create an Application Gateway, etc.

Beta 4 of the Azure Management Libraries for Java is now available. Beta 4 adds support for the following Azure services and features:

✓ App Service (Web Apps)

✓ SQL Database

✓ Application Gateway

✓ Traffic Manager

✓ DNS

✓ CDN

✓ Redis Cache


https://github.com/azure/azure-sdk-for-java

Add the following to your Maven POM file to use Beta 4:

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure</artifactId>
    <version>1.0.0-beta4.1</version>
</dependency>
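If your project uses Gradle rather than Maven, the same coordinates can be declared as follows (a sketch assuming a Gradle build; the artifact coordinates are the same either way):

```groovy
dependencies {
    compile 'com.microsoft.azure:azure:1.0.0-beta4.1'
}
```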

Last year, we announced previews of the new, simplified Azure management libraries for Java. Our goal is to improve the developer experience by providing a higher-level, object-oriented API, optimized for readability and writability. Thank you for trying the libraries and providing us with plenty of useful feedback.

Create a Web App

You can create a Web app instance by using a define() … create() method chain.

WebApp webApp = azure.webApps()
    .define(appName)
    .withNewResourceGroup(rgName)
    .withNewAppServicePlan(planName)
    .withRegion(Region.US_WEST)
    .withPricingTier(AppServicePricingTier.STANDARD_S1)
    .create();

Create a SQL Database

You can create a SQL server instance by using another define() … create() method chain.

SqlServer sqlServer = azure.sqlServers().define(sqlServerName)
    .withRegion(Region.US_EAST)
    .withNewResourceGroup(rgName)
    .withAdministratorLogin("adminlogin123")
    .withAdministratorPassword("myS3cureP@ssword")
    .withNewFirewallRule("10.0.0.1")
    .withNewFirewallRule("10.2.0.1", "10.2.0.10")
    .create();

Then, you can create a SQL database instance by using another define() … create() method chain.

SqlDatabase database = sqlServer.databases().define("myNewDatabase")
    .create();

Create an Application Gateway

You can create an application gateway instance by using another define() … create() method chain.

ApplicationGateway applicationGateway = azure.applicationGateways().define("myFirstAppGateway")
    .withRegion(Region.US_EAST)
    .withExistingResourceGroup(resourceGroup)
    // Request routing rule for HTTP from public 80 to public 8080
    .defineRequestRoutingRule("HTTP-80-to-8080")
        .fromPublicFrontend()
        .fromFrontendHttpPort(80)
        .toBackendHttpPort(8080)
        .toBackendIpAddress("11.1.1.1")
        .toBackendIpAddress("11.1.1.2")
        .toBackendIpAddress("11.1.1.3")
        .toBackendIpAddress("11.1.1.4")
        .attach()
    .withExistingPublicIpAddress(publicIpAddress)
    .create();

Sample code

You can find plenty of sample code that illustrates management scenarios in Azure Virtual Machines, Virtual Machine Scale Sets, Storage, Networking, Resource Manager, SQL Database, App Service (Web Apps), Key Vault, Redis, CDN and Batch.

Service
Management Scenario

Virtual Machines

Manage virtual machine
Manage availability set
List virtual machine images
Manage virtual machines using VM extensions
Create virtual machines from generalized image or specialized VHD
List virtual machine extension images

Virtual Machines – parallel execution

Create multiple virtual machines in parallel
Create multiple virtual machines with network in parallel
Create multiple virtual machines across regions in parallel

Virtual Machine Scale Sets

Manage virtual machine scale sets (behind an Internet facing load balancer)

Storage

Manage storage accounts

Networking

Manage virtual network
Manage network interface
Manage network security group
Manage IP address
Manage Internet facing load balancers
Manage internal load balancers

Networking – DNS

Host and manage domains

Traffic Manager

Manage traffic manager profiles

Application Gateway

Manage application gateways
Manage application gateways with backend pools

SQL Database

Manage SQL databases
Manage SQL databases in elastic pools
Manage firewalls for SQL databases
Manage SQL databases across regions

Redis Cache

Manage Redis Cache

App Service – Web Apps

Manage Web apps
Manage Web apps with custom domains
Configure deployment sources for Web apps
Manage staging and production slots for Web apps
Scale Web apps
Manage storage connections for Web apps
Manage data connections (such as SQL database and Redis cache) for Web apps

Resource Groups

Manage resource groups
Manage resources
Deploy resources with ARM templates
Deploy resources with ARM templates (with progress)

Key Vault

Manage key vaults

CDN

Manage CDNs

Batch

Manage batch accounts

Give it a try

You can run the samples above or go straight to our GitHub repo. Give it a try and let us know what you think (via e-mail or comments below), in particular:

Usability and effectiveness of the new management libraries for Java.
Which Azure services would you like to see supported next?
What additional scenarios should be illustrated as sample code?

Over the next few weeks, we will be adding support for more Azure services and applying finishing touches to the API.

You can find plenty of additional info about Java on Azure at http://azure.com/java.
Source: Azure

How to Build a Planet-Scale Mobile App in Minutes with Xamarin and DocumentDB

Most mobile apps need to store data in the cloud, and DocumentDB is an awesome cloud database for mobile apps. It has everything a mobile developer needs: a fully managed NoSQL database as a service that scales on demand and can bring your data wherever your users go around the globe, completely transparently to your application. Today we are excited to announce the Azure DocumentDB SDK for the Xamarin mobile platform, enabling mobile apps to interact directly with DocumentDB, without a middle tier.

Here is what mobile developers get out of the box with DocumentDB:

Rich queries over schemaless data. DocumentDB stores data as schemaless JSON documents in heterogeneous collections, and offers rich and fast queries without the need to worry about schema or indexes.
Fast. Guaranteed. It takes only a few milliseconds to read and write documents with DocumentDB. Developers can specify the throughput they need, and DocumentDB will honor it with a 99.99% SLA.
Limitless Scale. Your DocumentDB collections will grow as your app grows. You can start with a small data size and hundreds of requests per second, and grow to arbitrarily large scale: tens and hundreds of millions of requests per second of throughput, and petabytes of data.
Globally Distributed. Your mobile app users are on the go, often across the world. DocumentDB is a globally distributed database, and with just one click on a map it will bring the data wherever your users are.
Built-in rich authorization. With DocumentDB you can easily implement popular patterns like per-user data or multi-user shared data, without complex custom authorization code.
Geo-spatial queries. Many mobile apps offer geo-contextual experiences today. With first-class support for geo-spatial types, DocumentDB makes these experiences very easy to accomplish.
Binary attachments. Your app data often includes binary blobs. Native support for attachments makes it easier to use DocumentDB as one-stop shop for your app data.

Let's build an app together!

Step 1. Get Started

It's easy to get started with DocumentDB: just go to the Azure portal, create a new DocumentDB account, go to the Quickstart tab, and download a Xamarin Forms todo list sample, already connected to your DocumentDB account.

Or if you have an existing Xamarin app, you can just add this DocumentDB NuGet package. Today we support Xamarin.iOS and Xamarin.Android, as well as Xamarin Forms shared libraries.

Step 2. Work with data

Your data records are stored in DocumentDB as schemaless JSON documents in heterogeneous collections. You can store documents with different structures in the same collection.

In your Xamarin projects you can use language-integrated (LINQ) queries over schemaless data.

Step 3. Add Users

Like many get-started samples, the DocumentDB sample you downloaded above authenticates to the service using a master key hardcoded in the app's code. This is of course not a good idea for an app you intend to run anywhere except your local emulator. If an attacker gets hold of the master key, all the data across your DocumentDB account is compromised.

Instead, we want our app to have access only to the records of the logged-in user. DocumentDB allows developers to grant an application read or read/write access to all documents in a collection, a set of documents, or a specific document, depending on its needs.

Here is, for example, how to modify our todo list app into a multi-user todo list app; a complete version of the sample is available here:

Add Login to your app, using Facebook, Active Directory or any other provider.
Create a DocumentDB UserItems collection with /userId as the partition key. Specifying a partition key for your collection allows DocumentDB to scale infinitely as the number of app users grows, while offering fast queries.
Add a DocumentDB Resource Token Broker, a simple Web API that authenticates users and issues short-lived tokens to logged-in users with access only to the documents within each user's partition. In this example we host the Resource Token Broker in App Service.
Modify the app to authenticate to the Resource Token Broker with Facebook, request resource tokens for the logged-in Facebook user, and then access the user's data in the UserItems collection.

This diagram illustrates the solution. We are investigating eliminating the need for the Resource Token Broker by supporting OAuth in DocumentDB first class; please upvote this UserVoice item if you think it's a good idea!

Now, if we want two users to have access to the same todo list, we just add additional permissions to the access token in the Resource Token Broker. You can find the complete sample here.

Step 4. Scale on demand

DocumentDB is a managed database as a service. As your user base grows, you don't need to worry about provisioning VMs or increasing cores. All you need to tell DocumentDB is how many operations per second (throughput) your app needs. You can specify the throughput via the portal's Scale tab using a measure of throughput called Request Units per second (RUs). For example, a read operation on a 1KB document requires 1 RU. You can also add alerts on the "Throughput" metric to monitor traffic growth and programmatically change the throughput as alerts fire.


Step 5. Go Planet Scale!

As your app gains popularity, you may acquire users across the globe, or maybe you just don't want to be caught off guard if a meteorite strikes the Azure data centers where you created your DocumentDB collection. Go to the Azure portal, open your DocumentDB account, and with a click on a map, make your data continuously replicate to any number of regions across the world. This ensures your data is available wherever your users are, and you can add failover policies to be prepared for a rainy day.

We hope you find this blog and the samples useful for taking advantage of DocumentDB in your Xamarin application. A similar pattern can be used in Cordova apps using the DocumentDB JavaScript SDK, as well as in native iOS/Android apps using the DocumentDB REST APIs.

As always, let us know how we are doing and what improvements you'd like to see for DocumentDB through UserVoice, StackOverflow azure-documentdb, or Twitter @DocumentDB.
Source: Azure

December 2016 Leaderboard of Database Systems contributors on MSDN

We continue to receive encouraging comments from the community on the Leaderboard. Thank you.
Many congratulations to the top-10 contributors featured on our December leaderboard!

Olaf Helper and Alberto Morillo top the Overall and Cloud database lists this month. Six of this month's Overall Top-10 (including all of the top three) featured in last month's Overall Top-10 as well, and four others are new entrants.

The following continues to be the points hierarchy (in decreasing order of points):

For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com.
Source: Azure

Automated Partition Management with Azure Analysis Services

Azure Analysis Services tabular models can store data in a highly-compressed, in-memory cache for optimized query performance. This provides fast user interactivity over large data sets.

Large datasets normally require table partitioning to accelerate and optimize the data-load process. Partitioning enables incremental loads, increases parallelization, and reduces memory consumption. The Tabular Object Model (TOM) serves as an API to create and manage partitions.

The Automated Partition Management for Analysis Services Tabular Models whitepaper is available for review. It describes how to use the AsPartitionProcessing TOM code sample with minimal code changes. It is intended to be generic and configuration driven.

The sample is compatible with Azure Analysis Services. The following diagram shows an example architecture.

Azure SQL Database is used for the configuration and logging database.

Azure Functions is used for execution, and can be triggered in various ways. The following list shows some of the many options available with Azure Functions.

Scheduled using a Timer function CRON expression. In this case, it is not necessary to set up a separate scheduling system.
Using a webhook request for a WebHook function, or an HTTP request for an HttpTrigger function. This allows integration with existing scheduling systems that can call a URL.
Triggered from Azure Queue using built-in integration points in Azure Functions.
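As an illustration of the first option, a timer-triggered function carries its CRON expression in the function.json binding. The sketch below is not from the whitepaper: the binding name and the nightly 2 AM schedule are assumptions, shown in the six-field format Azure Functions timer triggers expect ({second} {minute} {hour} {day} {month} {day-of-week}):

```json
{
  "bindings": [
    {
      "name": "partitionTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 2 * * *"
    }
  ]
}
```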

Thanks to Marco Russo (SQLBI) and Bill Anton (Opifex Solutions) for their contributions to the whitepaper and code sample.
Source: Azure

Azure SQL Data Warehouse: Secondary indexes on column store now available

In Azure SQL Data Warehouse, you can now create secondary B-Tree indexes on column store tables.

Most analytic queries aggregate large amounts of data and are served well by scanning the column store segments directly. However, there is often a need to look for a "needle in a haystack," which translates to a query that looks up a single row or a small range of rows. Such lookup queries can see response-time improvements of orders of magnitude (even 1,000 times) and potentially run in under a second if there is a B-Tree index on the filter column.

To create a secondary index on a column store table, follow the same syntax as the generic Create Index Transact-SQL statements.
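As a sketch, a lookup-oriented secondary index on a clustered column store table might be created like this (the table and column names are illustrative, not from the announcement):

```sql
-- Nonclustered B-Tree index on the lookup/filter column of a
-- clustered column store fact table (names are illustrative)
CREATE INDEX ix_FactSales_OrderId
ON dbo.FactSales (OrderId);
```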
Source: Azure

Can blockchain secure supply chains, improve operations and solve humanitarian issues?

In my last post, I posed the question: What does identity mean in today's physical and digital world? I was coming off the ID2020 Summit at the UN, where we announced our work on blockchain self-sovereign identity. What struck me most was the magnitude of the problems identified, and I haven't been able to stop thinking about solving these already-at-scale problems ever since. One of the things Microsoft thinks about when it looks at products is solving problems at scale; it is, and has always been, a mass-market product juggernaut. Yesterday we strove for a vision of "a PC on every desk," and today we look to Azure, our hyperscale cloud, to solve the world's productivity problems at scale through massive compute, memory, and storage workloads operating at a nice little hum in datacenters that span the globe. But I digress.

The important question is: How do we take this DNA and think about societal problems that already exist at scale? We have proven that most technology in one form or another can perform when architected for scale. But where and how do we start and then how do we penetrate with swarms of adoption to make meaningful impact on society’s greatest problems?

I started to think about where the problem space crosses the corporate and enterprise landscape. What if we can link corporate objectives directly to the problem of child labor? What if we can find businesses that might benefit from the exploitation witnessed and eliminate them? Alas, these approaches are too direct and won’t scale as first steps on this journey.

Another approach would be to look at ways we can back into solving the problems. So I began to think more indirectly about the attack surfaces that corporations operate on where there might be child labor, trafficking or other infractions. If we could identify large attack surfaces in a specific industry that might be a good starting point. This led me squarely to an industry I know very well that has been struggling to evolve ever since Amazon entered their playground: Retailers, Brands, and Ecommerce sites. The landscape gives us an unprecedented opportunity to maximize coverage via a corporate attack surface trifecta:

1. Retailers: think Macy’s, Nordstrom, Best Buy

2. Brands: think Perry Ellis, Nike, Under Armour

3. Ecommerce: Amazon, Tmall, and all of the retail.com variants like Macys.com

The one thing they all have in common is Supply Chain.

Their supply chain ends in the developed world with retail stores to shippers, truck drivers, dock and port workers and their employers. It extends back to overseas warehouses, distribution centers and all of their laborers, whether contracted, full time or part time or temporary workforce. The origin of the supply chain extends all the way back to the factory and local shippers handling the goods.

Take the case of a company like Perry Ellis that is tagging each item of clothing at the manufacturing factory, called source tagging. When a worker hangs a tag on a piece of clothing or sews an RFID tag into a pair of jeans the life of the tracked good begins. Retailers and brands have been evaluating and deploying RFID and other sensor and tracking technology in the Supply Chain and Retail stores for many years. In 2004 I built an RFID Practice at IBM Global Services. That year Walmart demanded all of its suppliers tag their products with RFID.

So, like other blockchain projects, this begs the question, why now? What is so special about this technology that lets us simultaneously add value to Macys.com and Perry Ellis while starting to chip away at one part of the un-identified, trafficked or exploited world population?

I believe the answer lies in a very subtle tweak to the existing tracking systems that are being deployed across the entire retail and ecommerce landscape. What if we could use something like RFID chips and scanners to securely and provably identify every touchpoint of every piece of product all along the supply chain? Why does that add value if we are sort of doing that already?

A little history. The reason tracking tags like RFID are being deployed by brands and retailers is to be able to effectively compete with Amazon.com. What is the single biggest competitive threat a retailer has against Amazon? The answer is a seemingly simple feature on a website:

“Buy Online, Pick up In store”

Underneath what looks like a simple ecommerce site feature lies a very big problem: Inventory Transparency. Retailers cannot effectively invoke this competitive advantage without Inventory Transparency. Retailers cannot achieve Inventory Transparency without RFID or other tracking tags.

RFID systems today simply allow you to track inventory at each waypoint in the supply chain and all the way through the retail store stockroom, floor and checkout. On any given day, you can tell in which part of your supply chain or store replenishment process your product is located. Part of the challenge is that many systems deployed today don’t do a good job providing visibility tracking all the way through the supply chain. This is partly because of siloed tracking systems and databases that live in a multitude of legal entities.

So why is it that even with all of this tracking, there is still a high percentage of lost product due to fraud?

How can a simple identity tweak improve this situation for retailers and brands while simultaneously chipping away at child exploitation?

Enter the trust protocol we call blockchain, and specifically blockchain self-sovereign identity. The change is to plug the holes in your supply chain by identifying and provably tracking every scan at a waypoint by each device or human being operating in your supply chain. That chain of custody extends from the factory worker who sews the RFID tag into the jeans, to the Distribution Center RFID door reader, to the employees or contractors at the DC. It also covers the entities involved: the factory owner, the contractor that employs the DC workers, and so on. By doing this you create a closed system of verifiably known actors across the supply chain. Blockchain also lets us create a shared single ledger of truth across these independently owned waypoints, driving forward full supply chain inventory transparency. We are using this concept in a number of projects to reduce fraud in other industries. This use case meets my sniff test criteria for blockchain value in a project.

Blockchain can provide value if:

1. There is fraud or lack of trust in the system

2. There are multiple parties to a transaction

This identity tweak takes a corporate system used for tracking goods and transforms it into a system that can reduce fraud, operating costs, and product loss while simultaneously reducing reliance on exploited people, and it opens new opportunities for supply chain financing and insurance.

Should a box of Perry Ellis jeans fall out of the supply chain somewhere, it will immediately be known when it does not reach the next waypoint in the expected time. The forensic investigation will start with the prior transaction that recorded the custody of that item at the prior touch point. A blockchain identity creates the provable record of who last touched the product. This factor holds workers and contracting companies accountable. Reputation gets asserted or attested for that worker and that reputation is rolled up to the Contracting company to create a reputation score for the vendor.
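That trace-back can be sketched as a hash-chained ledger of waypoint scans. This is a minimal illustration only, not the Project Manifest implementation: a real deployment would use signed transactions and self-sovereign identity credentials on a distributed ledger, where here a plain Python list and SHA-256 links stand in.

```python
import hashlib
import json

def record_scan(chain, item_id, waypoint, actor_id):
    """Append a waypoint scan to a hash-chained ledger.

    Each entry embeds the hash of the previous entry, so tampering
    with any earlier scan invalidates every link that follows it.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "item_id": item_id,    # e.g. the RFID tag serial
        "waypoint": waypoint,  # factory, DC door reader, store, ...
        "actor_id": actor_id,  # identity of the scanning worker/device
        "prev_hash": prev_hash,
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every hash and check the links; True if untampered."""
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Because each record embeds the hash of its predecessor, rewriting the actor recorded at any earlier waypoint breaks every later link, so the last verifiable scan pinpoints where custody was lost.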

The result is that supply chain contractors will stop hiring unverified workers or workers with attested high-risk reputations. This system of compliance and rating will hold supply chain workers accountable for their actions, thereby reducing fraud (the corporate benefit) while simultaneously requiring a worker to register a self-sovereign identity that travels with them from job to job. This small tweak to existing RFID deployments, paid for by corporations, will drive undocumented workers from the system over time, thus beginning the process of chipping away at the 1.5B undocumented and exploited people identified by the UN’s Sustainable Development Goal 16.9.

Is this the beginning of an opportunity to offer a “Verifiably Clean Supply Chain Certification” for brands, retailers and ecommerce sites? It is certainly achievable technically. My thesis is even if social responsibility isn’t a big enough reason, there are enough financial reasons to move forward. To this end we have launched Project Manifest and made recent announcements with partners like www.Mojix.com and ID2020.  More details in my next post.
Source: Azure

Failure Anomalies alert now detects dependency failures

We’ve upgraded Smart Detection – Failure Anomalies so that it monitors your web app’s outgoing dependencies and AJAX calls as well as incoming server requests. If you’re monitoring your app with Application Insights, you’ll be notified within minutes if there’s a sudden disruption or degradation in your app’s performance.

Provided your app has a certain volume of traffic, Smart Detection – Failure Anomalies configures itself. It learns your app’s usual background level of failures. It triggers an alert if the rate of failures goes above the learned pattern. The diagnostic information in the alert can help you fix the problem before most users are aware of it.
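Microsoft does not publish the exact algorithm behind Smart Detection, but the learn-a-baseline-then-alert behavior described above can be approximated with a simple statistical sketch (the three-sigma threshold and the function name are assumptions for illustration):

```python
from statistics import mean, stdev

def failure_anomaly(history, current_rate, sigmas=3.0):
    """Flag an anomaly when the current failure rate exceeds the
    learned baseline by `sigmas` standard deviations.

    `history` is a list of per-interval failure rates
    (failed requests / total requests) observed while the
    app behaved normally.
    """
    baseline = mean(history)
    spread = stdev(history) if len(history) > 1 else 0.0
    return current_rate > baseline + sigmas * spread
```

With a learned baseline of roughly 1% failures, a sudden spike to 20% trips the alert, while normal fluctuation around the baseline does not.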

Until recently, Smart Detection monitored only failed incoming requests. (Although you can manually set alerts on a wide variety of metrics.) Now, it also monitors the failure rate of dependency calls – that is, calls that your app makes to external services such as REST APIs or SQL databases. This includes both server-side calls, and AJAX calls from the client side of your app.

Here’s a sample of an alert you might get. [Screenshot: sample Failure Anomalies alert mail]

This upgrade improves your chances of finding a fault quickly, especially if it’s caused by one of the services you depend on. You’ll get the alert even if your app returns a non-failure response to your users.

By default, you get a shorter alert mail than this example, but you can switch to the detailed format by selecting “Get more diagnostics…” in the Failure Anomalies rule settings.

Source: Azure

Collaboration and federation: Azure Service Bus Messaging on-premises futures

Azure Service Bus Messaging is one of the most powerful message brokers, with the deepest feature set available anywhere in public cloud infrastructure today. The global Azure Service Bus broker infrastructure, available in all global Azure regions and the Azure Government cloud, processes nearly 500 billion message transactions per month. Each cluster in these regions is backed by as many as hundreds of compute cores, terabytes of memory, and petabytes of backing storage capacity, far exceeding the cluster deployment scale of any commercial or open source broker you could acquire and run.

As a fully transactional broker that builds on the ISO/IEC standard AMQP 1.0 protocol, Service Bus provides a robust foundation for commercial and financial workloads. It provides strong assurances on message delivery and message retention, with SLA-backed, sustainably achieved availability and reliability rates unmatched in the industry at its functional depth and scale. The Azure Premium Messaging tier provides performance predictability and further enhanced reliability by exclusively reserving processing resources on a per-customer basis inside an environment that provides all the management and cost advantages of cloud scale.

We’re confident to state that Azure Service Bus, delivered from the nearest Azure datacenter over redundant network connectivity, is a choice far superior in terms of cost and reliability to most on-premises messaging cluster installations, even if the core workloads run and remain in an on-premises environment.

Hybrid is the future

The future of hybrid in Azure is twofold. First, we provide world-class services and capabilities with open protocols that can be composed with and leveraged by on-premises services run anywhere. Second, we license the software backing these services for on-premises delivery on top of Azure Stack.

This strategy is also guiding the future for Azure Service Bus Messaging and all other capabilities delivered by the Messaging team, which includes Azure Event Hubs and Azure Relay.

As a consequence, we are announcing today that we will not provide an immediate successor for the standalone Service Bus for Windows Server 1.1 product. Service Bus for Windows Server 1.1 shipped as a free download that could be installed inside and outside of Azure Pack, the precursor to Azure Stack, and it will go out of mainstream support on January 9, 2018.

While we continue to strengthen our commitment to delivering Service Bus, as well as our other messaging technologies, on top of the packaged on-premises Azure technology stack, we will no longer deliver a message broker installable on Windows Server or Windows Client outside of that context.

We came to this decision after a careful analysis of market and community needs and trends, and of where our true technology strengths lie.

After decades of monoculture, there has been a “Cambrian explosion” in messaging platforms. There are many kinds of brokers and messaging libraries that fill many niches. We believe that the breadth of choice customers now have for running messaging middleware on singular special-purpose devices, in communication gateways and the fog, on factory floors, in retail stores, in bank branches, inside building control systems, or inside a delivery truck or a container vessel is very, very exciting.

Microsoft Azure’s strengths lie in building and running robust cloud-scale systems that deal with high-volume, high-velocity, consolidated message flows in and through the cloud, via Azure Service Bus Messaging, Azure Relay, and Azure Event Hubs. We believe that “hybrid” also means collaboration and integration to create a “better together” story of a healthy messaging platform ecosystem that fills all the niches across IT and IoT, and that leverages public cloud as a backplane and integration surface.

Microsoft therefore continues to invest in advancing the AMQP and MQTT protocols in OASIS and working with organizations, such as the OPC Foundation, in vertical industries to establish a solid set of choices for messaging protocol standards. Based on that standards foundation, we are looking forward to collaborating with many vendors and communities to build specialized messaging infrastructure, creating federation bridges and integration into and through Azure and Azure Stack. The timeline for availability of our services on Azure Stack will be announced at a future date.
Source: Azure