Facebook’s Workplace App Tries To Lure Away Slack Users

Facebook has launched Workplace, a paid chat system for employees to communicate with each other. The service strongly resembles regular Facebook.

Workplace requires a separate login, so you do not need an existing Facebook account to use it. Users create profiles with photos and other information just as they would on their personal profiles.

The app, which can be downloaded separately from the App Store or installed on a desktop, includes group chat, a news feed, live video, and direct messaging. It also supports voice and video calling. Employees can chat with one another by joining groups or DMing but will not see their outside-of-work friends in their Workplace news feeds, which only aggregate posts from professional conversations on the Workplace app. Employees can also follow certain groups or their coworkers to receive updates from them.


Facebook has priced Workplace at $3 per month per user for companies with up to 1,000 users, $2 per user for up to 10,000 employees, and $1 for more than that. It’s free for nonprofits and educational institutions. According to Workplace’s website, there are no long-term contracts associated with using the service, nor will the service display ads.
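As a rough sketch, the tiered pricing can be turned into a small calculator. This assumes the tiers apply marginally (the first 1,000 active users at $3, the next 9,000 at $2, and the remainder at $1); the article does not spell out the tier semantics, so treat that as an assumption:

```python
def workplace_monthly_cost(active_users):
    """Estimate the monthly Workplace bill, assuming marginal tiers:
    $3/user for the first 1,000 users, $2/user for the next 9,000,
    and $1/user beyond 10,000. The marginal interpretation is an
    assumption, not stated in the article."""
    cost = 0
    cost += 3 * min(active_users, 1_000)                    # first tier
    cost += 2 * min(max(active_users - 1_000, 0), 9_000)    # second tier
    cost += 1 * max(active_users - 10_000, 0)               # remainder
    return cost

print(workplace_monthly_cost(500))     # 1500
print(workplace_monthly_cost(5_000))   # 11000
print(workplace_monthly_cost(20_000))  # 31000
```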

Why did Facebook launch an app for work? The company said in a press release that it saw an opening with the new ways people work: “The workplace is about more than just communicating between desks within the walls of an office. Some people spend their entire workday on the go, on their mobile phone. Others spend all day out in the field, or on the road.” The company repeatedly addressed the press release to “anyone” or “any company” and highlighted the app’s already-global reach.


By contrast, Slack, Workplace’s most visible competitor, offers some services for free, a “Standard” version for $6.70 per user, and a “Plus” version for $12.50 a user. Slack also bills itself as a service for “your small- to medium-sized company or team.” And a service called Slack for Enterprise, presumably for larger companies, is in the works, according to the company’s website. Even beyond Slack, office chat technology is a crowded market; Workplace is also competing against Microsoft’s Yammer, Salesforce’s Chatter, and HipChat.

Reactions on social media were mixed.

A source at Facebook said that the company will not use data collected from Workplace to target users with advertising, and that companies can control their own data via Facebook’s data API. According to Workplace’s website, Facebook is in the process of certifying the product under the EU-US Privacy Shield Framework to help companies in the EU comply with EU data transfer requirements.

Recode reports that the social network has been beta testing Workplace for the past two years with roughly a thousand companies. Facebook originally said the service would be live at the end of 2015, also according to Recode.

Quelle: BuzzFeed

Azure IoT Gateway SDK integrates support for Azure Functions

At Microsoft, we believe the edge of the network plays a critical role in IoT, not only in IoT devices themselves but also in intermediate field gateways.  Earlier this year, we announced the open source Azure IoT Gateway SDK, our approach to accelerating the development of IoT edge scenarios such as supporting legacy devices, minimizing latency, conserving network bandwidth, and addressing security concerns.  Since then, we’ve been busy improving and enhancing our SDK completely out in the open.

Today, we are happy to announce an exciting new capability we’ve added to the IoT Gateway SDK: Support for Azure Functions.  With Azure Functions integration, developers can easily call cloud-based logic from their IoT gateway. Just write an Azure Function and you can quickly call it from a Function Module in the Azure IoT Gateway SDK.

For example: if something goes wrong in your field gateway environment, such as local devices that can’t connect or misbehave, and you want to upload diagnostic information to your Azure IoT solution for inspection by operations, our new Functions integration makes this simple. Just create an Azure Function that takes this data, stores it, and alerts operations – and then call it from your gateway running the Azure IoT Gateway SDK when you encounter a problem.
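Calling an HTTP-triggered Azure Function from gateway-side code amounts to a JSON POST. A minimal sketch, assuming a hypothetical function URL and payload shape (neither is part of the SDK's documented API):

```python
import json
import urllib.request

def build_diagnostic_payload(device_id, error, detail):
    """Package local gateway diagnostics as JSON for upload."""
    return json.dumps({
        "deviceId": device_id,
        "error": error,
        "detail": detail,
    })

def report_to_function(function_url, payload):
    """POST the diagnostics to an HTTP-triggered Azure Function.
    The URL (including any function key) is assumed to be provided
    by the operator."""
    req = urllib.request.Request(
        function_url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

payload = build_diagnostic_payload(
    "sensor-42", "connect-timeout", "device unreachable for 120s")
print(json.loads(payload)["error"])  # connect-timeout
```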

The Azure IoT Gateway SDK supports everything from low-level modules written in C to connect the broad variety of deployed devices, to high level modules for productivity such as our new Azure Functions support.  The best part of the Azure IoT Gateway SDK is how easy it is to chain these modules together to create reusable processing pipelines that suit your needs.
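Pipelines like this are declared in a JSON configuration that names each module and links them into a chain. A sketch of the general shape, with module names, paths, and arguments that are illustrative rather than taken from the SDK documentation:

```json
{
  "modules": [
    {
      "module name": "sensor",
      "module path": "./modules/ble/libble.so",
      "args": null
    },
    {
      "module name": "function_uploader",
      "module path": "./modules/azure_functions/libazure_functions.so",
      "args": {
        "hostname": "myfunctions.azurewebsites.net",
        "relativePath": "api/ReportDiagnostics"
      }
    }
  ],
  "links": [
    { "source": "sensor", "sink": "function_uploader" }
  ]
}
```

Each entry in "links" routes messages from one module's output to another's input, which is what makes the modules composable into reusable pipelines.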

We’ve already seen some great success stories from businesses benefitting from our approach to edge intelligence, and we’re looking forward to seeing what our customers and partners will create with this exciting new capability.
Quelle: Azure

Azure compliance white paper-o-rama

Following the national and regional regulations of the countries your business operates in is not an easy task, yet it is an absolute necessity as businesses across all industries see their customer bases expand geographically. Whether you’re a business or an organization operating within the boundaries of a single country or across the globe, you can confidently move to the cloud and still maintain alignment with regional and international requirements. To help our customers understand how to deploy in Azure while successfully interpreting US and international governance requirements, we produced a series of documents that you can draw on during your cloud adoption journey.

The following white papers include guidance for US law enforcement, US education, UK G-cloud, and Cloud services in Germany, Malaysia, New Zealand, Singapore, and Australia. These papers shed light on the nuances we want our customers to be aware of when interacting with government or regional authorities as it relates to adopting Azure cloud services.
 
Here’s a short summary of our most recently produced white papers:

 The CJIS Implementation Guidelines for Azure Government, Office 365 Government, Dynamics CRM Online Government white paper is designed to provide insight into the Criminal Justice Information Services (CJIS) security controls applicable to Microsoft Cloud services, and provide guidance to law enforcement agencies on where to access detailed information to assist in CJIS audits. This document provides guidelines and resources to assist CJIS Systems Agencies (CSA) and law enforcement agencies (LEA) in implementing and utilizing Microsoft Government Cloud features, which meet the applicable CJIS certification standards and are consistent with FBI CJIS Security Policy.
The FERPA Implementation Guide for Microsoft Azure white paper helps educational organizations that are considering a move to Azure and are looking for guidance in designing and operating solutions that incorporate security controls to help them meet their compliance challenges. This paper provides insight into how Microsoft meets its compliance obligations on the platform and presents best practices and security principles that are aligned to the Family Educational Rights and Privacy Act (FERPA), International Organization for Standardization (ISO) 27001, Microsoft’s Security Development Lifecycle (SDL), and operational security for online security.  
The Microsoft Cloud Germany for commercial customers in the European Union (EU) and European Free Trade Association (EFTA) white paper provides guidance on how to store and manage customer data in compliance with applicable German laws and regulations as well as key international standards. By leveraging the Microsoft-developed data trustee model, which enables European customers to move to the cloud, EU and EFTA customers can achieve compliance while utilizing Azure cloud services.
The Microsoft Azure Compliance in the context of Malaysia Security and Privacy Requirements white paper addresses Malaysian regional compliance matters in the context of Malaysia Security and Privacy Requirements. Read this white paper to learn more about the questions faced by customers in Malaysia who are considering a move to the cloud.
The Microsoft Azure Compliance in the context of New Zealand Security and Privacy Requirements white paper is written for IT decision makers in New Zealand who are considering whether to move their data to Microsoft Azure. This paper addresses questions like: Does Microsoft Azure meet New Zealand’s compliance requirements? Where is data stored and who can access it? What is Microsoft doing to protect data? How can a customer verify that Microsoft is doing what it says? New Zealand organizations in need of meeting compliance requirements can read this paper to learn about Azure key security and privacy principles that will enable them to meet their compliance goals.
The Microsoft Azure Compliance in the context of Australia Security and Privacy Requirements white paper is written for Australian organizations looking to navigate their country-specific security and privacy requirements. Protecting data, monitoring and securing access, and meeting customer promises are achieved by Azure through implementing security and privacy principles, enabling Australian customers to leverage our cloud offerings with confidence. 
The Microsoft Azure Compliance in the context of Singapore Security and Privacy Requirements white paper addresses the Singapore standards Multi-Tier Cloud Security (MTCS) and how Microsoft complies with the Singapore Personal Data Privacy Act (PDPA). This means both government and commercial customers can have confidence knowing they comply with Singapore legislative and certification requirements when deploying data to the cloud.
The 14 Cloud Security Controls for UK cloud using Microsoft Azure white paper provides customers with strategies for moving their services to Azure while meeting their UK obligations mandated by CESG/NCSC. UK customers can learn how Azure can be used to help address the 14 controls outlined in the cloud security principles. This paper also outlines how customers can move faster and achieve more while saving money as they adopt Azure cloud services.

These white papers represent a set of new guidance created to help customers understand local laws and governance issues, and provide insight into the local regulatory requirements when deploying to the cloud. Check out these papers as well as other useful guidance on the Microsoft Trust Center.
Quelle: Azure

Helm Charts: making it simple to package and deploy common applications on Kubernetes

There are thousands of people and companies packaging their applications for deployment on Kubernetes. This usually involves crafting a few different Kubernetes resource definitions that configure the application runtime, as well as defining the mechanism that users and other apps use to communicate with the application. There are some very common applications that users regularly look for guidance on deploying, such as databases, CI tools, and content management systems. These types of applications are usually not ones that are developed and iterated on by end users; rather, their configuration is customized to fit a specific use case. Once such an application is deployed, users can link it to their existing systems or leverage its functionality to solve their pain points.

For best practices on how these applications should be configured, users could look at the many resources available, such as the examples folder in the Kubernetes repository, the Kubernetes contrib repository, the Helm Charts repository, and the Bitnami Charts repository. While these different locations provided guidance, it was not always formalized or consistent enough that users could leverage similar installation procedures across different applications.

So what do you do when there are too many places for things to be found?

xkcd: Standards

In this case, we’re not creating Yet Another Place for Applications, but rather promoting an existing one as the canonical location. As part of the Special Interest Group Apps (SIG Apps) work for the Kubernetes 1.4 release, we began to provide a home for these Kubernetes-deployable applications that provides continuous releases of well documented and user friendly packages. These packages are being created as Helm Charts and can be installed using the Helm tool. Helm allows users to easily templatize their Kubernetes manifests and provide a set of configuration parameters that allows users to customize their deployment.
Helm is the package manager (analogous to yum and apt) and Charts are packages (analogous to debs and rpms). The home for these Charts is the Kubernetes Charts repository, which provides continuous integration for pull requests, as well as automated releases of Charts in the master branch. There are two main folders where charts reside. The stable folder hosts those applications which meet minimum requirements such as proper documentation and inclusion of only Beta or higher Kubernetes resources. The incubator folder provides a place for charts to be submitted and iterated on until they’re ready for promotion to stable, at which time they will automatically be pushed out to the default repository. For more information on the repository structure and requirements for being in stable, have a look at this section in the README.

The following applications are now available:

Stable repository: Drupal, Jenkins, MariaDB, MySQL, Redmine, Wordpress
Incubating repository: Consul, Elasticsearch, etcd, Grafana, MongoDB, Patroni, Prometheus, Spark, ZooKeeper

Example workflow for a Chart developer

1. Create a chart. The developer provides parameters via the values.yaml file, allowing users to customize their deployment. This can be seen as the API between chart devs and chart users.
2. Write a README to help describe the application and its parameterized values.
3. Once the application installs properly and the values customize the deployment appropriately, add a NOTES.txt file that is shown as soon as the user installs. This file generally points out the next steps for the user to connect to or use the application.
4. If the application requires persistent storage, add a mechanism to store the data such that pod restarts do not lose data. Most charts requiring this today are using dynamic volume provisioning to abstract away underlying storage details from the user, which allows a single configuration to work across Kubernetes installations.
5. Submit a Pull Request to the Kubernetes Charts repo.
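As a sketch of that values.yaml contract, here is what a chart for a hypothetical "myapp" application might expose, and how a template would consume it (all names and values are illustrative, not from any published chart):

```yaml
# values.yaml for a hypothetical "myapp" chart: the contract between
# chart developer and chart user (all names and values are illustrative)
image: myorg/myapp:1.2.3
replicaCount: 2
service:
  type: LoadBalancer
  port: 8080
persistence:
  enabled: true
  size: 8Gi
---
# templates/deployment.yaml (excerpt, shown as comments): Helm substitutes
# the values at install time, e.g. `helm install --set replicaCount=3 ./myapp`
# spec:
#   replicas: {{ .Values.replicaCount }}
#   ...
#       image: {{ .Values.image }}
```

A user who is happy with the defaults never has to read the templates; the values file is the whole surface area.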
Once tested and reviewed, the PR will be merged. Once merged to the master branch, the chart will be packaged and released to Helm’s default repository and made available for users to install.

Example workflow for a Chart user

1. Install Helm.
2. Initialize Helm.
3. Search for a chart:

$ helm search
NAME              VERSION  DESCRIPTION
stable/drupal     0.3.1    One of the most versatile open source content m…
stable/jenkins    0.1.0    A Jenkins Helm chart for Kubernetes.
stable/mariadb    0.4.0    Chart for MariaDB
stable/mysql      0.1.0    Chart for MySQL
stable/redmine    0.3.1    A flexible project management web application.
stable/wordpress  0.3.0    Web publishing platform for building blogs and …

4. Install the chart:

$ helm install stable/jenkins

After the install:

Notes:
1. Get your 'admin' user password by running:

  printf $(printf '%o' `kubectl get secret --namespace default brawny-frog-jenkins -o jsonpath="{.data.jenkins-admin-password[*]}"`);echo

2. Get the Jenkins URL to visit by running these commands in the same shell:

  **** NOTE: It may take a few minutes for the LoadBalancer IP to be available. ****
  **** You can watch the status by running 'kubectl get svc -w brawny-frog-jenkins' ****

  export SERVICE_IP=$(kubectl get svc --namespace default brawny-frog-jenkins -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:8080/login

3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit here.

Conclusion

Now that you’ve seen workflows for both developers and users, we hope that you’ll join us in consolidating the breadth of application deployment knowledge into a more centralized place. Together we can raise the quality bar for both developers and users of Kubernetes applications. We’re always looking for feedback on how we can better our process. Additionally, we’re looking for contributions of new charts or updates to existing ones.
Join us in the following places to get engaged:
SIG Apps – Slack Channel
SIG Apps – Weekly Meeting
Submit a Kubernetes Charts Issue

A big thank you to the folks at Bitnami, Deis, Google and the other contributors who have helped get the Charts repository to where it is today. We still have a lot of work to do, but it’s been wonderful working together as a community to move this effort forward.

– Vic Iglesias, Cloud Solutions Architect, Google

Download Kubernetes
Get involved with the Kubernetes project on GitHub
Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates
Quelle: kubernetes

Continuous delivery tops IT execs’ priority list

Innovation is undoubtedly all around us, from advancements in AI to quantum computing. Who wouldn’t want to capitalize on the value of digital transformation?
But what’s truly driving these big moves? It’s due in no small part to the ability of IT organizations to speed software delivery.
Enterprise Management Associates’ latest research report, derived from a survey of 600 executives conducted in October 2015, lists best practices for DevOps and continuous delivery at high-performing companies. It likewise evaluates the role of automation and release management tools in promoting digital transformation.
According to the report, businesses are indeed making the connection between accelerated delivery of software services and business growth. In fact, they are overwhelmingly making “automation of the continuous delivery process” their top technology-related initiative for supporting digital transformation this year.
This sentiment bodes well for automated release management solutions such as IBM UrbanCode Deploy, which helps companies reduce — if not completely eliminate — the potential pitfalls typically associated with the software deployment process.
Not just IT’s problem
As the momentum of innovation ramps up, IT and departments focused on business transformation are increasingly reliant on DevOps and continuous delivery.
What may be surprising is that the drivers for continuous delivery are not purely an IT problem to solve. In many cases, they are business and consumer-related, according to EMA, which has been spearheading research on these topics over the past two years.
Companies that have been able to accelerate software delivery by 10 percent are more likely to double their revenue growth than companies that aren’t focused on delivery frequency, EMA Research Director Julie Craig points out in a summary video accompanying the report.

Craig points out:
If you aren’t able to deliver software faster, then your competition is going to continue to outpace you in terms of growth. It’s when you get automation in place that you can seamlessly deliver software releases in a way that supports speed at scale, and both speed and scale support quality.
More key findings

Ninety-seven percent of respondents have DevOps teams within their companies, and 60 percent of the personnel on those teams are dedicated employees.

Companies in which DevOps interactions were rated as excellent or above average were 11.5 times more likely to have double-digit revenue growth than those who rated these interactions as average or poor.

Production troubleshooting was the biggest bottleneck to accelerating continuous delivery.

Download the 17-page report summary, “Automating for Digital Transformation: Tools-Driven DevOps and Continuous Software Delivery in the Enterprise.”
 
The post Continuous delivery tops IT execs’ priority list appeared first on news.
Quelle: Thoughts on Cloud

Introducing a new era of customer support: Google Customer Reliability Engineering

Posted by Dave Rensin, Director of Google Customer Reliability Engineering (CRE)

In the 25 years that I’ve been in technology nearly everything has changed. Computers have moved out of the labs and into our pockets. They’re connected together 24/7 and the things we can do with them are starting to rival our most optimistic science fiction.

Almost nothing looks the same as it did back then — except customer support. Support is (basically) still people in call centers wearing headsets. In this new world, that old model just isn’t enough.

We want to change that.

Last week, we announced a brand new profession at Google: Customer Reliability Engineering, or CRE. The mission of this new role is to create shared operational fate between Google and our Google Cloud Platform customers, to give you more control over the critical applications you’re entrusting to us, and share a fate greater than money.

Reducing customer anxiety

When you look out at organizations adopting cloud, you can’t help but notice high levels of anxiety.

It took me a while to figure out a reasonable explanation, but here’s where I finally landed:

Humans are evolutionarily disposed to want to control our environment, which is a really valuable survival attribute. As a result, we don’t react well when we feel like we’re losing that control. The higher the stakes, the more forcefully we react to the perceived loss.

Now think about the basic public cloud business model. It boils down to:

Give up control of your physical infrastructure and (to some extent) your data. In exchange for that uncertainty, the cloud will give you greater innovation, lower costs, better security and more stability.

It’s a completely rational exchange, but it also pushes against one of our strongest evolutionary impulses. No wonder people are anxious.

The last several years have taught me that many customers will not eat their anxieties in exchange for lower prices — at least not for long. This is especially true in cloud because of the stakes involved for most companies. There have already been a small number of high-profile companies going back on-prem because the industry hasn’t done enough to recognize this reality.

Cloud providers ignore this risk at their own peril and addressing this anxiety will be a central requirement to unlock the overwhelming majority of businesses not yet in the cloud.

The support mission
The support function in organizations used to be pretty straightforward: answer questions and fix problems quickly and efficiently. Over time, much of the entire IT support function has been boiled down to FAQs, help centers, checklists and procedures.

In the era of cloud technology, however, this is completely wrong.

Anxious customers need empathy, compassion and humanity. You need to know that you’re not alone and that we take you seriously. You are, after all, betting your businesses on our platforms and tools.

There’s only one true and proper mission of a support function in this day and age:

              Drive Customer Anxiety -> 0

People who aren’t feeling anxious don’t spend the time and effort to think seriously about leaving a platform that’s working for them. The decision to churn starts with an unresolved anxiety.

Anxiety = 1 / Reliability
It seems obvious to say that the biggest driver of customer anxiety is reliability.

Here’s the non-obvious part, though.

Cloud customers don’t really care about the reliability of their cloud provider — you care about the reliability of your production application. You only indirectly care about the reliability of the cloud in which you run.

The reliability of an application is the product of two things:
(1) The reliability of the cloud provider
(2) The reliability inherent in the design, code and operations of your application

Item (1) is a pretty well understood problem in the industry. There are thousands of engineers employed at the major cloud vendors that focus exclusively on it.

Here at Google we pioneered a whole profession around it: Site Reliability Engineering (SRE).

We even wrote a book!

What about item (2)? Who’s worried about the reliability inherent in the design, implementation and operation of your production application?

So far, just you.

The standard answer in the industry is:
Here are some white papers, best practices and consultants. Don’t do silly things and your app will be mostly fine.
As an industry, we’re asking you to bet your livelihoods on our platforms, to let us be your business partner and to give up big chunks of control. And in exchange for that we’re giving you . . . whitepapers.

No wonder you’re anxious. You should be!

No matter how much innovation, speed or scale your cloud provider gives you, this arrangement will always feel unbalanced — especially at 3am when something goes wrong.

Perhaps you think I’m overstating the case?

Just a few months ago Dropbox announced that it was leaving their public cloud provider to go back on-prem. They’ve spoken at length about their decision making process around this choice and have expressed a strong desire to more fully “control their own destiny.” The cumulative weight of their loss of control just got to be too much. So they left.

SRE 101

The idea behind Google CRE comes from the decade-long journey of Google SRE. I realize you might not be familiar with the history of SRE, so let me spend a couple paragraphs to catch you up . . .
. . .  there were two warring kingdoms — developers and operations.

The developers were interested in building and shipping interesting and useful features to users. The faster the innovation, the better. In the developer tribe’s perfect world there would never be a break in the development and deployment of new and awesome products.

The operations kingdom, on the other hand, was concerned with the reliability of the systems being shipped, because they were the ones getting paged at 3am when something went down. Once the system became stable they’d rather never ship anything new again since 100% of new bugs come from new code.

For decades these kingdoms warred and much blood was spilled. (OK. Not actual blood, but the emails could get pretty testy . . . )

Then, one day this guy had an idea.

Benjamin Treynor Sloss, VP, 24×7, Father of SRE

He realized that the underlying assumptions of this age old conflict were wrong and recast the problem into an entirely new notion — the error budget.

No system you’re likely to build (except maybe a pacemaker) needs to be available 100% of the time. Users have lots of interruptions they never notice because they’re too busy living their lives.

It therefore follows that for nearly all systems there’s a very small (but nonzero) acceptable quantity of unavailability. That downtime can be thought of as a budget. As long as a system is down less than its budget it is considered healthy.

For example, let’s say you need a system to be available 99.9% of the time (three nines). That means it’s OK for the system to be unavailable 0.1% of the time (for any given 30-day month, that’s 43 minutes).

As long as you don’t do anything that causes the system to be down more than 43 minutes you can develop and deploy to your heart’s content. Once you blow your budget, however, you need to spend 100% of your engineering time writing code that fixes the problem and generally makes your system more stable. The more stable you make things, the less likely you are to blow your error budget next month and the more new features you can build and deploy.
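The error-budget arithmetic above is easy to check; here is a small sketch (the three-nines target and 30-day month come from the text):

```python
def error_budget_minutes(availability, days=30):
    """Allowed downtime per period, in minutes, for a given
    availability target expressed as a fraction (e.g. 0.999)."""
    return (1 - availability) * days * 24 * 60

# A 99.9% ("three nines") target over a 30-day month allows
# roughly 43 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))  # 43.2
```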

In short, the error budgets align the interests of the developer and operations tribes and create a virtuous circle.

From this, a new profession was born: Site Reliability Engineering (SRE).

At Google, there’s a basic agreement between SREs and developers.

The SREs will accept the responsibility for the uptime and healthy operation of a system if:
The system (as developed) can pass a strict inspection process — known as a Production Readiness Review (PRR)
The development team who built the system agrees to maintain critical support systems (like monitoring) and be active participants in key events like periodic reviews and postmortems
The system does not routinely blow its error budget

If the developers don’t maintain their responsibilities in the relationship then the SREs “offboard” the system. (And hand back the pagers!)

This basic relationship has helped create a culture of cooperation that has led to both incredible reliability and super fast innovation.

The Customer Reliability Engineering mission

At Google, we’ve decided we need a similar approach with our customers.

CRE is what you get when you take the principles and lessons of SRE and apply them towards customers.

The CRE team deeply inspects the key elements of a customer’s critical production application — code, design, implementation and operational procedures. We take what we find and put the application (and associated teams) through a strict Production Readiness Review (PRR).

At the end of that process we’ll tell you: “here are the reliability gaps in your system. Here is your error budget. If you want more nines here are the changes you should make.”

We’ll also build common system monitoring so that we can have mutually agreed upon telemetry for paging and tickets.

It’ll be a lot of hard work on your part to get past our PRR, but in exchange for the effort you can expect the following:
Shared paging. When your pagers go off, so will ours.
Auto-creation and escalation of Priority 1 tickets
CRE participation in customer war rooms (because despite everyone’s best efforts, bad things will inevitably happen)
A Google-reviewed design and production system

Additional Cost: $0

Wait . . . that’s a lot of value. Why aren’t we charging money for it?

The most important lever SREs have in Google is the ability to hand back the pagers. It’s the same thing with CREs. When a customer fails to keep up their end of the work with timely bug fixes, participation in joint postmortems, good operational hygiene etc., we’ll “hand back the pagers” too.

Please note, however, that $0 is not the same as “free.” Achieving Google-class operational rigor requires a sustained commitment on your part. It takes time and effort. We’ll be there on the journey, but you still need to walk the path. If you want some idea of what you’re signing up to, get a copy of the Site Reliability Engineering book and ask yourself how willing you are to do the things it outlines.

It’s fashionable for companies to tell their customers that “we’re in this together,” but they don’t usually act the part.

People who are truly “in it together” are accountable to one another and have mutual responsibilities. They work together as a team for a common goal and share a fate greater than the dollars that pass between them.

This program won’t be for everyone. In fact, we expect that the overwhelming majority of customers won’t participate because of the effort involved. We think big enterprises betting multi-billion dollars businesses on the cloud, however, would be foolish to pass this up. Think of it as a de-risking exercise with a price tag any CFO will love.

Lowering the anxiety with a new social contract

Over the last few weeks we’ve been quietly talking to customers to gauge their interest in the CRE profession and our plans for it. Every time we do, there’s a visible sigh, a relaxing of the shoulders and the unmistakable expression of relief on people’s faces.

Just the idea that Google would invest in this way is lowering our customers’ anxiety.

This isn’t altruism, of course. It’s just good business. These principles and practices are a strong incentive for a customer to stay with Google. It’s an affinity built on human relations instead of technical lock-in.

By driving inherent reliability into your critical applications we also increase the practical reliability of our platform. That, in turn, lets us innovate faster (a thing we really like to do).

If you’re a cloud customer, this is the new social contract we think you deserve.

If you’re a service provider looking to expand and innovate your cloud practice, we’d like to work with you to bring these practices to scale.

If you’re another cloud provider, we hope you’ll join us in growing this new profession. It’s what all our customers truly need.
Quelle: Google Cloud Platform

Database collation support for Azure SQL Data Warehouse

We’re excited to announce that you can now change the default database collation from the Azure portal when you create a new Azure SQL Data Warehouse database. This new capability makes it even easier to create a new database using one of the 3800 supported database collations for SQL Data Warehouse. Collations provide the locale, code page, sort order and character sensitivity rules for character-based data types. Once chosen, all columns and expressions requiring collation information inherit the chosen collation from the database setting. The default inheritance can be overridden by explicitly stating a different collation for a character-based data type.
Changing the collation
To change the default collation, you simply update the Collation field in the provisioning experience. For example, to make the default collation case sensitive, you would change the Collation from SQL_Latin1_General_CP1_CI_AS to SQL_Latin1_General_CP1_CS_AS.
Listing all supported collations
To list all of the collations supported in Azure SQL Data Warehouse, you can connect to the master database of your logical server and run the following command:
SELECT * FROM sys.fn_helpcollations();
This will return all of the supported collations for Azure SQL Data Warehouse. You can learn more about the sys.fn_helpcollations function on MSDN.
Checking the current collation
To check the current collation for the database, you can run the following T-SQL snippet:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS Collation;
When passed ‘Collation’ as the property parameter, the DatabasePropertyEx function returns the current collation for the database specified. You can learn more about the DatabasePropertyEx function on MSDN.
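The column-level override mentioned in the overview can be stated directly in DDL. A hedged sketch (the table, column names, and distribution choice are hypothetical):

```sql
-- Hypothetical table: the explicit COLLATE clause on ProductCode
-- overrides the database default, making comparisons on that
-- column case sensitive even in a case-insensitive database.
CREATE TABLE dbo.Products
(
    ProductId   int NOT NULL,
    ProductCode varchar(32) COLLATE SQL_Latin1_General_CP1_CS_AS NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN);
```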
Learn more
Check out the many resources for learning more about SQL Data Warehouse, including:

What is Azure SQL Data Warehouse?
SQL Data Warehouse best practices
Video library
MSDN forum
Stack Overflow forum
Quelle: Azure

Introducing the 2016 Future of Cloud Computing Survey – Join the cloud conversation

North Bridge, a leading venture capital firm, and Wikibon, a worldwide community of practitioners, technologists and consultants dedicated to improving technology adoption, have partnered to launch the sixth annual Future of Cloud Computing Survey.

Microsoft participates in this survey regularly because your feedback on cloud computing is important to us and the industry. We want to hear about your plans for cloud, where it is making an impact across your organization, and what cloud technologies and capabilities you are prioritizing in your business.

We invite you to be among the first to TAKE THE SURVEY and share it with your network. By doing so you will help all of us in the industry get a better view on what customers are doing with cloud computing and identify emerging trends.

Results of the survey will be announced later this year and we will be back here to share the findings with you in November.

We look forward to hearing from you!
Quelle: Azure