Use Google Cloud Client Libraries to store files, save entities, and log data

By Omar Ayoub, Product Manager

To develop a cloud application, you usually need access to online object storage, a scalable NoSQL database, and a logging infrastructure. To that end, Google Cloud Platform (GCP) provides the Cloud Storage API, the Cloud Datastore API, and the Stackdriver Logging API. Better yet, you can now access those APIs via the latest Google Cloud Client Libraries which, we're proud to announce, are now Generally Available (GA) in seven server-side languages: C#, Go, Java, Node.js, PHP, Python and Ruby.

Online object storage

For your object storage needs, the Cloud Storage API enables you, for instance, to upload blobs of data, such as pictures or movies, directly into buckets. To do so in Node.js, for example, you first need to install the Cloud Client Library:

npm install --save @google-cloud/storage
and then simply run the following code to upload a local file into a specific bucket:

const Storage = require('@google-cloud/storage');

// Instantiates a client
const storage = Storage();

// References an existing bucket, e.g. "my-bucket"
const bucket = storage.bucket(bucketName);

// Uploads a local file to the bucket, e.g. "./local/path/to/file.txt"
bucket.upload(fileName)
  .then((results) => {
    const file = results[0];
    console.log(`File ${file.name} uploaded`);
  });
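To make the object model behind this snippet concrete — named buckets holding named blobs of data — here is a minimal in-memory sketch in Python. The Bucket class below is a hypothetical stand-in invented for illustration, not the real google-cloud-storage client API:

```python
import os
import tempfile

# Hypothetical in-memory stand-in for the bucket/blob model --
# for illustration only, not the real google-cloud-storage client.
class Bucket:
    def __init__(self, name):
        self.name = name
        self._blobs = {}  # blob name -> bytes

    def upload(self, filename):
        """Stores the file's contents under its base name, like bucket.upload()."""
        with open(filename, "rb") as f:
            data = f.read()
        blob_name = os.path.basename(filename)
        self._blobs[blob_name] = data
        return blob_name

# Usage: upload a temporary local file into a named bucket.
bucket = Bucket("my-bucket")
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as tmp:
    tmp.write(b"hello")
    path = tmp.name
name = bucket.upload(path)
print(f"File {name} uploaded to {bucket.name}")
```

The real client adds authentication, resumable uploads, and metadata, but the core idea is the same: a bucket maps blob names to stored bytes.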

NoSQL database

With Cloud Datastore, one of our NoSQL offerings, you can create entities, which are structured objects, and save them in GCP so that they can be retrieved or queried by your application at a later time. Here's an example in Java, where you specify the Maven dependency in the following manner:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-datastore</artifactId>
  <version>1.0.0</version>
</dependency>
followed by executing this code to create a task entity:

// Imports the Google Cloud Client Library
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Key;

public class QuickstartSample {
  public static void main(String... args) throws Exception {

    // Instantiates a client
    Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

    // The kind for the new entity
    String kind = "Task";

    // The name/ID for the new entity
    String name = "sampletask1";

    // The Cloud Datastore key for the new entity
    Key taskKey = datastore.newKeyFactory().setKind(kind).newKey(name);

    // Prepares the new entity
    Entity task = Entity.newBuilder(taskKey)
        .set("description", "Buy milk")
        .build();

    // Saves the entity
    datastore.put(task);
  }
}
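The data model at work here — a key made of a kind plus a name, attached to a bag of named properties — can be sketched locally in Python. The classes below are hypothetical stand-ins for illustration, not the real google-cloud-datastore API:

```python
# Hypothetical in-memory sketch of Datastore's key/entity model --
# not the real google-cloud-datastore client.
class Key:
    """Identifies an entity by its kind (e.g. "Task") and name (e.g. "sampletask1")."""
    def __init__(self, kind, name):
        self.kind, self.name = kind, name
    def __hash__(self):
        return hash((self.kind, self.name))
    def __eq__(self, other):
        return (self.kind, self.name) == (other.kind, other.name)

class Entity(dict):
    """An entity is a key plus named properties."""
    def __init__(self, key, **props):
        super().__init__(**props)
        self.key = key

class Datastore:
    def __init__(self):
        self._store = {}
    def put(self, entity):
        # Upserts by key, like datastore.put(task) in the Java snippet.
        self._store[entity.key] = entity
    def get(self, key):
        # Retrieves a previously saved entity by its key.
        return self._store.get(key)

datastore = Datastore()
task_key = Key("Task", "sampletask1")
datastore.put(Entity(task_key, description="Buy milk"))
print(datastore.get(task_key)["description"])  # -> Buy milk
```

The real service adds indexing, queries, and transactions on top of this key-to-entity mapping.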
Logging framework

Our libraries also allow you to easily send log data and events to the Stackdriver Logging API. As a Python developer, for instance, the first step is to install the Cloud Client Library for Logging:

pip install --upgrade google-cloud-logging
Then add the following code to your project (e.g. your __init__.py file):

import logging
import google.cloud.logging
client = google.cloud.logging.Client()
# Attaches a Google Stackdriver logging handler to the root logger
client.setup_logging(logging.INFO)
Then, just use the standard Python logging module to directly report logs to Stackdriver Logging:

import logging
logging.error('This is an error')
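What setup_logging does, in essence, is attach a handler to Python's root logger so that ordinary stdlib logging calls are routed to Stackdriver. The same mechanism can be shown with a plain stdlib handler standing in for the Stackdriver one (the CollectingHandler class is invented for this sketch):

```python
import logging

# A stdlib stand-in for the Stackdriver handler: collects formatted records.
collected = []

class CollectingHandler(logging.Handler):
    def emit(self, record):
        collected.append(self.format(record))

# Attach the handler to the root logger at INFO level,
# analogous to client.setup_logging(logging.INFO).
handler = CollectingHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

# Ordinary logging calls now flow through the attached handler.
logging.error("This is an error")
logging.debug("Below the INFO threshold; not collected")

print(collected)  # -> ['ERROR:This is an error']
```

Because the handler hangs off the root logger, every module that uses the standard logging module reports through it — no Stackdriver-specific code is needed at the call sites.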
We encourage you to visit the client libraries page for Cloud Storage, Cloud Datastore and Stackdriver Logging to learn more about how to get started programmatically with these APIs across all of the supported languages. To see the full list of APIs covered by the Cloud Client Libraries, or to give us feedback, you can also visit the respective GitHub repositories in the Google Cloud Platform organization.
Source: Google Cloud Platform

Using modern data sources in Azure Analysis Services

Just weeks after reaching general availability (GA) with Azure Analysis Services, we are super excited to make Tabular 1400 models available in public preview. Although Tabular 1400 is still a preview feature, it is nevertheless exciting because cloud solutions can now begin to take advantage of all the great features that Analysis Services supports at the 1400 compatibility level, including Detail Rows, Object Level Security, and the modern Get Data experience. For a comprehensive summary, see the blog post 1400 Compatibility Level in Azure Analysis Services.

As far as the modern Get Data experience is concerned, note that there are still some limitations, because SQL Server Data Tools for Analysis Services Tabular (SSDT Tabular), as well as some cloud infrastructure components, specifically the on-premises gateway, are not quite ready yet. With every monthly release, SSDT Tabular closes more feature gaps and supports more data sources, but the work is far from complete. The cloud infrastructure components for accessing modern on-premises data sources are also still in the final stages of testing. So, at this point, only the following cloud-based data sources can be used at the 1400 compatibility level in Azure Analysis Services:

Azure SQL Database: If your business applications rely on Azure SQL DB, you can use Azure Analysis Services to connect to the data and add BI capabilities to your solutions.
Azure SQL Data Warehouse: This massively parallel processing (MPP) cloud-based, scale-out, relational database can provide a foundation for large scale BI solutions based on Azure Analysis Services. Just connect your Tabular 1400 models to Azure SQL DW in import or DirectQuery mode for interactive analysis.
Azure Blob storage: If you want to build large-scale Tabular 1400 models on top of unstructured data, you need a scalable storage solution. With exabytes of capacity, massive scalability, and low cost, Azure Blob storage is a good choice. Note, however, that SSDT Tabular does not yet support advanced mashup capabilities to import file-based data efficiently. For example, combining files in a single table requires support for mashup functions, which is coming soon as part of named expressions.

The next big delivery in SSDT Tabular is support for named expressions, which includes parameters, functions, and shared queries, so that you can build advanced mashups and take full advantage of Azure Blob storage as mentioned above. Then the tool's focus shifts to improving quality, robustness, and performance, all while continuing to add further connectors until parity with Power BI Desktop is achieved. Among other connectors, HDInsight and Azure Data Lake Store are coming up next to increase the number of supported cloud-based data sources.

For on-premises data sources, the plan is to provide connectivity at the 1400 compatibility level in Azure Analysis Services very soon. This requires a new version of the on-premises gateway, which is planned to ship in parallel with the next monthly release of SSDT Tabular. If you want to create Tabular 1400 models that use on-premises data sources and deploy them in Azure Analysis Services, make sure you use that upcoming SSDT Tabular version and deploy that upcoming on-premises gateway.

In the meantime, you can build and test your Tabular 1400 models by using the SSDT Tabular 17.0 (April 2017) release in integrated workspace mode. Give Tabular 1400 a test drive, and as always, please send us your feedback and suggestions by using ProBIToolsFeedback or SSASPrev at Microsoft.com. You can also use any other available communication channels such as UserVoice or MSDN forums. Stay tuned for further announcements when the next monthly release of SSDT Tabular is published together with the on-premises gateway for Azure Analysis Services.
Source: Azure

Mobile phones: Smartphone repairs cheapest in Cologne

An online smartphone repair provider has analyzed statistics from more than 400 repair shops in Germany and found, among other things, large differences in repair prices. Cologne is the cheapest city, while customers in Hamburg pay the most on average. (Smartphone, Study)
Source: Golem