California Lawmakers Asked Tesla To Loosen Its Confidentiality Agreement With Factory Workers

Tesla vehicles are being assembled by robots at Tesla Motors Inc factory in Fremont, California, U.S. on July 25, 2016. REUTERS/Joseph White/File Photo

Jim Tanner / Reuters

California lawmakers sent a letter to Tesla last month asking the company to loosen its employee confidentiality agreement. Signed by five state assembly members, the letter expressed concerns that Tesla’s policy might violate state and federal labor laws by preventing workers from communicating about wages and working conditions.

“We are concerned that over-broad language in the confidentiality agreement violates these provisions and has resulted in a chilling effect on workers’ ability to engage in protected activity,” the assembly members’ letter to Tesla, dated Jan. 10, reads. “As we are confident that this was not your intention, we respectfully request that Tesla revise this policy to protect employee rights and comply with the law, and immediately communicate this clarification to all workers.”

The letter comes amid ongoing unionization talk by employees at Tesla’s Fremont auto factory. Employees have been communicating with United Auto Workers union officials for nearly a year, according to reports. United Auto Workers did not immediately respond to a request for comment.

In a Jan. 17 response to the assembly members, Tesla’s general counsel said the company reminded employees about their confidentiality agreement after “a rash of unauthorized leaks to the press and social media.” Tesla included an “acknowledgement” in the letter to employees that “unless otherwise allowed by law,” workers would be held to the confidentiality contract. Todd Maron, Tesla’s general counsel, said the National Labor Relations Act would fall into that category.

“Rather than overwhelm them with a complicated legal document that is incomprehensible to lay people, we set out to use plain language, writing in a brief, plain-spoken manner that is respectful of the legal rights of our employees and fully compliant with state and federal laws,” Maron wrote. “Note that the Acknowledgement is clearly not intended to prohibit employees from discussing concerns about wages or working conditions whether amongst themselves or with third parties.”

Here’s what the acknowledgement said, according to Maron’s letter: “Unless otherwise allowed by law…you must not, for example, discuss confidential information with anyone outside of Tesla, take or post photos or make video or audio recordings inside Tesla facilities, forward work emails outside of Tesla or to a personal email account, or write about your work in any social media, blog, or book.”

On Thursday, a man claiming to work in Tesla’s Fremont factory — where the company is gearing up to begin production on the $35,000 Model 3 — published a Medium post called “Time for Tesla to Listen.” In it, he wrote that he and other workers had begun conversations with United Auto Workers about unionizing, “but at the same time, management actions are feeding workers’ fears about speaking out.”

“I often feel like I am working for a company of the future under working conditions of the past,” Jose Moran wrote. “Most of my 5,000-plus coworkers work well over 40 hours a week, including excessive mandatory overtime…We need better organization in the plant, and I, along with many of my coworkers, believe we can achieve that by coming together and forming a union.”

In a statement provided to BuzzFeed News, Tesla said, “As California’s largest manufacturing employer and a company that has created thousands of quality jobs here in the Bay Area, this is not the first time we have been the target of a professional union organizing effort such as this. The safety and job satisfaction of our employees here at Tesla has always been extremely important to us. We have a long history of engaging directly with our employees on the issues that matter to them, and we will continue to do so because it’s the right thing to do.”

Quelle: BuzzFeed

AWS Server Migration Service is now available in new regions

AWS Server Migration Service (SMS) is now available to customers in the AWS US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Asia Pacific (Mumbai) regions. AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations. 
Quelle: aws.amazon.com

AWS Service Catalog Introduces Support for YAML-formatted AWS CloudFormation Templates

You can now create Service Catalog Products using YAML-formatted AWS CloudFormation templates (YAML Version 1.1) in addition to JSON-formatted templates. This allows template creators to choose the format they feel most comfortable working in to describe AWS infrastructure. YAML-formatted CloudFormation templates follow the same anatomy as existing JSON-formatted templates.
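For illustration, a minimal YAML-formatted template might look like the following sketch; the logical resource name and bucket naming are hypothetical, but the anatomy (format version, description, resources) is the same as in JSON-formatted templates:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example template (hypothetical resource names)
Resources:
  ExampleBucket:               # logical ID, same anatomy as a JSON template
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'example-bucket-${AWS::AccountId}'
```

The `!Sub` short-form intrinsic function is one of the conveniences YAML adds over the equivalent JSON `{"Fn::Sub": ...}` syntax.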
Quelle: aws.amazon.com

Amazon WorkSpaces now supports interforest trusts with AWS Microsoft AD for easier user and directory management

Amazon WorkSpaces now allows you to integrate with your on-premises Microsoft Active Directory using an interforest trust with the AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also called AWS Microsoft AD. By establishing a single interforest trust relationship, you can assign Amazon WorkSpaces for users in any of your on-premises domains. AWS Microsoft AD automatically discovers and routes authentication requests to the correct domain controller, which means that your users can use their existing Microsoft Active Directory credentials to log in to their WorkSpaces, without having to specify their domain name.
Quelle: aws.amazon.com

Azure Backup and Azure Site Recovery now available in UK

We’re pleased to announce that Azure Backup and Azure Site Recovery are now available in the UK.

Azure Backup – The Azure-based service you can use to back up (or protect) and restore your data in the Microsoft cloud. Azure Backup enables Azure IaaS VM backup and can replace your existing on-premises or off-site backup solution with a cloud-based solution that is reliable, secure, and cost-competitive. Learn more about Azure Backup.

Azure Site Recovery – Contributes to your BCDR strategy by orchestrating replication of on-premises virtual machines and physical servers. You replicate servers and VMs from your primary on-premises datacenter to the cloud (Azure), or to a secondary datacenter. Learn more about Azure Site Recovery.

We are excited about these new Azure Services, and invite customers using these Azure regions to try them today!
Quelle: Azure

Bletchley – The Cryptlet Fabric & Evolution of blockchain Smart Contracts

Anatomy of a Smart Contract

The concept of a Smart Contract has been around for a while and is largely attributed to Nick Szabo’s work in the late 1990s. However, it remained an abstract concept until the summer of 2015, when the Frontier release of Ethereum provided its first implementation. The promise of Smart Contracts is sprawling and has gotten the attention of every industry as a revolutionary disrupter that can change the way business is done forever. That remains to be seen, but like most first implementations of significant new technology, there are some early lessons learned and some introspection about how improvements can be made.

I have written a paper that describes at a high level how Smart Contracts are implemented today and how they can be refactored to significantly improve their performance, security, scalability, manageability, versioning, and reuse in the near future. This paper describes the thought process and historical context for a new architectural approach that focuses on separation of concerns and implementing a 3 Layered/Tiered Smart Contract architecture.

To understand the context and exactly what a “3-Layered & Tiered” Smart Contract architecture means please give this paper a read, Anatomy of a Smart Contract.

If you want the short and sweet answer it is this: Smart Contracts designed for semi-trusted enterprise consortium networks should be separated into 3 main layers:

Data Layer – The definition of data schema and only the data logic for validation of inserts (appends) and optimization of reads. In platforms like Ethereum or Chain, languages like Solidity and Ivy can be used at this layer. This is similar to how relational databases use the SQL language and stored procedures.
Business Layer – All business logic for Smart Contracts and surface level APIs for interacting with Smart Contracts from the Presentation layer (UI) or other external applications. Cryptlets written in any language targeting the runtimes supported by the Cryptlet Fabric. (.NET, .NET Core, JVM, native)
Presentation Layer – User Interface platforms and other applications built on using the exposed APIs by Cryptlets.

These Layers can then be deployed, optimized, and scaled in their respective tiers: Presentation, Middle (Business), and Data tiers.

*Note, this approach is not generally valid for trustless implementations of Smart Contracts, but targeted at enterprise consortium blockchains.
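To make the Data Layer idea concrete, here is a minimal hypothetical sketch in Solidity (which the list above names as a data-layer language); the contract name, fields, and validation rule are invented for illustration. The on-chain code only defines the schema and validates appends, while all business logic would live off-chain in a Contract Cryptlet:

```solidity
pragma solidity ^0.4.0;

// Hypothetical data-layer contract: schema definition plus append validation only.
// Business logic would live off-chain in the Business Layer (a Contract Cryptlet).
contract TradeLedger {
    struct Trade { address submitter; uint price; uint timestamp; }
    Trade[] public trades;

    function appendTrade(uint price) public {
        require(price > 0);                          // data-level validation of the insert
        trades.push(Trade(msg.sender, price, now));  // append-only storage
    }
}
```

This mirrors the stored-procedure analogy: the contract exposes only insert validation and read access, not business rules.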

The Cryptlet Fabric – Update

As we get closer to the first public preview of the Cryptlet Fabric, I thought it was a good time to quickly highlight what blockchain enthusiasts, professionals and developers should be looking forward to. The Cryptlet Fabric is designed to be the middle tier in the new proposed 3-Layered & Tiered Smart Contract architecture, so it will deliver things you would expect from such an implementation.  It manages scale, failover, caching, monitoring, management…a long list of features, but what is new:

Cryptographic primitives, secure execution enclaves and a runtime secure key (secrets) platform that allows for the creation, persistence and dynamic reanimation for performing cryptographically secure operations with these keys at scale.

This allows Cryptlets to create, store, and use key pairs in a secure execution environment to do all sorts of “blockchain-ey” things like digital signatures, zero-knowledge proofs, ring signatures, threshold signatures, and even homomorphic encryption, all within secure enclaves. Cryptlets come in two basic types, utility and contract, which I covered in some detail in Cryptlets in Depth.

The Cryptlet Fabric provides a blockchain router layer, abstracting away different transaction messages and cryptographic signing from the code implemented in Cryptlets, as well as integration with existing systems. For example, a Cryptlet’s output contains only the information to be sent to the blockchain, like market data or business logic results, and is packaged into the actual blockchain-specific transaction by the blockchain router and blockchain-platform-specific providers in the Cryptlet Fabric below it.  This encapsulation technique has been used for decades, allowing technologies like TCP/IP to work on all sorts of networks (LAN, WAN, Internet and Mobile) and all kinds of applications. This is what allows Cryptlets to be reused across different types of blockchains.

Utility Cryptlets (oracles)

Ok, I admit that being a Microsoft guy makes it hard for me to call anything an oracle without associating it with the company. The fact of the matter is that Utility Cryptlets can largely be thought of as a more scalable, standard, secure, and discoverable way to write blockchain oracles. The majority of requests that we get for Cryptlets are initially in this category: providing a secure, attested data source for blockchain applications. You will be able to write cryptlet oracles using the language of your choice targeting (initially) .NET, .NET Core, JVM and bare-metal runtimes, so C#, Java, C++, F#, VB, etc. Your cryptlet oracle can publish any type of data you want, including:

Market prices
Package delivery notifications
Workflow next steps
Weather alerts
Counterparty updates, like a credit downgrade

and define your subscription parameters like:

Time and date
Event driven triggers from the blockchain or listening to external sources
Conditional – if “this” and “that” are true; evaluation combinations such as if, else if, switch, etc.
Combination – if the time is 4PM EST & the NYSE was open today then send me the LIBOR and the price of x

As a developer or provider of cryptlet oracles you can publish them as libraries into the Azure Marketplace for discovery and acquiring customers. We expect to see a robust catalog of Cryptlet libraries from our partners big and small.

However, there is more to Utility Cryptlets than just providing data. You can use them to expose a secure channel into your blockchain based applications for other applications and user interfaces. An ERP or CRM system can read data from the blockchain without having to know any of the details, and they can also securely fire data and events into the blockchain to ease integration of blockchains into existing enterprise infrastructure. Cryptlets can also provide services to blockchain applications and analytics platforms by triggering events into existing systems by watching blockchain transactions and issuing an index request or data pull.

I’m sure that customers and partners will find many uses for Utility Cryptlets, but the first and most obvious is for blockchain oracles.

Contract Cryptlets

These Cryptlets are key to fully realizing the refactoring of Smart Contracts into a 3-Layered/Tiered architecture. They differ from Utility Cryptlets in many ways. Utility Cryptlets can have many instances at once, each handling many different subscriptions at the same time, whereas Contract Cryptlets are instantiated singly, in pairs, or in rings, and are bound to a specific Smart Contract. Contract Cryptlets contain the business logic, rules, and external API for executing a Smart Contract between counterparties.  In a private, semi-trusted consortium blockchain network where there are strong identities or trust between counterparties, the Smart Contract logic does not need to exist “on-chain” or be executed by every node on the network. Separating concerns between the data layer on the blockchain and the business and presentation layers above it isolates code between layers, abstracting implementations for reuse and optimization.

Contract Cryptlets are themselves digitally signed and run within an enclave that attests that the code they contain ran unaltered without tampering in private.  This allows for the code in the Cryptlet to be private between counterparties and even be encrypted to protect intellectual property.  Counterparties can write and review their own Contract Cryptlets and all inspect and verify the code before compiling and packaging for implementation, or they can choose from certified Contract Cryptlets that a vendor or reputation stands behind. 

Contract Cryptlets can be instantiated for each counterparty participating in a specific Smart Contract instance.  These Cryptlet Pairs and Rings are optional, but allow each Contract Cryptlet to hold secrets, like private keys for signing or encryption, belonging to that counterparty, avoiding co-mingling the secrets of counterparties in the same address space.  This capability allows for more advanced transaction scenarios, like encrypting results to be stored on the blockchain using threshold or ring encryption schemes.

Contract Cryptlets can use private keys and secrets they have permissions to, fetched from the fabric’s “secret” services.  These secrets are controlled completely by the counterparty via Azure Key Vault, and remain inaccessible to other participants, including Microsoft.

This approach allows code defining a Smart Contract’s business and integration logic to be run on scaled up resources and collocated near data for maximum performance.  The data that they persist to the blockchain is still subject to the data logic “on-chain” and is reconciled on the network through its consensus process.  The digital signatures from the Cryptlet, enclave and blockchain provider are used for validation and attestation, and can be stored along with each transaction on the blockchain as proofs.

Contract Cryptlets also use Utility Cryptlet services, however they can interact directly within the fabric via a subscription or direct API access.

Using Cryptlets allows for a three layered architecture where:

The Smart Contract on the blockchain defines schema and data logic, which is deployed to the blockchain network representing the Data Tier.
Contract Cryptlets define business logic and a Surface Level API for User Interfaces and external applications deployed to the Cryptlet Fabric representing the Middle Tier.
Utility Cryptlets provide reusable data sources and events also in the Middle Tier.
The presentation tier communicates with the Surface Level API of the Cryptlet Fabric using standard UI technologies or integration platforms deployed to web servers, mobile devices, service bus, etc.

This architectural approach provides abstraction of the blockchain implementation from its clients, as well as tuning and scaling the blockchain and Cryptlet Fabric independently.

Cryptlet Fabric Diagram

Here is an updated diagram of the Cryptlet Fabric.  We will cover its parts in more detail in subsequent posts and white papers, which will also be included in its releases.

Quelle: Azure

Automating Azure Analysis Services processing with Azure Functions

In this post, we’ll walk through a simple example on how you can setup Azure Functions to process Azure Analysis Services tables.

Azure Functions is perfect for running small pieces of code in the cloud. To learn more about Azure Functions, see Azure Functions Overview and Azure Functions pricing.

Create an Azure Function

To get started, we first need to create an Azure Function

1. Go to the portal and create a new Function App.

2. Type a unique Name for your new function app, choose the Resource Group and Location. For the Hosting plan use App Service Plan.

Note: As the duration of processing Analysis Services tables and models may vary, use a Basic or Standard App Service Plan and make sure that the Always On setting is turned on, otherwise the Function may time out if the processing takes longer.

Click Create to deploy the Function App.

3. In the Quickstart tab, click Timer and C#, and then click Create this function.

Configure timer settings

Now that we’ve created our new function, we need to configure some settings. First, let’s configure a schedule.

1. Go to the Integrate > Timer > Schedule.

The default schedule has a CRON expression for every 5 minutes.

Change this to any setting you would like. In the example below I used an expression to trigger the function at 3AM every day. Click Documentation to see a description and some examples for CRON expressions.
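Azure Functions timer triggers use a six-field CRON expression of the form {second} {minute} {hour} {day} {month} {day-of-week}. For reference, the two schedules mentioned here look like this:

```
0 */5 * * * *    run every 5 minutes (the default)
0 0 3 * * *      run once a day at 3:00 AM (the schedule used in this example)
```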

2. Click Save.

Configure application settings

Before we begin writing our code, we need to configure the application.

Important: Make sure you have the latest data providers installed on your computers. To get more info and download, see Data providers for connecting to Azure Analysis Services.

After installing the providers, you’ll need these two files in the next step:

C:\Program Files\Microsoft SQL Server\<version>\SDK\Assemblies\Microsoft.AnalysisServices.Core.dll
C:\Program Files\Microsoft SQL Server\<version>\SDK\Assemblies\Microsoft.AnalysisServices.Tabular.dll

1. In the new Azure Function, go to Function app settings > Go to Kudu, to open the debug console.

2. Navigate to the function in D:\home\site\wwwroot\yourfunctionname, and then create a folder named bin.

3. Navigate to the newly created bin folder, and then drop in the two files specified in the previous step. It should look like this:

4. Refresh your browser. In Develop > bin, you should see the two files in it (If you don’t see the file structure, click View files).

5. Before we write our code, we need to create a connection string. In Function app settings, click Configure app settings.

6. Scroll to the end of the Application settings view, to the Connection strings section, and then create a Custom connection string.

Provider=MSOLAP;Data Source=asazure://region.asazure.windows.net/servername; Initial Catalog=dbname;User ID=user@domain.com;Password=pw

Add code

Now that we have our function’s configuration settings in place, we can enter the code. You’ll need to reference the DLLs we uploaded, but other than that it looks like any other .NET code.

Note: In this example, I included some commented lines to only process a table or the model.

"Microsoft.AnalysisServices.Tabular.DLL"

r "Microsoft.AnalysisServices.Core.DLL"

r "System.Configuration"

using System;

using System.Configuration;

using Microsoft.AnalysisServices.Tabular;

public static void Run(TimerInfo myTimer, TraceWriter log)

{

    log.Info($"C# Timer trigger function started at: {DateTime.Now}");  

    try

            {

                Microsoft.AnalysisServices.Tabular.Server asSrv = new Microsoft.AnalysisServices.Tabular.Server();

                var connStr = ConfigurationManager.ConnectionStrings["AzureASConnString"].ConnectionString;

                asSrv.Connect(connStr);

                Database db = asSrv.Databases["AWInternetSales2"];

                Model m = db.Model;

                //db.Model.RequestRefresh(RefreshType.Full);     // Mark the model for refresh

                //m.RequestRefresh(RefreshType.Full);     // Mark the model for refresh

                m.Tables["Date"].RequestRefresh(RefreshType.Full);     // Mark only one table for refresh

                db.Model.SaveChanges();     //commit  which will execute the refresh

                asSrv.Disconnect();

            }

            catch (Exception e)

            {

                log.Info($"C# Timer trigger function exception: {e.ToString()}");

            }

    log.Info($"C# Timer trigger function finished at: {DateTime.Now}"); 

}

 

Click Save to save the changes, and then click Run to test the code. You’ll get an output window where you will be able to see the log information and exceptions.

Learn more on Azure Analysis Services and Azure Functions.
Quelle: Azure

Azure IoT Hub Connector to Cassandra now available

As part of Microsoft’s ongoing commitment to open source and interoperability in IoT, I’m pleased to announce the release of a new open source project to enable devices that are connected to Azure IoT Hub to store data in a Cassandra database. The code can be found on GitHub.

With this new Cassandra connector, developers can easily build solutions that harness IoT-scale fleets of devices and store data from them in Cassandra tables for later analysis. The library can also be fully customized, if needed. Developers can define the schema of Cassandra tables and specify how data should be stored, whether based on the type of the message or by splitting the incoming data into multiple tables for easier analysis. In addition to the Cassandra connector, we’ve also released a Docker container to make the deployment and testing of the new library a matter of few minutes.
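As a sketch of what such a developer-defined schema could look like, a device-telemetry table might be declared in CQL as follows; the keyspace, table, and column names here are hypothetical illustrations, not the connector's actual defaults:

```sql
-- Hypothetical schema: one partition per device, rows clustered by event time
-- so that recent readings for a device can be read efficiently.
CREATE TABLE iot.telemetry (
    device_id   text,
    event_time  timestamp,
    temperature double,
    PRIMARY KEY (device_id, event_time)
) WITH CLUSTERING ORDER BY (event_time DESC);
```

Splitting different message types into separate tables of this shape is one way to use the customization the library describes.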

We invite you to follow us on GitHub as we release more libraries, samples and demo applications in the coming months.
Quelle: Azure