The Netherlands passes controversial "tapping law"
Security agencies in the Netherlands will soon be allowed to monitor a large share of internet traffic carried over cables and store the data for up to three years.
Source: Heise Tech News
This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure.
In this month’s update, we begin with an industry first by scaling Git beyond what anyone else thought possible. After discussing more improvements in Git, we’ve got a brand new built-in wiki now in public preview. We also have improvements in build, release, package management, and work item tracking. There’s a lot of new stuff, so let’s dive in.
World’s largest Git repo is on VSTS: 3.5M files and 300 GB!
As part of a big effort at Microsoft to modernize engineering systems, which we call One Engineering System (1ES), we set an audacious goal of moving the Windows code base into Git a few years ago. We tried a couple of different approaches, including submodules, before deciding that the best approach was to scale Git by virtualizing the repo. This spring we accomplished our goal when the Windows team moved the entire Windows code base into a single Git repo hosted on VS Team Services. With nearly 4,000 engineers working in a single Git repo called “OS,” Windows is in a single version control repository for the first time in decades. To achieve this, we created Git Virtual File System (GVFS) which we’ve also released as an open source project so that anyone can use it with VSTS.
The scale that Windows development operates at is really amazing. Let’s look at some numbers.
250,000+ reachable Git commits in the repo's history over the past 4 months
8,421 pushes per day (on average)
2,500 pull requests, with 6,600 reviewers per work day (on average)
4,352 active topic branches
1,760 official builds per day
We’ve already significantly improved performance from the first release of GVFS. Along the way, we’ve also made performance and scale improvements in Git, and we are contributing those to the Git project. Any account on VSTS can use GVFS, so feel free to try it out.
Since we’re talking about Git, let’s take a look at the improvements we’ve made in the experience.
Collapsible pull request comments
Reviewing code is a critical part of the pull request experience, so we’ve added new features to make it easier for reviewers to focus on the code. Code reviewers can easily hide comments to get them out of the way when reviewing new code for the first time.
Hiding comments hides them from the tree view and collapses the comment threads in the file view:
When comments are collapsed, they can be expanded easily by clicking the icon in the margin, and then collapsed again with another click. Tooltips make it easy to peek at a comment without seeing the entire thread.
Improved workflow when approving pull requests with suggestions
Using the auto-complete option with pull requests is a great way to improve your productivity, but it shouldn’t cut short any active discussions with code reviewers. To better facilitate those discussions, the Approve with suggestions vote will now prompt when a pull request is set to complete automatically. The user will have the option to cancel the auto-complete so that their feedback can be read, or keep the auto-complete set and allow the pull request to be completed automatically when all policies are fulfilled.
Filter tree view in Code
Now you don't need to scroll through all the files that a commit may have modified just to get to your files. The tree view on the commit details, pull request, shelveset details, and changeset details pages now supports file and folder filtering. This is a smart filter that shows the child files of a folder when you filter by folder name, and shows a collapsed tree view of the file hierarchy when you filter by file name.
Find a file or folder filter on commit tree:
Git tags
Our Git tags experience in the VSTS web UI continues to evolve quickly. In addition to improvements to viewing, you can also delete, filter, and set security on tags.
View tags
You can view all the tags on your repository on the Tags page. If you manage all your tags as releases, the Tags page gives you a bird's-eye view of all the product releases.
You can easily differentiate between a lightweight and an annotated tag here, as annotated tags show the tagger and the creation date alongside the associated commit, while lightweight tags only show the commit information.
Delete tags
Sometimes you need to delete a tag from your remote repo, whether because of a typo in the tag name or because you tagged the wrong commit. You can delete tags from the web UI by clicking the context menu of a tag on the Tags page and selecting Delete tag.
Filtering tags
The number of tags can grow significantly with time. Some repositories may have tags created in hierarchies, which can make finding tags difficult.
If you can't find the tag you're looking for, simply search for the tag name using the filter at the top of the Tags page.
Tags security
Now you can grant granular permissions to users of the repo to manage tags. You can give users the permission to delete tags or manage tags.
New Wiki experience in public preview
For quite a while we’ve wanted to have a built-in wiki. I’m happy to announce that each project now has its own wiki. Help your team members and other users understand, use, and contribute to your project. Learn more about it in our announcement blog post and check out the docs. Oh, and one more thing. It fully supports emoji, so have fun with it!
Building with the latest Visual Studio
We’re changing the model for handling different versions of Visual Studio. Due to architectural, storage, and performance limitations, we’re no longer going to offer multiple versions of Visual Studio on a single hosted build machine. For details on the history and rationale for these changes, see Visual Studio Team Services VS Hosted Pools.
In this release you’ll see the following changes:
You must now explicitly select a queue when you create a build definition (no default).
To make it easier, we’re moving the default queue to the Tasks tab, in the Process section.
The Visual Studio Build and MSBuild tasks now default to the Latest setting for the version argument.
Coming soon you'll see more changes. For example, the following hosted pools (and corresponding queues) will be available:
Hosted VS2017
Hosted VS2015
Hosted Deprecated (previously called “Hosted Pool”)
Hosted Linux Preview
Chef: Infrastructure as code
Chef is now available in the Visual Studio Team Services Marketplace! If you’re not familiar with Chef, they offer an infrastructure automation platform with a slick custom development kit allowing you to “turn your infrastructure into code.” In their words, “Infrastructure described as code is flexible, versionable, human-readable, and testable.” The Chef team wrote their own extensive blog post about this release, and I encourage you to check that out as well.
The Chef extension adds six new Build & Release tasks for configuring Chef Automate.
The tasks in this extension automate the common activities you perform when interacting with the Chef Automate platform. For a detailed look at setup and configuration, check out the getting started guide on GitHub. The tasks in the extension that are typically used as part of the build process are:
Update cookbook version number: Allows you to take your current build number and set the version of a Chef cookbook with that version prior to uploading.
Upload cookbook to Chef Server: Allows you to specify a path containing a cookbook from within your repo, and have it uploaded to your Chef Server, along with all prerequisites if you have specified them.
The tasks typically used as part of your Release process are:
Add variables to Chef Environment: Allows you to copy a set of VSTS Release Management variables for your environment over to a specified Chef environment.
Release cookbook version to environment: This task allows you to specify a version ‘pin’ for a Chef cookbook in a particular environment. You can use this task in a Release Pipeline to ‘release’ cookbooks to that environment.
Execute InSpec: Execute InSpec on machines in a Deployment Group.
Execute Chef Client: Execute Chef Client on machines in a Deployment Group.
We are happy to have Chef join the Team Services extension ecosystem, so take your infrastructure to the next level and give them a shot.
Control releases to an environment based on the source branch
A release definition can be configured to trigger a deployment automatically when a new release is created, typically after a build of the source succeeds. However, you may want to deploy only builds from specific branches of the source, rather than when any build succeeds.
For example, you may want all builds to be deployed to Dev and Test environments, but only specific builds deployed to Production. Previously you were required to maintain two release pipelines for this purpose, one for the Dev and Test environments and another for the Production environment.
Release Management now supports the use of artifact filters for each environment. This means you can specify the releases that will be deployed to each environment when the deployment trigger conditions, such as a build succeeding and creating a new release, are met. In the Trigger section of the environment Deployment conditions dialog, select the artifact conditions such as the source branch and tags for builds that will trigger a new deployment to that environment.
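To make the filtering behavior concrete, here is a minimal Python sketch of per-environment artifact conditions. The dictionary shapes and field names (`source_branch`, `tags`) are hypothetical illustrations, not the actual Release Management data model:

```python
def should_deploy(build, filters):
    """Decide whether a release created from `build` may auto-deploy to an
    environment, given that environment's artifact filters.

    `filters` is a hypothetical dict, e.g.
    {"source_branch": "refs/heads/master", "tags": {"verified"}}.
    An empty filters dict means every successful build qualifies.
    """
    branch = filters.get("source_branch")
    if branch and build["source_branch"] != branch:
        return False  # build came from a branch this environment doesn't accept
    required_tags = filters.get("tags", set())
    if not required_tags.issubset(build.get("tags", set())):
        return False  # build is missing a tag this environment requires
    return True
```

In this sketch, a Dev environment would carry no filters and accept every build, while a Production environment would filter on the master branch (and, optionally, a tag).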
In addition, the Release Summary page now contains a pop-up tip that explains why each "not started" deployment is in that state and suggests how or when the deployment will start.
Release Triggers for Git repositories as an artifact source
Release Management now supports configuring a continuous deployment trigger for Git repositories linked to a release definition in any of the team projects in the same account. This lets you trigger a release automatically when a new commit is made to the repository. You can also specify a branch in the Git repository for which commits will trigger a release. This also means that you can link GitHub and Team Foundation Git repositories as artifact sources to a release definition, and then trigger releases automatically for applications such as Node.js and PHP that are not generated from a build.
On-demand triggering of automated tests
The Test hub now supports triggering automated test cases from test plans and test suites. Running automated tests from the Test hub can be set up similarly to the way you run tests in a scheduled fashion in Release Environments. You will need to set up an environment in the release definition using the Run automated tests from test plans template and associate it with the test plan to run the automated tests. See the documentation for step-by-step guidance on how to set up environments and run automated tests from the Test hub.
Securely store files like Apple certificates
We’ve added a general-purpose secure files library to the Build and Release features. Use the secure files library to store files such as signing certificates, Apple Provisioning Profiles, Android Keystore files, and SSH keys on the server without having to commit them to your source repository.
The contents of secure files are encrypted and can only be used during build or release processes by referencing them from a task. Secure files are available across multiple build and release definitions in the team project based on security settings. Secure files follow the Library security model.
We’ve also added some Apple tasks that leverage this new feature:
Utility: Install Apple Certificate
Utility: Install Apple Provisioning Profile
Consume secrets from an Azure Key Vault as variables
We have also added first-class support for integrating with Azure Key Vault by linking variable groups to Key Vault secrets. This means you can manage secret values completely within Azure Key Vault without changing anything in VSTS (for example, rotate passwords or certificates in Azure Key Vault without affecting release).
To enable this feature in the Variable Groups page, use the toggle button Link secrets from an Azure key vault as variables. After configuring the vault details, choose +Add and select the specific secrets from your vault that are to be mapped to this variable group.
After you have created a variable group mapped to Azure Key Vault, you can link it to your release definitions, as documented in Variable groups.
Note that it’s just the secret names that are mapped to the variable group variables, not the values. The actual values (the latest version) of each secret will be used during the release.
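The name-versus-value split above can be sketched in a few lines of Python. The `vault_versions` mapping is a hypothetical stand-in for Azure Key Vault, not a real client API:

```python
def resolve_linked_variables(secret_names, vault_versions):
    """Resolve a variable group linked to a vault at release time.

    Only the secret *names* are stored in the variable group; the latest
    *value* of each secret is looked up when the release runs, so rotating
    a secret in the vault requires no change in VSTS.

    `vault_versions` maps each secret name to its list of versions,
    oldest first (a simplified stand-in for Key Vault).
    """
    return {name: vault_versions[name][-1] for name in secret_names}
```

For example, after a password rotation adds a new version in the vault, the next release automatically picks up the rotated value.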
Package build task updates
We’ve made comprehensive updates to the NuGet, npm, Maven, and dotnet build tasks, including fixes to most of the issues logged in the vsts-tasks repo on GitHub.
New unified NuGet task
We've combined the NuGet Restore, NuGet Packager, and NuGet Publisher tasks into a unified NuGet build task to align better with the rest of the build task library; the new task uses NuGet 4.0.0 by default. Accordingly, we've deprecated the old tasks, and we recommend moving to the new NuGet task as you have time. This change coincides with a wave of improvements outlined below that you'll only be able to access by using the combined task.
As part of this work, we’ve also released a new NuGet Tool Installer task that controls the version of NuGet available on the PATH and used by the new NuGet task. So, to use a newer version of NuGet, just add a NuGet Tool Installer task at the beginning of your build.
npm build task updates
Whether you're building your npm project on Windows, Linux, or Mac, the new npm build task has you covered. We have also reorganized the task to make both npm install and npm publish easier. For install and publish, we have simplified credential acquisition so that credentials for registries listed in your project's .npmrc file can be safely stored in a service endpoint. Alternatively, if you're using a VSTS feed, a picker lets you select the feed, and we then generate a .npmrc with the requisite credentials for the build agent to use.
Working outside your account/collection
It’s now easier to work with feeds outside your VSTS account, whether they’re Package Management feeds in another VSTS account or TFS server, or non-Package Management feeds like NuGet.org/npmjs.com, Artifactory, or MyGet. Dedicated Service Endpoint types for NuGet, npm, and Maven make it easy to enter the correct credentials and enable the build tasks to work seamlessly across package download and package push operations.
Maven and dotnet now support authenticated feeds
Unlike NuGet and npm, the Maven and dotnet build tasks did not previously work with authenticated feeds. We’ve added all the same goodness outlined above (feed picker, working outside your account improvements) to the Maven and dotnet tasks so you can work easily with VSTS/TFS and external feeds/repositories and have a consistent experience across all the package types supported by Package Management.
Mobile work item form general availability
The mobile experience for work items in Visual Studio Team Services is now out of preview! We have a full end-to-end experience that includes an optimized look and feel for work items and provides an easy way to interact with items that are assigned to you, that you’re following, or that you have visited or edited recently from your phone.
Extension of the month: Product Plan
Communicating the big picture helps align everyone to the team goals and empowers more people to notice when something may not be lining up. So I am happy to announce that our partners at ProductPlan have brought their roadmap solution to the VSTS Marketplace.
ProductPlan provides an easy way to plan and communicate your product strategy. Get started with a 30-day free trial.
Easily drag and drop bars, milestones, containers, and lanes to build beautiful roadmaps in minutes.
Update your plans on-the-fly.
Securely share with individuals, your whole team, or the entire company – for free. Easily print and export to a PDF, image, or spreadsheet.
Use the Planning Board to score your initiatives objectively.
Capture future opportunities in a central location with the Parking Lot.
Expand lanes and containers to tailor the amount of detail you share.
View multiple roadmaps in a Master Plan to understand your entire product portfolio at a glance.
As always, there’s even more in our sprintly release announcements. Check out the June 1st and June 22nd announcements for the full list of features. Be sure to subscribe to the DevOps blog to keep up with the latest plans and developments for VSTS.
Happy coding!
Source: Azure
This blog post was authored by the Microsoft Cognitive Services Team.
Microsoft Cognitive Services enables developers to augment the next generation of applications with the ability to see, hear, speak, understand, and interpret needs using natural methods of communication.
Today, we are excited to announce several service updates:
We are launching the Bing Entity Search API, a new service available in free preview that makes it easy for developers to build experiences that leverage the power of the Bing knowledge graph with more engaging contextual experiences. Tap into the power of the web to search for the most relevant entities such as movies, books, famous people, and US local businesses, and easily provide primary details and information sources about them.
Microsoft Cognitive Services Lab’s Project Prague is now available. Project Prague lets you control and interact with devices using gestures to have a more intuitive and natural experience.
Presentation Translator, a Microsoft Garage project, is now available for download. It provides presenters the ability to add subtitles to their presentations in real time, in the same language for accessibility scenarios or in another language for multi-language situations. With customized speech recognition, presenters have the option to customize the speech recognition engine (English or Chinese) using the vocabulary within the slides and slide notes to adapt to jargon, technical terms, product, place names, etc. Presentation Translator is powered by the Microsoft Translator live feature, built on the Translator APIs of Microsoft Cognitive Services.
Let’s take a closer look at what these new APIs and services can do for you.
Bring rich knowledge of people, places, things and local businesses to your apps with Bing Entity Search API
As announced today, Bing Entity Search API is a new addition in our already existing set of Microsoft Cognitive Services Search APIs, including Bing Web Search, Image Search, Video Search, News Search, Bing Autosuggest, and Bing Custom Search. This API lets you search for entities in the Bing knowledge graph and retrieve the most relevant entities and primary details and information sources about them. This API also supports searching for local businesses in the US. It helps developers easily build apps that harness the power of the web and delight users with more engaging contextual experiences.
Get started
To get started today, let’s get a free preview subscription key on the Try Cognitive Services webpage.
After getting the key, I can start sending entity search queries to Bing. It’s as simple as sending the following query:
GET https://api.cognitive.microsoft.com/bing/v7.0/entities?q=mount+rainier HTTP/1.1
Ocp-Apim-Subscription-Key: 123456789ABCDE
X-MSEdge-ClientIP: 999.999.999.999
X-Search-Location: lat:47.60357;long:-122.3295;re:100
Host: api.cognitive.microsoft.com
The request must specify the q query parameter, which contains the user's search term, and the Ocp-Apim-Subscription-Key header. For location aware queries like restaurants near me, it’s important to also include the X-Search-Location and X-MSEdge-ClientIP headers.
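The raw request above can be assembled programmatically. A minimal Python sketch follows; the endpoint and header names come from the sample request, while the function name and parameters are illustrative (the placeholder key must be replaced with a real subscription key before anything is actually sent):

```python
import urllib.parse

def build_entity_request(query, subscription_key, lat=None, lon=None, radius_m=100):
    """Assemble the URL and headers for a Bing Entity Search v7.0 call."""
    url = ("https://api.cognitive.microsoft.com/bing/v7.0/entities?"
           + urllib.parse.urlencode({"q": query}))
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    if lat is not None and lon is not None:
        # Location headers improve results for queries like "restaurants near me".
        headers["X-Search-Location"] = f"lat:{lat};long:{lon};re:{radius_m}"
    return url, headers

url, headers = build_entity_request(
    "mount rainier", "123456789ABCDE", lat=47.60357, lon=-122.3295)
```

The returned `url` and `headers` can then be passed to any HTTP client to issue the GET request.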
For more information about getting started, see the documentation page Making your first entities request.
The response
The following shows the response to the Mount Rainier query.
{
"_type" : "SearchResponse",
"queryContext" : {
"originalQuery" : "mount rainier"
},
"entities" : {
"queryScenario" : "DominantEntity",
"value" : [{
"contractualRules" : [{
"_type" : "ContractualRules/LicenseAttribution",
"targetPropertyName" : "description",
"mustBeCloseToContent" : true,
"license" : {
"name" : "CC-BY-SA",
"url" : "http://creativecommons.org/licenses/by-sa/3.0/"
},
"licenseNotice" : "Text under CC-BY-SA license"
},
{
"_type" : "ContractualRules/LinkAttribution",
"targetPropertyName" : "description",
"mustBeCloseToContent" : true,
"text" : "en.wikipedia.org",
"url" : "http://en.wikipedia.org/wiki/Mount_Rainier"
},
{
"_type" : "ContractualRules/MediaAttribution",
"targetPropertyName" : "image",
"mustBeCloseToContent" : true,
"url" : "http://en.wikipedia.org/wiki/Mount_Rainier"
}],
"webSearchUrl" : "https://www.bing.com/search?q=Mount%20Rainier…",
"name" : "Mount Rainier",
"image" : {
"name" : "Mount Rainier",
"thumbnailUrl" : "https://www.bing.com/th?id=A21890c0e1f…",
"provider" : [{
"_type" : "Organization",
"url" : "http://en.wikipedia.org/wiki/Mount_Rainier"
}],
"hostPageUrl" : "http://upload.wikimedia.org/wikipedia…",
"width" : 110,
"height" : 110
},
"description" : "Mount Rainier, Mount Tacoma, or Mount Tahoma is the highest…",
"entityPresentationInfo" : {
"entityScenario" : "DominantEntity",
"entityTypeHints" : ["Attraction"],
"entityTypeDisplayHint" : "Mountain"
},
"bingId" : "9ae3e6ca-81ea-6fa1-ffa0-42e1d78906"
}]
}
}
For more information about consuming the response, please refer to the documentation page Searching the Web for entities and places.
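As a quick illustration of consuming a response shaped like the sample above, this Python sketch pulls the name and description out of the first dominant entity (the field names mirror the JSON shown; the helper itself is hypothetical):

```python
def dominant_entity(response):
    """Return (name, description) of the first DominantEntity in an
    entity-search response dict, or (None, None) if there isn't one."""
    for entity in response.get("entities", {}).get("value", []):
        info = entity.get("entityPresentationInfo", {})
        if info.get("entityScenario") == "DominantEntity":
            return entity.get("name"), entity.get("description")
    return None, None
```

Applied to the Mount Rainier response, this yields the entity name "Mount Rainier" and its Wikipedia-sourced description.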
Try it now
Don’t hesitate to try it by yourself by going to the Entities Search API Testing Console.
Create more natural user experiences with gestures – Project Prague
Project Prague is a cutting edge, easy-to-use SDK that helps developers and UX designers incorporate gesture-based controls into their apps. It enables you to quickly define and implement customized hand gestures, creating a more natural user experience.
The SDK enables you to define your desired hand poses using simple constraints built with plain language. Once a gesture is defined and registered in your code, you will get a notification when your user does the gesture, and can select an action to assign in response.
Using Project Prague, you can enable your users to intuitively control videos, bookmark webpages, play music, send emojis, or summon a digital assistant.
Let's say that I want to create a new gesture, "RotateRight", to control my app. First, I need to ensure that I have the required hardware and software; please refer to the requirements section for more information. Intuitively, when performing the "RotateRight" gesture, a user would expect some object in the foreground application to be rotated right by 90°. We have used this gesture to trigger the rotation of an image in a PowerPoint slideshow.
The following code demonstrates one possible way to define the "RotateRight" gesture:
var rotateSet = new HandPose("RotateSet", new FingerPose(new[] { Finger.Thumb, Finger.Index }, FingerFlexion.Open, PoseDirection.Forward),
new FingertipPlacementRelation(Finger.Index, RelativePlacement.Above, Finger.Thumb),
new FingertipDistanceRelation(Finger.Index, RelativeDistance.NotTouching, Finger.Thumb));
var rotateGo = new HandPose("RotateGo", new FingerPose(new[] { Finger.Thumb, Finger.Index }, FingerFlexion.Open, PoseDirection.Forward),
new FingertipPlacementRelation(Finger.Index, RelativePlacement.Right, Finger.Thumb),
new FingertipDistanceRelation(Finger.Index, RelativeDistance.NotTouching, Finger.Thumb));
var rotateRight = new Gesture("RotateRight", rotateSet, rotateGo);
The "RotateRight" gesture is a sequence of two hand poses, "RotateSet" and "RotateGo". Both poses require the thumb and index to be open, pointing forward, and not touching each other. The difference between the poses is that "RotateSet" specifies that the index finger should be above the thumb and "RotateGo" specifies it should be right of the thumb. The transition between "RotateSet" and "RotateRight", therefore, corresponds to a rotation of the hand to the right.
Note that the middle, ring, and pinky fingers do not participate in the definition of the "RotateRight" gesture. This makes sense because we do not wish to constrain the state of these fingers in any way. In other words, these fingers are free to assume any pose during the execution of the "RotateRight" gesture.
Having defined the gesture, I need to hook the event indicating gesture detection up to the appropriate handler in my target application:
rotateRight.Triggered += (sender, args) => { /* This is called when the user performs the "RotateRight" gesture */ };
The detection itself is performed in the Microsoft.Gestures.Service.exe process. This is the process associated with the "Microsoft Gestures Service" window discussed above. This process runs in the background and acts as a service for gesture detection. I will need to create a GesturesServiceEndpoint instance in order to communicate with this service. The following code snippet instantiates a GesturesServiceEndpoint and registers the "RotateRight" gesture for detection:
var gesturesService = GesturesServiceEndpointFactory.Create();
await gesturesService.ConnectAsync();
await gesturesService.RegisterGesture(rotateRight);
When you wish to stop the detection of the "RotateRight" gesture, you can unregister it as follows:
await gesturesService.UnregisterGesture(rotateRight);
The handler will no longer be triggered when the user executes the "RotateRight" gesture. When I'm finished working with gestures, I should dispose of the GesturesServiceEndpoint object:
gesturesService?.Dispose();
Please note that in order for the above code to compile, you will need to reference the following assemblies, located in the directory indicated by the MicrosoftGesturesInstallDir environment variable:
Microsoft.Gestures.dll
Microsoft.Gestures.Endpoint.dll
Microsoft.Gestures.Protocol.dll
For more information, please refer to the Getting Started guide in the documentation.
Thank you again and happy coding!
Source: Azure
Deutsche Glasfaser has been offering 1 Gbit/s since this week. 900 Mbit/s would be available for upload, but only 500 Mbit/s of it can be used because the hardware can't deliver more. (Glasfaser, Internet)
Source: Golem
The fashion label Louis Vuitton is releasing its first smartwatch. The Tambour Horizon runs Android Wear 2.0 and is available starting at 2,450 US dollars.
Source: Heise Tech News
The FTC recently went after 47 celebrities and brands for violating its rules on sponsored Instagrams. But many of them weren’t even actually ads.
We tend to think of Instagram ads as those really obvious ones for diet teas or teeth whiteners. But this list shows there's a much broader definition of an ad, at least according to the FTC, which considers any post involving a "material relationship" with a brand to be an ad.
This could be that you are getting paid to post it, or that you got free merch, or you’re a part owner of a brand or have some other financial stake. There's a lot of gray area.
BuzzFeed attempted to fact-check these by reaching out to the brands to ask if the celebrity was actually paid or got a freebie. What we found is that there were lots of different kinds of ads — sometimes the celeb was part owner of a brand, or got free stuff. Or maybe it was an ad, but they didn't disclose it the right way — either they made no attempt to disclose it at all, or they tried but didn't get it quite right.
This just goes to show: if the FTC can't tell from looking at an Instagram whether something is an ad, and if a media outlet that called up the brand to ask still couldn't find an answer, how the heck are normal people supposed to know when something is an ad?
Source: BuzzFeed, "Not Even The FTC Knows What Exactly #Spon Looks Like"
By Greg Brown, Director, DevOps, Loot Crate
[Editor’s note: Gamers and superfans know Loot Crate, which delivers boxes of themed swag to 650,000 subscribers every month. Loot Crate built its back-end on Heroku, but for its next venture — Sports Crate — the company decided to containerize its Rails app with Google Container Engine, and added continuous deployment with Jenkins. Read on to learn how they did it.]
Founded in 2012, Loot Crate is the worldwide leader in fan subscription boxes, partnering with entertainment, gaming and pop culture creators to deliver monthly themed crates, produce interactive experiences and digital content and film original video productions. In our first five years, we’ve delivered over 14 million crates to fans in 35 territories across the globe.
In early 2017 we were tasked with launching an offering to Major League Baseball fans called Sports Crate. There were only a couple of months until the 2017 MLB season started on April 2nd, so we needed the site to be up and capturing emails from interested parties as fast as possible. Other items on our wish list included the ability to scale the site as traffic increased, automated zero-downtime deployments, effective secret management and to reap the benefits of Docker images. Our other Loot Crate properties are built on Heroku, but for Sports Crate, we decided to try Container Engine, which we suspected would allow our app to scale better during peak traffic, manage our resources using a single Google login and better manage our costs.
Continuous deployment with Jenkins
Our goal was to be able to successfully deploy an application to Container Engine with a simple git push command. We created an auto-scaling, dual-zone Kubernetes cluster on Container Engine, and tackled how to do automated deployments to the cluster. After a lot of research and a conversation with Google Cloud Solutions Architect Vic Iglesias, we decided to go with Jenkins Multibranch Pipelines. We followed this guide on continuous deployment on Kubernetes and soon had a working Jenkins deployment running in our cluster ready to handle deploys.
Our next task was to create a Dockerfile of our Rails app to deploy to Container Engine. To speed up build time, we created our own base image with Ruby and our gems already installed, as well as a rake task to precompile assets and upload them to Google Cloud Storage when Jenkins builds the Docker image.
Dockerfile in hand, we set up the Jenkins Pipeline to build the Docker image, push it to Google Container Registry and deploy Kubernetes and its services to our environment. We put a Jenkinsfile in our GitHub repo that uses a switch statement based on the GitHub branch name to choose which Kubernetes namespace to deploy to. (We have three QA environments, a staging environment and production environment).
The Jenkinsfile checks out our code from GitHub, builds the Docker image, pushes the image to Container Registry, runs a Kubernetes job that performs any database migrations (checking for success or failure) and runs tests. It then deploys the updated Docker image to Container Engine and reports the status of the deploy to Slack. The entire process takes under 3 minutes.
Improving secret management in the local development environment
Next, we focused on making local development easier and more secure. We do our development locally, and with our Heroku-based applications, we deploy using environment variables that we add in the Heroku config or in the UI. That means that anyone with the Heroku login and permission can see them. For Sports Crate, we wanted to make the environment variables more secure; we put them in a Kubernetes secret that the applications can easily consume, which also keeps the secrets out of the codebase and off developer laptops.
The local development environment consumes those environmental variables using a railtie that goes out to Kubernetes, retrieves the secrets for the development environment, parses them and puts them into the Rails environment. This allows our developers to “cd” into a repo and run “rails server” or “rails console” with the Kubernetes secrets pulled down before the app starts.
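A minimal sketch of the secret-parsing step might look like the following; the helper name, secret name, and wiring are hypothetical, since the post does not show the railtie's code. Secret values in the `kubectl` JSON output are base64-encoded, so they must be decoded before being placed into the environment:

```ruby
require 'json'
require 'base64'

# Turn the JSON output of `kubectl get secret <name> -o json`
# into a plain Hash of environment variables.
def env_from_secret_json(json)
  secret = JSON.parse(json)
  (secret['data'] || {}).each_with_object({}) do |(key, value), env|
    env[key] = Base64.decode64(value)
  end
end

# In a Rails app, a railtie could pull the secrets down before boot:
#
#   class K8sSecrets < Rails::Railtie
#     config.before_configuration do
#       json = `kubectl get secret app-secrets -o json`
#       env_from_secret_json(json).each { |k, v| ENV[k] ||= v }
#     end
#   end
```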
TLS termination and load balancing

Another requirement was to set up effective TLS termination and load balancing. We used a Kubernetes Ingress resource with an Nginx Ingress controller, which provides automatic HTTP-to-HTTPS redirection that Google Cloud Platform's (GCP) built-in Ingress controller does not. Once we had the Ingress resource configured with our certificate and our Nginx Ingress controller running behind a service with a static IP, we were able to get to our application from the outside world. Things were starting to come together!
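An Ingress resource for this setup might look roughly like the following; the hostname, secret, and service names are hypothetical, and the exact annotations depend on the Nginx Ingress controller version in use:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sports-crate
  annotations:
    kubernetes.io/ingress.class: "nginx"        # use the Nginx controller, not GCP's
    ingress.kubernetes.io/ssl-redirect: "true"  # automatic HTTP -> HTTPS redirect
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: tls-cert          # Kubernetes secret holding the certificate
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
```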
Auto-scaling and monitoring

With all of the basic pieces of our infrastructure on GCP in place, we turned to auto-scaling, monitoring and educating our QA team on deployment practices and logging. For pod auto-scaling, we implemented a Kubernetes Horizontal Pod Autoscaler on our deployment. This checks CPU utilization and scales the pods up if we start getting a lot of traffic to our app. For monitoring, we implemented Datadog's Kubernetes Agent and set up metrics to check for any critical issues and to send alerts to PagerDuty. We use Stackdriver for logging and educated our team on how to use the Stackdriver Logging console to properly drill down to the app, namespace and pod for which they wanted information.
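A Horizontal Pod Autoscaler for this kind of CPU-based scaling could be declared as follows; the deployment name, replica bounds, and CPU threshold here are illustrative, not the actual values from the post:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70  # scale up when average CPU exceeds 70%
```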
Net-net

With launch day around the corner, we ran load tests on our new app and were amazed at how well it handled large amounts of traffic. The pods auto-scaled exactly as we needed them to and our QA team fell in love with continuous deployment with Jenkins Multibranch Pipelines. All told, Container Engine met all of our requirements, and we were up and running within a month.
Our next project is to move our other monolithic Rails apps off of Heroku and onto Container Engine as decoupled microservices that can take advantage of the newest Kubernetes features. We look forward to improving on what has already been an extremely powerful tool.
Quelle: Google Cloud Platform
Hearing aid wearers know the problem: at train stations especially, announcements get lost amid the many sources of background noise. In the future, they are to be informed about their train connection over the 2.4 GHz frequency.
Quelle: Heise Tech News
By Aparna Sinha, Group Product Manager, Container Engine
Just over a week ago Google led the most recent open source release of Kubernetes 1.7, and today, that version is available on Container Engine, Google Cloud Platform’s (GCP) managed container service. Container Engine is one of the first commercial Kubernetes offerings running the latest 1.7 release, and includes differentiated features for enterprise security, extensibility, hybrid networking and developer efficiency. Let’s take a look at what’s new in Container Engine.
Enterprise security
Container Engine is designed with enterprise security in mind. By default, Container Engine clusters run a minimal, Google curated Container-Optimized OS (COS) to ensure you don’t have to worry about OS vulnerabilities. On top of that, a team of Google Site Reliability Engineers continuously monitor and manage the Container Engine clusters, so you don’t have to. Now, Container Engine adds several new security enhancements:
Starting with this release, kubelet will only have access to the objects it needs to know about. The Node authorizer beta restricts each kubelet's API access to resources (such as secrets) belonging to its scheduled pods. This feature increases the protection of a cluster from a compromised or untrusted node.
Network isolation can be an important extra boundary for sensitive workloads. The Kubernetes NetworkPolicy API allows users to control which pods can communicate with each other, providing defense-in-depth and improving secure multi-tenancy. Policy enforcement can now be enabled in alpha clusters.
HTTP re-encryption through Google Cloud Load Balancing (GCLB) allows customers to use HTTPS from the GCLB to their service backends. This is an often requested feature that gives customers the peace of mind knowing that their data is fully encrypted in-transit even after it enters Google’s global network.
Together, the above features improve workload isolation within a cluster, which is a frequently requested security feature in Kubernetes. Node Authorizer and NetworkPolicy can be combined with the existing RBAC control in Container Engine to improve the foundations of multi-tenancy:
Network isolation between Pods (network policy)
Resource isolation between Nodes (node authorizer)
Centralized control over cluster resources (RBAC)
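As an illustration of the NetworkPolicy piece, a policy along these lines (the labels and namespace are hypothetical) restricts ingress so that backend pods accept traffic only from frontend pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: prod
spec:
  podSelector:            # the pods this policy protects
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:        # only pods with this label may connect
        matchLabels:
          app: frontend
```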
Enterprise and hybrid networks
Perhaps the features most awaited by our enterprise users are networking support for hybrid cloud and VPN with Container Engine. New in this release:
GA Support for all private IP (RFC-1918) addresses, allowing users to create clusters and access resources in all private IP ranges and extending the ability to use Container Engine clusters with existing networks.
Exposing services by internal load balancing is beta, allowing Kubernetes and non-Kubernetes services to access one another on a private network1.
Source IP preservation is now generally available and allows applications to be fully aware of client IP addresses for services exposed through Kubernetes.
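Exposing a service through an internal load balancer is done with a GCP-specific annotation on the Service; the service and label names below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-api
  annotations:
    # Provision a GCP internal load balancer instead of an external one
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
```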
Enterprise extensibility

As more enterprises use Container Engine, we're making a major investment to improve extensibility. We heard feedback that customers want to offer custom Kubernetes-style APIs in their clusters.
API Aggregation, launching today in beta on Container Engine, enables you to extend the Kubernetes API with custom APIs. For example, you can now add existing API solutions such as service catalog, or build your own in the future.
Users also want to incorporate custom business logic and third-party solutions into their Container Engine clusters. So we’re introducing Dynamic Admission Control in alpha clusters, providing two ways to add business logic to your cluster:
Initializers can modify Kubernetes objects as they are created. For example, you can use an initializer to add Istio capability to a Container Engine alpha cluster, by injecting an Istio sidecar container in every Pod deployed.
Webhooks enable you to validate enterprise policy. For example, you can verify that containers being deployed pass your enterprise security audits.
As part of our plans to improve extensibility for enterprises, we’re replacing the Third Party Resource (TPR) API with the improved Custom Resource Definition (CRD) API. CRDs are a lightweight way to store structured metadata in Kubernetes, which make it easy to interact with custom controllers via kubectl. If you use the TPR beta feature, please plan to migrate to CRD before upgrading to the 1.8 release.
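A minimal CRD looks like the following; the group and kind names are the standard illustrative ones from the Kubernetes documentation, not a real API:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Once applied, `kubectl get crontabs` works like any built-in resource, and a custom controller can watch and act on CronTab objects.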
Workload diversity
Container Engine now enhances your ability to run stateful workloads like databases and key value stores, such as ZooKeeper, with a new automated application update capability. You can:
Select from a range of StatefulSet update strategies beta, including rolling updates
Optimize roll-out speed with parallel or ordered pod provisioning, particularly useful for applications such as Kafka.
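In a StatefulSet spec, these two capabilities correspond to the `updateStrategy` and `podManagementPolicy` fields; the sketch below uses hypothetical names and an abbreviated pod template:

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk
  replicas: 3
  podManagementPolicy: Parallel   # provision pods in parallel instead of in order
  updateStrategy:
    type: RollingUpdate           # roll pods automatically when the spec changes
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
      - name: zk
        image: zookeeper:3.4
```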
A popular workload on Google Cloud and Container Engine is training machine learning models for better predictive analytics. Many of you have requested GPUs to speed up training time, so we’ve updated Container Engine to support NVIDIA K80 GPUs in alpha clusters for experimentation with this exciting feature. We’ll support additional GPUs in the future.
Developer efficiency
When developers don’t have to worry about infrastructure, they can spend more time building applications. Kubernetes provides building blocks to de-couple infrastructure and application management, and Container Engine builds on that foundation with best-in-class automation features.
We’ve automated large parts of maintaining the health of the cluster, with auto-repair and auto-upgrade of nodes.
Auto-repair beta keeps your cluster healthy by proactively monitoring for unhealthy nodes and repairing them automatically without developer involvement.
In this release, Container Engine’s auto-upgrade beta capability incorporates Pod Disruption Budgets at the node layer, making upgrades to infrastructure and application controllers predictable and safer.
Container Engine also offers cluster- and pod-level auto-scaling so applications can respond to user demand without manual intervention. This release introduces several GCP-optimized enhancements to cluster autoscaling:
Support for scaling node pools to 0 or 1, for when you don’t need capacity
Price-based expander for auto-scaling in the most cost-effective way
Balanced scale-out of similar node groups, useful for clusters that span multiple zones
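Cluster autoscaling, including the new scale-to-zero minimum, is configured per node pool; the cluster, pool, and zone names below are placeholders:

```shell
# Enable cluster autoscaling on an existing node pool,
# allowing it to shrink to zero nodes when capacity isn't needed.
gcloud container clusters update my-cluster \
  --enable-autoscaling --min-nodes=0 --max-nodes=10 \
  --node-pool=default-pool --zone=us-west1-a
```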
The combination of auto-repair, auto-upgrades and cluster autoscaling in Container Engine enables application developers to deploy and scale their apps without being cluster admins.
We’ve also updated the Container Engine UI to assist in debugging and troubleshooting by including detailed workload-related views. For each workload, we show the type (DaemonSet, Deployment, StatefulSet, etc.), running status, namespace and cluster. You can also debug each pod and view annotations, labels, the number of replicas and status, etc. All views are cross-cluster so if you’re using multiple clusters, these views allow you to focus on your workloads, no matter where they run. In addition, we also include load balancing and configuration views with deep links to GCP networking, storage and compute. This new UI will be rolling out in the coming week.
Container Engine everywhere
Google Cloud is enabling a shift in enterprise computing: from local to global, from days to seconds, and from proprietary to open. The benefits of this model are becoming clear, as exemplified by Container Engine, which saw more than 10x growth last year.
To keep up with demand, we’re expanding our global capacity with new Container Engine clusters in our latest GCP regions:
Sydney (australia-southeast1)
Singapore (asia-southeast1)
Oregon (us-west1)
London (europe-west2)
These new regions join the half dozen others from Iowa to Belgium to Taiwan where Container Engine clusters are already up and running.
This blog post highlighted some of the new features available in Container Engine. You can find the complete list of new features in the Container Engine release notes.
The rapid adoption of Container Engine and its technology is translating into real customer impact. Here are a few recent stories that highlight the benefits companies are seeing:
BQ, one of the leading technology companies in Europe that designs and develops consumer electronics, was able to scale quickly from 15 to 350 services while reducing its cloud hosting costs by approximately 60% through better utilization and use of Preemptible VMs on Container Engine. Read the full story here.
Meetup, the social media networking platform, switched from a monolithic application in on-premises data centers to an agile microservices architecture in a multi-cloud environment with the help of Container Engine. This gave its engineering teams autonomy to work on features and develop roadmaps that are independent from other teams, translating into faster release schedules, greater creativity and new functionality. Read the case study here.
Loot Crate, a leader in fan subscription boxes, launched a new offering on Container Engine to quickly get their Rails app production ready and able to scale with demand and zero downtime deployments. Read how it built its continuous deployment pipeline with Jenkins in this post.
At Google Cloud we’re really proud of our compute infrastructure, but what really makes it valuable is the services that run on top. Google creates game-changing services on top of world-class infrastructure and tooling. With Kubernetes and Container Engine, Google Cloud makes these innovations available to developers everywhere.
GCP is the first cloud offering a fully managed way to try the newest Kubernetes release, and with our generous 12-month free trial of $300 in credits, there's no excuse not to try it today.
Thanks for your feedback and support. Keep the conversation going and connect with us on the Container Engine Slack channel.
1 Support for accessing Internal Load Balancers over Cloud VPN is currently in alpha; customers can apply for access here.
Quelle: Google Cloud Platform
Since the Moby Project introduction at DockerCon 2017 in Austin last April, the Moby community has been hard at work to further define the Moby Project, improve its components (runC, containerd, LinuxKit, InfraKit, SwarmKit, Libnetwork and Notary) and refine processes and communication channels.
All project maintainers are developing these aspects in the open with the support of the community. Contributors are getting involved on GitHub, giving feedback on the Moby Project Discourse forum and asking questions on Slack. Special Interest Groups (SIGs) for the Moby Project components have been formed based on the Kubernetes model for Open Source collaboration. These SIGs ensure a high level of transparency and synchronization between project maintainers and a community of heterogeneous contributors.
In addition to these online channels and meetings, the Moby community hosts regular meetups and summits. Check out the videos and slides from the DockerCon Moby May Summit and the June Moby Summit to catch up on the latest project updates. The Moby Summit page on the Moby website contains the agenda and registration link for the next Moby Summit, as well as recaps of previous summits.
The next Moby Summit will take place on September 14, 2017 in Los Angeles as part of the Open Source Summit North America. Following the success of the previous editions, we’ll keep the same format which consists of short technical talks / demos in the morning and Birds-of-a-Feather in the afternoon. We’re actively looking for people who can talk about their Moby Project use cases. Don’t hesitate to reach out to community@mobyproject.org if you’d like to give a talk or would like to cover a specific topic during the BoF sessions, or contribute to the agenda by sending a pull request to the Moby website repository.
Register for Moby Summit LA
Learn more about the Moby Project:
Visit www.mobyproject.org
Join the #Moby-project channel on Slack
Check out the upcoming events in the Moby Community Calendar
Join the conversation on GitHub and Discourse
The post Moby Summit alongside Open Source Summit North America appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/