New Azure Government Documentation

We are happy to announce Azure Government documentation! This documentation provides guidance tailored to our Azure Government customers. We highlight common solutions and guidance for building government-specific implementations, as well as information you need to know about using services, the Marketplace, the Portal, and PowerShell in Azure Government.

This is just the start! Many more updates are planned as we continually expand our offerings for Azure Government. As new services come online, we will update the corresponding documentation. Over the coming months we will add additional content on how to build solutions and onboard successfully. We encourage and welcome any feedback and documentation requests. Please comment on this blog post with any questions, recommendations, or comments about our new documentation site.

Accessing the Documentation

The content can be accessed in three easy ways:

1. Navigate to the new Azure Government Landing Page through the secondary navigation bar.
2. Access the page directly at Azure Government documentation.
3. Select “Azure Government” from the cloud filter drop-down menu for Microsoft Documentation Articles.

To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed or receive emails by clicking “Subscribe by Email!” on the Azure Government Blog.
Source: Azure

Tail Kubernetes with Stern

Editor’s note: today’s post is by Antti Kupila, Software Engineer at Wercker, about building a tool to tail multiple pods and containers on Kubernetes.

We love Kubernetes here at Wercker and build all our infrastructure on top of it. When deploying anything, you need good visibility into what’s going on, and logs are a first view into the inner workings of your application. Good old tail -f has been around for a long time, and Kubernetes has this too, built right into kubectl.

I should say that tail is by no means the tool to use for debugging issues; instead you should feed the logs into a more persistent place, such as Elasticsearch. However, there’s still a place for tail when you need to quickly debug something, or perhaps you don’t have persistent logging set up yet (such as when developing an app in Minikube).

Multiple Pods

Kubernetes has the concept of Replication Controllers, which ensure that n pods are running at the same time. This allows rolling updates and redundancy. Considering they’re quite easy to set up, there’s really no reason not to do so.

However, now there are multiple pods running, and they all have a unique id. One issue here is that you need to know the exact pod id (kubectl get pods), but that changes every time a pod is created, so you have to look it up each time. Another consideration is that Kubernetes load balances the traffic, so you won’t know which pod a request ends up at. If you’re tailing pod A but the traffic ends up at pod B, you’ll miss what happened.

Let’s say we have a pod called service with 3 replicas. Here’s what that would look like:

    $ kubectl get pods                          # get pods to find pod ids
    $ kubectl logs -f service-1786497219-2rbt1  # pod 1
    $ kubectl logs -f service-1786497219-8kfbp  # pod 2
    $ kubectl logs -f service-1786497219-lttxd  # pod 3

Multiple containers

We’re heavy users of gRPC for internal services and expose the gRPC endpoints over REST using gRPC Gateway. Typically we have the server and gateway living as two containers in the same pod (the same binary, with the mode set by a CLI flag). The gateway talks to the server in the same pod, and both ports are exposed to Kubernetes. For internal services we can talk directly to the gRPC endpoint, while our website communicates using standard REST to the gateway.

This poses a problem, though: not only do we now have multiple pods, we also have multiple containers within each pod. When this is the case, the built-in logging of kubectl requires you to specify which container you want logs from.

If we have 3 replicas of a pod and 2 containers in the pod, you’ll need 6 instances of kubectl logs -f <pod id> <container id>. We work with big monitors, but this quickly gets out of hand…

If our service pod has a server and gateway container, we’d be looking at something like this:

    $ kubectl get pods                                  # get pods to find pod ids
    $ kubectl describe pod service-1786497219-2rbt1     # get containers in pod
    $ kubectl logs -f service-1786497219-2rbt1 server   # pod 1
    $ kubectl logs -f service-1786497219-2rbt1 gateway  # pod 1
    $ kubectl logs -f service-1786497219-8kfbp server   # pod 2
    $ kubectl logs -f service-1786497219-8kfbp gateway  # pod 2
    $ kubectl logs -f service-1786497219-lttxd server   # pod 3
    $ kubectl logs -f service-1786497219-lttxd gateway  # pod 3

Stern

To get around this we built Stern. It’s a super simple utility that allows you to specify both the pod id and the container id as regular expressions.
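
To make the idea concrete, here is a minimal, illustrative Go sketch of what a multi-stream tailer does: match names against a regular expression, follow each matching stream in its own goroutine, and multiplex the prefixed lines onto one output. This is not Stern’s actual code; the hard-coded readers stand in for the real log streams, which Stern discovers and follows via the Kubernetes API.

    // Illustrative sketch of regex-filtered, multiplexed log tailing.
    package main

    import (
        "bufio"
        "fmt"
        "io"
        "regexp"
        "strings"
        "sync"
    )

    // tail reads lines from r and sends them to out, prefixed with pod and container.
    func tail(pod, container string, r io.Reader, out chan<- string) {
        scanner := bufio.NewScanner(r)
        for scanner.Scan() {
            out <- fmt.Sprintf("%s %s %s", pod, container, scanner.Text())
        }
    }

    func main() {
        // Match pods whose name contains "service", like `stern service` would.
        podRe := regexp.MustCompile("service")

        // Hard-coded stand-ins for real log streams fetched from the Kubernetes API.
        streams := map[[2]string]io.Reader{
            {"service-1786497219-2rbt1", "server"}:  strings.NewReader("Log message from server\n"),
            {"service-1786497219-2rbt1", "gateway"}: strings.NewReader("Log message from gateway\n"),
            {"other-pod-42", "server"}:              strings.NewReader("this pod is filtered out\n"),
        }

        lines := make(chan string)
        var wg sync.WaitGroup
        for key, r := range streams {
            pod, container := key[0], key[1]
            if !podRe.MatchString(pod) {
                continue // skip pods that don't match the regex
            }
            wg.Add(1)
            go func(pod, container string, r io.Reader) {
                defer wg.Done()
                tail(pod, container, r, lines)
            }(pod, container, r)
        }
        go func() { wg.Wait(); close(lines) }()

        // Multiplexed, prefixed output from all matched containers.
        for line := range lines {
            fmt.Println(line)
        }
    }
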
Any match will be followed, and the output is multiplexed together, prefixed with the pod and container id, and color-coded for human consumption (colors are stripped if piping to a file).

Here’s how the service example would look:

    $ stern service

This will match any pod containing the word service and listen to all containers within it. If you only want to see traffic to the server container, you could do stern --container server service and it’ll stream the logs of all the server containers from the 3 pods.

The output would look something like this:

    $ stern service
    + service-1786497219-2rbt1 › server
    + service-1786497219-2rbt1 › gateway
    + service-1786497219-8kfbp › server
    + service-1786497219-8kfbp › gateway
    + service-1786497219-lttxd › server
    + service-1786497219-lttxd › gateway
    service-1786497219-8kfbp server Log message from server
    service-1786497219-2rbt1 gateway Log message from gateway
    service-1786497219-8kfbp gateway Log message from gateway
    service-1786497219-lttxd gateway Log message from gateway
    service-1786497219-lttxd server Log message from server
    service-1786497219-2rbt1 server Log message from server

In addition, if a pod is killed and recreated during a deployment, Stern will stop listening to the old pod and automatically hook into the new one. There’s no more need to figure out what the id of that newly created pod is.

Configuration options

Stern was deliberately designed to be minimal, so there’s not much to it. However, there are still a couple of configuration options we can highlight here. They’re very similar to the ones built into kubectl, so if you’re familiar with that, you should feel right at home.

--timestamps adds the timestamp to each line
--since shows log entries since a certain time (for instance --since 15m)
--kube-config allows you to specify another Kubernetes config; defaults to ~/.kube/config
--namespace allows you to limit the search to a certain namespace

Run stern --help for all options.

Examples

Tail the gateway container running inside of the envvars pod on staging:

    stern --context staging --container gateway envvars

Show auth activity from 15min ago with timestamps:

    stern -t --since 15m auth

Follow the development of some-new-feature in minikube:

    stern --context minikube some-new-feature

View pods from another namespace:

    stern --namespace kube-system kubernetes-dashboard

Get Stern

Stern is open source and available on GitHub; we’d love your contributions or ideas. If you don’t want to build from source, you can also download a precompiled binary from GitHub releases.
Source: kubernetes

5 Tales from the Docker Crypt

(Cue the Halloween music)
Welcome to my crypt. This is the crypt keeper speaking, and I’ll be your spirit guide on your journey through the dangerous and frightening world of IT applications. Today you will learn about 5 spooky application stories covering everything from cobweb-covered legacy processes to shattered CI/CD pipelines. As these stories unfold, you will hear how Docker helped banish cost, complexity, and chaos.
Tale 1 – “Demo Demons”
Splunk was on a mission to enable their employees and partners across the globe to deliver demos of their software regardless of where in the world they’re located, and to have each demo function consistently. These business-critical demos cover everything from Splunk security to web analytics and IT service intelligence. This vision proved to be quite complex to execute: their SEs would be in customer meetings, and their demos would sometimes fail. They needed to ensure that each of the 30 production demos within their Splunk Oxygen demo platform could live forever in eternal greatness.
To ensure their demos work smoothly for customers, Splunk uses Docker Datacenter, our on-premises solution that brings container management and deployment services to the enterprise via an integrated platform. Images are stored within the on-premises Docker Trusted Registry, which is connected to their Active Directory server so that users have the correct role-based access to the images. The images are accessible to authenticated users outside the corporate firewall. Their sales engineers can now pull the images from DTR and give the demo offline, ensuring that anyone who goes out and represents the Splunk brand can demo without demise.
Tale 2 – “Monster Maintenance”
Cornell University’s IT team was spending too many resources taking care of their installation of Confluence. The team spent 1,770 hours maintaining applications over a six-month period and needed immutable infrastructure that could be easily torn down once processes were complete. Portability across their application lifecycle, which included everything from development to production, was also a challenge.
With a Docker Datacenter (DDC) commercial subscription from Docker, they now host their Docker images in a central location, allowing multiple organizations to access them securely. Docker Trusted Registry provides high availability via DTR replicas, ensuring that their Dockerized apps are continuously available even if a node fails. With Docker, they experience a 10X reduction in maintenance time. Additionally, the portability of Docker containers helps their workloads move across multiple environments, streamlining their application development and deployment processes. The team is now able to deploy applications 13X faster than in the past by leveraging reusable architecture patterns and simplified build and deployment processes.
Tale 3 – “Managing Menacing Monoliths and Microservices!”
SA Home Loans, a mortgage firm located in South Africa, was experiencing slow application deployment speeds. It took them 2 weeks just to get their newly developed applications over to their testing environment, slowing innovation. These issues extended to production as well. Their main home loan servicing software, a mixture of monolithic Windows services and IIS applications, was complex and difficult to update, placing a strain on the business. Even scarier: when they deployed new features or fixes, they didn’t have an easy or reliable rollback plan if something went wrong (no blue/green deployment). In addition, the company decided to adopt a microservices architecture, and they soon realized that upon completion of this project they’d have over 50 separate services across their Dockerized nodes in production! Orchestration now presented itself as a new challenge.
To solve these issues, SA Home Loans trusts in Docker Datacenter, and can now deploy apps 30 times more often! The solution also provides the production-ready container orchestration they were looking for. Since DDC has Swarm embedded within it, it shares the Docker Engine APIs, making it one less complex thing to learn. Docker Datacenter gives the ops team ease of use and a familiar front end.
 
Tale 4 – “Unearthly Labor”
USDA’s legacy website platform consisted of seven manually managed monolithic application servers, run with traditional, labor-intensive techniques that required expensive resources. Their systems administrators had to SSH into individual systems to deploy updates and configuration one by one. USDA discovered that this approach lacked the flexibility and scalability to support their large number of diverse apps built with PHP, Ruby, and Java – namely Drupal, Jekyll, and Jira. A different approach would be required to fulfill USDA’s shared platform goals.
USDA now uses Docker and has expedited their project and modernized their entire development process. In just 5 weeks, they launched four government websites to production on their new Dockerized platform. Later, an additional four websites were launched, including one for the First Lady, Michelle Obama, without any additional hardware costs. By using Docker, the USDA saved upwards of $150,000 in technology infrastructure costs alone. Because they could leverage a shared infrastructure model, they were also able to reduce labor costs. Docker provided the USDA with the agility needed to develop, test, secure, and deploy modern software in a high-security federal government datacenter environment.
Tale 5 – “An Apparition of CI/CD”
Healthdirect dubbed their original application development process “anti CI/CD”: it was broken, and it was difficult to create a secure end-to-end CI/CD pipeline. They had a CI/CD process for the infrastructure team but were unable to repeat the process across multiple business units. The team wanted repeatability but lacked the ability to deploy their apps with 100% hands-off automation.
Today Healthdirect is using Docker Datacenter. Their developers are now empowered in the release process, and code developed locally ships to production without changes. With Docker, Healthdirect is able to innovate faster and deploy applications to production with ease.
So there they are: 5 spooky tales for you on this Halloween day. To learn more about Docker Datacenter, check out this demo.
Now, be gone from my crypt. It’s time for me to retire back to my coffin.
Oh and one more thing….Happy Halloween!!
For more resources:

Hear from Docker customers
Learn more about Docker Datacenter
Sign up for your 30 day free evaluation of Docker Datacenter

 


Source: https://blog.docker.com/feed/

Blog posts last week

With OpenStack Summit last week, we have a lot of summit-focused blog posts today, and expect more to come in the next few days.

Attending OpenStack Summit Ocata by Julien Danjou

For the last time in 2016, I flew out to the OpenStack Summit in Barcelona, where I had the chance to meet (again) a lot of my fellow OpenStack contributors.

Read more at http://tm3.org/bu

OpenStack Summit, Barcelona, 2 of n by rbowen

Tuesday, the first day of the main event, was, as always, very busy. I spent most of the day working the Red Hat booth. We started at 10 setting up, and the mob came in around 10:45.

Read more at http://tm3.org/bx

OpenStack Summit, Barcelona, 1 of n by rbowen

I have the best intentions of blogging every day of an event. But every day is always so full, from morning until the time I drop into bed exhausted.

Read more at http://tm3.org/by

TripleO composable/custom roles by Steve Hardy

This is a follow-up to my previous post outlining the new composable services interfaces, which covered the basics of the new-for-Newton composable services model.

Read more at http://tm3.org/bo

Integrating Red Hat OpenStack 9 Cinder Service With Multiple External Red Hat Ceph Storage Clusters by Keith Schincke

This post describes how to manually integrate Red Hat OpenStack 9 (RHOSP9) Cinder service with multiple pre-existing external Red Hat Ceph Storage 2 (RHCS2) clusters. The final configuration goals are to have Cinder configuration with multiple storage backends and support …

Read more at http://tm3.org/bz

On communities: Sometimes it’s better to over-communicate by Flavio Percoco

Communities, regardless of their size, rely mainly on the communication there is between their members to operate. The existing processes, the current discussions, and the future growth depend heavily on how well the communication throughout the community has been established. The channels used for these conversations play a critical role in the health of the communication (and the community) as well.

Read more at http://tm3.org/c0

Full Stack Automation with Ansible and OpenStack by Marcos Garcia – Principal Technical Marketing Manager

Ansible offers great flexibility. Because of this the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations on multiple layers, including using it with OpenStack.

Read more at http://tm3.org/bs
Source: RDO

Automated notifications from Azure Monitor for Atlassian JIRA

The public preview of Azure Monitor was recently announced at Ignite. This new platform service builds on some of the existing monitoring capabilities to provide a consolidated and inbuilt monitoring experience to all Azure users.

From within the Azure portal, you can use Azure Monitor to query across Activity Logs, Metrics and Diagnostic Logs. If you need the advanced monitoring and analytics tools like Application Insights, Azure Log Analytics and Operations Management Suite (OMS), the Azure Monitor blade contains quick links. You can also leverage the dashboard experience in the portal to visualize your monitoring data and share it with others in your team.

The consolidated Azure Monitor blade in the portal allows you to quickly and centrally manage alerts from the following sources:

Metrics
Events (e.g. Autoscale events)
Locations (Application Insights Web Tests)
Proactive diagnostics (Application Insights)

These alerts can be configured to send an email and, in the case of Metrics and Web Tests, to POST to a webhook. This allows for easy integration with external platforms.
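
As a rough illustration of what sits on the other end of such a webhook (this is not the add-on’s code), the Go sketch below accepts the POST and decodes a few fields. The field names follow the classic Azure Monitor metric alert payload; treat them as assumptions and verify them against the payloads your alerts actually send.

    // Hypothetical sketch of a webhook endpoint for Azure Monitor alerts.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    type metricAlert struct {
        Status  string `json:"status"` // "Activated" or "Resolved"
        Context struct {
            Name         string `json:"name"`
            Description  string `json:"description"`
            ResourceName string `json:"resourceName"`
        } `json:"context"`
    }

    func alertHandler(w http.ResponseWriter, r *http.Request) {
        var a metricAlert
        if err := json.NewDecoder(r.Body).Decode(&a); err != nil {
            http.Error(w, "bad payload", http.StatusBadRequest)
            return
        }
        log.Printf("alert %q is %s on %s", a.Context.Name, a.Status, a.Context.ResourceName)
        w.WriteHeader(http.StatusOK) // the sender only needs a 2xx response
    }

    func main() {
        http.HandleFunc("/webhook/alert", alertHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }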

Integrating Azure Monitor with Atlassian JIRA

Atlassian JIRA is a familiar solution to many IT, software and business teams. It’s an ideal candidate for connecting to the Azure Monitor service via the webhook mechanism in order to create JIRA Issues from Metric and Web Test Alerts.

"Azure Notifications with JIRA marries critical operational events with JIRA issues to help teams stay on top of app performance, move faster, and streamline their DevOps processes," said Bryant Lee, head of product partnerships and integrations at Atlassian.

Add-ons can be built for JIRA, Confluence, HipChat and BitBucket to extend their capabilities. In order to make the process of deploying the add-on as easy as possible, we’ve built this Azure Notifications add-on to be deployed and hosted on an Azure Web App which is connected to your JIRA instance via the Manage add-ons functionality in the JIRA Administration screen. The add-on establishes a secret, key exchange and other private details with JIRA that are used to secure, sign and verify all future communication between the two. All of this security information is stored in Azure Key Vault.
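
As a minimal sketch of what “sign and verify” means here: Atlassian Connect add-ons sign requests as HS256 JWTs using the secret exchanged at installation. The Go sketch below only recomputes and compares the HMAC-SHA256 signature over the JWT’s header and payload; it is illustrative rather than the add-on’s code, and a real implementation should use a maintained JWT library and also validate claims such as exp, iss, and qsh.

    // Illustrative HS256 JWT signature check with a shared secret.
    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "strings"
    )

    // verifyHS256 reports whether token's signature matches the shared secret.
    func verifyHS256(token, secret string) bool {
        parts := strings.Split(token, ".")
        if len(parts) != 3 {
            return false // a JWT is header.payload.signature
        }
        mac := hmac.New(sha256.New, []byte(secret))
        mac.Write([]byte(parts[0] + "." + parts[1]))
        want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
        return hmac.Equal([]byte(want), []byte(parts[2]))
    }

    func main() {
        // Build a toy token with the same secret to show a successful check.
        secret := "shared-secret-from-installation" // hypothetical value
        header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"HS256","typ":"JWT"}`))
        payload := base64.RawURLEncoding.EncodeToString([]byte(`{"iss":"jira:example"}`))
        mac := hmac.New(sha256.New, []byte(secret))
        mac.Write([]byte(header + "." + payload))
        sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))

        fmt.Println(verifyHS256(header+"."+payload+"."+sig, secret)) // true
    }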

The add-on exposes token-secured endpoints that can be configured in Azure Monitor against the webhooks exposed for various alerting mechanisms. Alerts flow from Azure Monitor into the token-secured endpoints; the add-on then transforms the payloads from the Azure Monitor alerts and securely creates the appropriate Issue in JIRA.
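
For a sense of that final step, here is a hedged Go sketch of creating an issue through JIRA’s REST API (POST /rest/api/2/issue). The base URL, credentials, basic-auth scheme, and the “OPS” project key are illustrative assumptions; the actual add-on signs its requests using the JWT machinery described above.

    // Hedged sketch: turning an alert into a JIRA issue via the REST API.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    func createIssue(baseURL, user, token, summary, description string) error {
        body, err := json.Marshal(map[string]interface{}{
            "fields": map[string]interface{}{
                "project":     map[string]string{"key": "OPS"}, // hypothetical project key
                "summary":     summary,
                "description": description,
                "issuetype":   map[string]string{"name": "Bug"},
            },
        })
        if err != nil {
            return err
        }
        req, err := http.NewRequest("POST", baseURL+"/rest/api/2/issue", bytes.NewReader(body))
        if err != nil {
            return err
        }
        req.SetBasicAuth(user, token)
        req.Header.Set("Content-Type", "application/json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusCreated {
            return fmt.Errorf("create issue failed: %s", resp.Status)
        }
        return nil
    }

    func main() {
        // Hypothetical values; in the add-on these come from its configuration.
        err := createIssue("https://jira.example.com", "bot", "api-token",
            "CPU percentage alert activated on web-vm-01",
            "Full Azure Monitor alert payload attached for reference.")
        if err != nil {
            log.Fatal(err)
        }
    }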

Relevant information is extracted from the Azure Monitor alerts and highlighted in the Issue. The full Azure Monitor alert payload is included for reference.

Deploying the add-on

The Azure Notifications for Atlassian JIRA add-on is available today in Bitbucket for you to deploy and connect your JIRA instance and Azure Monitor alerts. The overview section of the add-on’s repository provides documentation on the add-on and how to install it and all its associated infrastructure in Azure.

Once installed, the add-on will appear in your add-ons list in JIRA. You can then configure your Azure Monitor Alerts to send alerts to the add-on.

If you have resources deployed in Azure and are using JIRA, then this add-on has just made it really simple for you to start creating issues from your Azure Monitor alerts today!

For more information

Announcing the public preview of Azure Monitor
Get Started with Azure Monitor
Operations Management Suite (OMS)
Azure Log Analytics
Application Insights
JIRA REST API Documentation
JIRA REST API Reference

Source: Azure