Top 5 Blogs of 2017: Docker Platform and Moby Project add Kubernetes

As we count down the final days of 2017, we would like to bring you the final installment of the top 5 blogs of 2017. On day 5, we take a look back at DockerCon EU, where we announced Kubernetes support in the Docker platform. This blog takes an in-depth look at the industry-leading container platform and the addition of Kubernetes.

The Docker platform is integrating support for Kubernetes so that Docker customers and developers have the option to use both Kubernetes and Swarm to orchestrate container workloads. Register for beta access and check out the detailed blog posts to learn how we’re bringing Kubernetes to:

Docker Enterprise Edition
Docker Community Edition on the desktop with Docker for Mac and Windows
The Moby Project

Docker is a platform that sits between apps and infrastructure. By building apps on Docker, developers and IT operations get freedom and flexibility. That’s because Docker runs everywhere that enterprises deploy apps: on-prem (including on IBM mainframes, enterprise Linux and Windows) and in the cloud. Once an application is containerized, it’s easy to re-build, re-deploy and move around, or even run in hybrid setups that straddle on-prem and cloud infrastructure.
The Docker platform is composed of many components, assembled in four layers:

containerd is an industry-standard container runtime implementing the OCI standards
Swarm orchestration that transforms a group of nodes into a distributed system
Docker Community Edition providing developers a simple workflow to build and ship container applications, with features like application composition, image build and management
Docker Enterprise Edition, to manage an end-to-end secure software supply chain and run containers in production

These four layers are assembled from upstream components that are part of the open source Moby Project.
Docker’s design philosophy has always been about providing choice and flexibility. This is important for customers that are integrating Docker with existing IT systems, and that’s why Docker is built to work well with already-deployed networking, logging, storage, load balancers and CI/CD systems. For all of these (and more), Docker relies on industry-standard protocols or published and documented interfaces. And for all of these, Docker Enterprise Edition ships with sensible defaults, but those defaults can be swapped for certified third party options for customers that have existing systems or prefer an alternative solution.
In 2016, Docker added orchestration to the platform, powered by the SwarmKit project. In the past year, we’ve received lots of positive feedback on Swarm: it’s easy to set up, is scalable and is secure out-of-the-box.
We’ve also gotten feedback that some users really like the integrated Docker platform with end-to-end container management, but want to use other orchestrators, like Kubernetes, for container scheduling, either because they’ve already designed services to work on Kubernetes or because Kubernetes has particular features they’re looking for. This is why we are adding Kubernetes support as an orchestration option (alongside Swarm) in both Docker Enterprise Edition and Docker for Mac and Windows.

We’re also working on innovative components that make it easier for Docker users to deploy Docker apps natively with Kubernetes orchestration. For example, by using Kubernetes extension mechanisms like Custom Resources and the API server aggregation layer, the coming version of Docker with Kubernetes support will allow users to deploy their Docker Compose apps as Kubernetes-native Pods and Services.
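As a sketch of how that could look (the exact commands were still in flux during the beta, so treat the file and flags below as illustrative rather than definitive), an ordinary Compose file is the only input needed:

```yaml
# docker-compose.yml — a hypothetical two-service app. With the Kubernetes
# integration enabled, Docker maps each service onto Kubernetes-native
# Pods and Services instead of Swarm services.
version: "3.3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
  cache:
    image: redis:alpine
```

It would then be deployed with the familiar stack workflow, e.g. `docker stack deploy -c docker-compose.yml myapp`, with Kubernetes selected as the orchestrator in Docker’s settings.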
With the next version of the Docker platform, developers can build and test apps destined for production directly on Kubernetes, on their workstation. And ops can get all the benefits of Docker Enterprise Edition – secure multi-tenancy, image scanning and role-based access control – while running apps in production orchestrated with either Kubernetes or Swarm.
The Kubernetes version that we’re incorporating into Docker will be the vanilla Kubernetes that everyone is familiar with, direct from the CNCF. It won’t be a fork, an outdated version, or wrapped or limited in any way.
Through the Moby Project, Docker has been working to adopt and contribute to Kubernetes over the last year. We’ve been working on containerd (now 1.0) and cri-containerd for the container runtime, on InfraKit for creating and managing Kubernetes installs, and on libnetwork for overlay networking. See the Moby Project blog post for more examples and details.
Docker and Kubernetes share much lineage, are written in the same programming language and have overlapping components, contributors and ideals. We at Docker are excited to bring Kubernetes support to our products and to the open source projects we work on. And we can’t wait to work with the Kubernetes community to make containers and container orchestration ever more powerful and easier to use.
While we’re adding Kubernetes as an orchestration option in Docker, we remain committed to Swarm and our customers and users that rely on Swarm and Docker for running critical apps at scale in production. To learn more about how Docker is integrating Kubernetes, check out the sessions “What’s New in Docker” and “Gordon’s Secret Session” at DockerCon EU.
Where to go from here?

Sign up for the Kubernetes for Docker beta
Docker Enterprise Edition with Kubernetes
Community Edition for Mac and Windows with Kubernetes
Moby and Kubernetes


The post Top 5 Blogs of 2017: Docker Platform and Moby Project add Kubernetes appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Using Docker to Scale Operational Intelligence at Splunk

Splunk wants to make machine data accessible, usable and valuable to everyone. With over 14,000 customers in 110 countries, providing the best software for visualizing machine data involves hours and hours of testing against multiple supported platforms and various configurations. For Mike Dickey, Sr. Director in charge of engineering infrastructure at Splunk, the challenge was that 13 different engineering teams in California and Shanghai had contributed to test infrastructure sprawl, with hundreds of different projects and plans that were all being managed manually.
At DockerCon Europe, Mike and Harish Jayakumar, Docker Solutions Engineer, shared how Splunk leveraged Docker Enterprise Edition (Docker EE) to dramatically improve build and deployment times on their test infrastructure, converge on a unified Continuous Integration (CI) workflow, and grow to 600 bare-metal servers deploying tens of thousands of Docker containers per day.
You can watch the entire session here:

Hitting the Limits of Manual Test Configurations
As Splunk has grown, so has their customers’ use of their software. Many Splunk customers now process petabytes of data, and that has forced Splunk to scale their testing to match. That means more infrastructure needs to be reserved in the shared test environment for these large-scale tests. Besides running out of data center capacity, Splunk was managing test infrastructure reservations manually through a Wiki page – a process with obvious limitations.

At the time, Mike was leading the Performance Engineering team, which had started working with Docker containers. After seeing near-bare-metal performance for containerized applications, Splunk began to test Docker in smaller proof-of-concept projects and saw that it could be effective for performance testing. They saw the potential to leverage Docker as the foundation for a unified test and CI platform.
Building an App Development Platform with Docker EE

Splunk chose Docker EE to power their test and CI platform for a few key reasons:

Windows and Linux support: Splunk software runs on both Linux and Windows, so they wanted a single solution that supports both operating systems.
Role-Based Access Control: As the environment is a shared resource between multiple teams, Splunk needed a way to integrate with Active Directory and assign resources by roles.
Consistent Dev Experience: With most developers already using Docker on their desktops, Splunk wanted to maintain a consistent experience with support for Docker APIs and the use of Compose files.
Vendor to Partner With: Given the scale of this project, Splunk wanted to work with a vendor who would be their partner. A bonus was that our offices were only a few blocks apart.

Results and What’s Next
Today, Docker EE powers Splunk’s CI and test platforms. As part of the CI solution, Splunk is leveraging Docker to create an agentless Jenkins workflow where each build stage is replaced by a container. This delivers a more consistent and scalable experience (2000 concurrent jobs today vs. 200 per master with standard agents) that is much more efficient as well. For performance testing, teams can reserve an entire host to get accurate performance results. These can be dynamically provisioned for different configurations in minutes instead of days.
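To illustrate the pattern (this is a generic sketch, not Splunk’s actual pipeline; the image names and stages are invented), a declarative Jenkinsfile can run every build stage in a throwaway container, so no static build agents need to be provisioned or maintained:

```groovy
// Jenkinsfile: each stage requests its own ephemeral Docker container
// as the build agent; the container is removed when the stage finishes.
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'golang:1.9' } }  // hypothetical build image
            steps { sh 'go build ./...' }
        }
        stage('Test') {
            agent { docker { image 'golang:1.9' } }
            steps { sh 'go test ./...' }
        }
    }
}
```

Because agents are created on demand, concurrency is bounded by cluster capacity rather than by a fixed pool of pre-provisioned build machines.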

At Splunk, the Docker EE environment has grown from 150 servers to 600, and from one team of developers to 385 unique developers who deploy between 10,000 and 20,000 containers a day. In addition to the fast deployment times, Splunk is seeing more efficient use of the hardware than before, averaging 75% utilization of the available capacity. With the platform in place, the developers at Splunk have a simple and fast way to provision and execute tests. As a result, Splunk has seen an increase in testing frequency, which is helping to improve product quality.


To learn more about Docker EE, check out the following resources:

Learn more about Docker EE
Try Docker EE for yourself
Contact Sales for more information


5 tips to learn Docker in 2018

As the holiday season ends, many of us are making New Year’s resolutions for 2018. Now is a great time to think about the new skills or technologies you’d like to learn. So much can change each year as technology progresses, and companies are looking to innovate or modernize their legacy applications and infrastructure. At the same time, the market for Docker jobs continues to grow as companies such as Visa, MetLife and Splunk adopt Docker Enterprise Edition (EE) in production. So how about learning Docker in 2018? Here are a few tips to help you along the way.

1. Play With Docker: the Docker Playground and Training site
Play with Docker (PWD) is a Docker playground and training site that lets users run Docker commands in a matter of seconds. It gives you the experience of a free Linux virtual machine in the browser, where you can build and run Docker containers and even create clusters. Check out this video from DockerCon 2017 to learn more about the project. The training site offers a large set of Docker labs and quizzes, from beginner to advanced level, for both developers and IT pros at training.play-with-docker.com.

2. DockerCon 2018
In case you missed it, DockerCon 2018 will take place at Moscone Center, San Francisco, CA on June 13-15, 2018. DockerCon is where the container community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate, and advanced users who are all looking to level up their skills and go home inspired. With 2 full days of training, more than 100 sessions, free workshops and hands-on labs, and the wealth of experience brought by each attendee, DockerCon is the place to be if you’re looking to learn Docker in 2018.

3. Docker Meetups
Look at our Docker Meetup Chapters page to see if there is a Docker user group in your city. With more than 200 local chapters in 81 countries, you should be able to find one near you! Attending local Docker meetups is an excellent way to learn Docker. The community leaders who run the user groups often schedule Docker 101 talks and hands-on training for newcomers!
Can’t find a chapter near you? Join the Docker Online meetup group to attend meetups remotely!

4. Docker Captains
Captains are Docker experts who are leaders in their communities, organizations or ecosystems. As Docker advocates, they are committed to sharing their knowledge and do so every chance they get! Captains are advisors, ambassadors, coders, contributors, creators, tool builders, speakers, mentors, maintainers and super users, and are required to be active stewards of Docker in order to remain in the program.
Follow all of the Captains on Twitter. Also check out the Captains GitHub repo to see what projects they have been working on. Docker Captains are eager to bring their technical expertise to new audiences, both offline and online, around the world – don’t hesitate to reach out to them via the social links on their Captain profile pages. You can filter the Captains by location, expertise, and more.

5. Training and Certification
The new Docker Certified Associate (DCA) certification, launched at DockerCon Europe on October 16, 2017, serves as a foundational benchmark for real-world container technology expertise with Docker Enterprise Edition. In today’s job market, container technology skills are highly sought after, and this certification sets the bar for well-qualified professionals. Professionals who earn the certification set themselves apart as uniquely qualified to run enterprise workloads at scale with Docker Enterprise Edition, and can display the certification logo on resumes and social media profiles. Want to be as prepared as you can be? Check out our study guide with sample questions and exam preparation tips before you schedule your exam.



GoPro Exits the Drone Business

After the botched launch of the "Karma" camera drone and in the face of tough competition, GoPro is giving up and withdrawing from the drone market. It had previously been reported that GoPro is laying off hundreds of employees.

Source: Heise Tech News

Support for Apache Spark 2.2.1 with Amazon SageMaker Integration and Apache Hive 2.3.2 on Amazon EMR Release 5.11.0

You can now use Apache Spark 2.2.1, Apache Hive 2.3.2, and the Amazon SageMaker integration with Apache Spark on Amazon EMR release 5.11.0. Spark 2.2.1 and Hive 2.3.2 contain several bug fixes and improvements. Amazon SageMaker Spark is an open-source Spark library for Amazon SageMaker, a fully managed service for building, training, and deploying machine learning models at any scale. It lets you interleave Spark stages and stages that interact with Amazon SageMaker in your Spark ML Pipelines, so you can train models on Spark DataFrames in Amazon SageMaker using Amazon-provided ML algorithms such as k-means clustering or XGBoost.
Source: aws.amazon.com