Dismantling of the Biblis nuclear power plant has begun
Within 15 years, the plant's components are to be dismantled to the point where the buildings and the site can be released from the scope of the German Atomic Energy Act.
Source: Heise Tech News
Facebook, Twitter, and YouTube now remove flagged posts from their sites twice as often as they did six months ago. In forty percent of cases, however, the desired response still failed to materialize.
Source: Heise Tech News
At DockerCon 2017 we introduced LinuxKit: A toolkit for building secure, lean and portable Linux subsystems. For this Online Meetup, Docker Technical Staff member Rolf Neugebauer gave an introduction to LinuxKit, explained the rationale behind its development and gave a demo on how to get started using it.
Watch the recording and slides
Additional Q&A
You said the ONBOOT containers are run sequentially; does it wait for one to finish before it starts the next?
Yes, the next ONBOOT container is only started once the previous one has finished.
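As a rough, hypothetical sketch (the image names and tags below are placeholders, not taken from the meetup), the ordering comes straight from the onboot list in the LinuxKit YAML definition:

```yaml
# Hypothetical excerpt from a LinuxKit image definition.
# Containers under "onboot" run one after another, in list order;
# each must exit before the next one is started.
onboot:
  - name: sysctl                  # runs first
    image: linuxkit/sysctl:<tag>  # placeholder tag
  - name: dhcpcd                  # runs only after sysctl has exited
    image: linuxkit/dhcpcd:<tag>  # placeholder tag
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
```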
How do you build your own kernel to use?
See ./docs/kernels.md
How would you install other software that is not a container per se, e.g. sshd?
Everything apart from the init process and runc/containerd runs in a container. There is an example under ./examples/sshd.yml showing how to run an SSH server.
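For illustration only, not a copy of the shipped example, a minimal definition along the lines of ./examples/sshd.yml might look roughly like this, with sshd running as a long-lived service container (the image tag and key path are assumptions):

```yaml
# Hypothetical sketch: sshd as a long-running LinuxKit service container.
services:
  - name: sshd
    image: linuxkit/sshd:<tag>         # placeholder image reference
files:
  - path: root/.ssh/authorized_keys    # public key baked into the image at build time
    source: ~/.ssh/id_rsa.pub          # assumed location of your key
    mode: "0600"
```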
Can I load kernel modules – iptables/conntrack for example?
Yes. You can compile modules and add them to the image as described in ./docs/kernels.md. There is an open issue to allow compilation of kernel modules at run time.
Does it have to be Alpine Linux, or can it be, say, a minimal Debian?
We mainly use Alpine for packages. The base root filesystem is basically busybox with a minimal init system, which we are planning to replace with a custom init program. You can create packages based on Debian if you like.
How do we make data persistent, like Docker volumes, outside of the LinuxKit box?
There are examples of how to format/mount and use persistent disks, e.g. ./examples/docker.yml, which uses a persistent disk to store Docker images.
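As a hypothetical sketch of that pattern (the image references and mount helper shown here are assumptions; ./examples/docker.yml in the repository is the authoritative version), an external disk is formatted and mounted by onboot containers before the services that need it start:

```yaml
# Hypothetical excerpt: prepare a persistent disk during boot so data
# written under /var/lib/docker survives reboots of the LinuxKit image.
onboot:
  - name: format                  # formats the attached disk if it has no filesystem yet
    image: linuxkit/format:<tag>  # placeholder tag
  - name: mount                   # mounts the formatted disk at the given path
    image: linuxkit/mount:<tag>   # placeholder tag
    command: ["/usr/bin/mountie", "/var/lib/docker"]
```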
Bonus Talk: LinuxKit Security SIG
Learn more about LinuxKit and other components of the Moby Project
Attend the Moby Summit on 6/19 in San Francisco
Read more about LinuxKit
Stay up to date! Weekly LinuxKit Status Reports
More questions about LinuxKit? Join the Docker Community Slack: #linuxkit channel
The post Online meetup recap: Introduction to LinuxKit appeared first on Docker Blog.
Source: https://blog.docker.com/feed/
The new Horizon box from cable network operator Unitymedia no longer includes a modem but is said to support 4K content. A dedicated 4K channel is also slated to launch in Germany this autumn. (Netflix, cable network)
Source: Golem
Very old operating systems fail on Ryzen processors due to a bug in the Virtual-8086 Mode Extensions (VME); BIOS updates with AGESA 1.0.0.6 are expected to fix the problem.
Source: Heise Tech News
The operator of the former drug trading platform on the TOR network cannot hope for a reduction of his sentence for the time being. His request to appeal his life sentence was denied.
Source: Heise Tech News
"Auf staatlicher Ebene machen wir so etwas nicht, und wir haben es auch nicht vor", sagte der russische Präsident. Russland wolle im Gegenteil gegen Cyber-Kriminalität vorgehen.
Quelle: Heise Tech News
At a symposium on connected driving, Germany's Federal Data Protection Commissioner Andrea Voßhoff argued that technical type-approval requirements should be extended to cover privacy and IT security.
Source: Heise Tech News
Applications often consist of more than one executable and include programs that need to be launched together. Learn about the different application coupling options Kubernetes offers with respect to ownership, colocation, and communication requirements, and see a concrete example with all the characteristics of a real-world setup.
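As a minimal, hypothetical illustration of the tightest coupling option (the names and images below are invented for this sketch, not taken from the article), two cooperating programs can be colocated in one pod so they share scheduling, lifecycle, localhost networking, and a volume:

```yaml
# Hypothetical pod colocating a main application with a helper container.
# Both containers are scheduled onto the same node, start and stop together,
# and exchange data through the shared emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: coupled-app                        # illustrative name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: main-app
      image: example.com/main-app:1.0      # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: log-forwarder                  # helper launched together with main-app
      image: example.com/log-forwarder:1.0 # placeholder image
      volumeMounts:
        - name: shared-data
          mountPath: /data
          readOnly: true
```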
Source: OpenShift
By Justin Kestelyn, Google Cloud Platform
Now that Cloud Spanner is generally available for mission-critical production workloads, it’s time to tell the story of how Spanner evolved into a global, strongly consistent relational database service.
Recently the Spanner team presented a new paper at SIGMOD ‘17 that offers some fascinating insights into this aspect of Spanner’s “database DNA” and how it developed over time.
Spanner was originally designed to meet Google’s internal requirements for a global, fault-tolerant service to power massive business-critical applications. Today Spanner also embraces the SQL functionality, strong consistency and ACID transactions of a relational database. For critical use cases like financial transactions, inventory management, account authorization and ticketing/reservations, customers will accept no substitute for that functionality.
For example, there’s no “spectrum” of less-than-strong consistency levels that will satisfy the mission-critical requirement for a single transaction state that’s maintained worldwide; only strong consistency will do. Hence, few if any customers would choose to use an eventually-consistent database for critical OLTP. For Cloud Spanner customers like JDA, Snap and Quizlet, this unique feature set is already resonating.
Here are a few highlights from the paper:
Although Spanner was initially designed as a NoSQL key-value store, new requirements led to an embrace of the relational model, as well. Spanner’s architects had a relatively specific goal: to provide a service that could support fault-tolerant, multi-row transactions and strong consistency across data centers (with significant influence — and code — from Bigtable). At the same time, internal customers building OLTP applications also needed a database schema, cross-row transactions and an expressive query language. Thus early in Spanner’s lifecycle, the team drew on Google’s experience building the F1 distributed relational database to bring robust relational semantics and SQL functionality into the Spanner architecture. “These changes have allowed us to preserve the massive scalability of Spanner, while offering customers a powerful platform for database applications,” the authors wrote, adding that, “From the perspective of many engineers working on the Google infrastructure, the SQL vs. NoSQL dichotomy may no longer be relevant.”
The Spanner SQL query processor, while recognizable as a standard implementation, has unique capabilities that contribute to low-latency queries. Features such as query range extraction (for runtime analysis of complex expressions that are not easily re-written) and query restarts (compensating for failures, resharding, and other anomalies without significant latency impact) mitigate the complexities of highly distributed queries that would otherwise contribute to latency. Furthermore, the query processor serves both transactional and analytical workloads for low-latency or long-running queries.
Long-term investments in SQL tooling have produced a familiar RDBMS-like user experience. As part of a companywide effort to standardize on common SQL functionality for all its relational services (Spanner, Dremel/BigQuery, F1, and so on), Spanner’s user experience emphasizes ANSI SQL constructs and support for nested data as a first-class citizen. “SQL has provided significant additional value in expressing more complex data access patterns and pushing computation to the data,” the authors wrote.
Spanner will soon rely on a new columnar format called Ressi designed for database-like access patterns (for hybrid OLAP/OLTP workloads). Ressi is optimized for time-versioned (rapidly changing) data, allowing queries to more efficiently find the most recent values. Later in 2017, Ressi will replace the SSTables format inherited from Bigtable, which, although highly robust, is not explicitly designed for performance.
All in all, “Our path to making Spanner a SQL system led us through the milestones of addressing scalability, manageability, ACID transactions, relational model, schema DDL with indexing of nested data, to SQL,” the authors wrote.
For more details, read the full paper here.
Source: Google Cloud Platform