Interview with Nvidia's Vice President on Volta: transfer rate more important than more memory
Volta is the first GPU equally suited to both HPC and deep-learning applications, says Nvidia's Vice President Ian Buck.
Source: Heise Tech News
The cryptocurrency Bitcoin has doubled its price over the past few months and traded above 2,000 US dollars for the first time over the weekend. The WannaCry ransomware epidemic, however, likely played only a marginal role in this. (Bitcoin, Internet)
Source: Golem
A Nokia expert says Terabit DSL is likely to arrive too late. More important than the concept from DSL inventor John Cioffi, he argues, is the spread of G.fast and, in the future, XG-Fast, which is now being developed as G.mgfast. (DSL, Telecommunications)
Source: Golem
After long-standing criticism of its tolerance toward fraudulent shops, Amazon is now apparently cracking down hard. As an unpleasant side effect, many innocent merchants are currently left without revenue.
Source: Heise Tech News
Although a majority of Germans consider email encryption important, only 16 percent actually use it to protect their electronic mail. According to a survey, many find the setup too complicated.
Source: Heise Tech News
Drawn nudity is okay, photographed nudity is not: with guidelines like these, thousands of moderators at Facebook are supposed to keep the timeline "clean." A British newspaper has now obtained leaked internal documents. (Facebook, Social Networks)
Source: Golem
Deutsche Bahn customers can once again pay with PayPal on the ICE. The railway has fixed a DNS misconfiguration that had previously led to strange error messages.
Source: Heise Tech News
The softball-sized UE Wonderboom produces clear, robust sound with surprisingly full bass.

No outdoor summer scene is complete without a portable Bluetooth speaker, the quintessential good-weather gadget. They're popular largely because they're affordable and, like most technology these days, mobile-friendly. But with over 25,000 results for "portable Bluetooth speaker" on Amazon alone, the number of options can be overwhelming for anyone looking for something that's cheap and good.
The thing is, most of those speakers on Amazon are bad (I know, because I’ve tried dozens of them). But the Wonderboom, Ultimate Ears’ new $100 entry-level speaker unveiled in March, doesn’t suck. It’s actually pretty great. I’ve been reviewing the speaker for a month and a half, alongside its closest competitor, the JBL Flip 4, which is also $100.
The two models have everything you’d want from party-friendly, portable speakers. Both are waterproof, rugged, come in a variety of colors, and have day-long battery lives. But in my testing, the Wonderboom was better than the Flip 4 where it really counts: playing music.
BuzzFeed News; Ultimate Ears

The Wonderboom is designed to make up for the lack of bass in its predecessor, last year's UE Roll. The speaker handled songs like Chance the Rapper's "No Problem" impressively well, with full-sounding bass and crisp high frequencies.
In a blind music test, BuzzFeed video producer Allyson Laquian decisively chose the Wonderboom as the better speaker as soon as LCD Soundsystem’s “Dance Yrself Clean” came on. The Wonderboom accentuated the song’s *thump* very clearly, while the Flip 4 sounded stilted in comparison.
My boyfriend Will also preferred the Wonderboom, but for a different reason. The treble on the JBL Flip 4 is so high, he said, that it's "like having a snake in your ear."
I agree. The JBL Flip 4 tends to over-accentuate treble at its highest volumes (close to 90 decibels, its maximum output). And while I found the Wonderboom better at producing bass than the JBL speaker, it too starts to break down at high volumes (close to 86 decibels, its maximum).
The Wonderboom sounds better than the JBL Flip 4 not only because of the quality of its drivers, but also because of how those drivers are placed in the device.
The JBL speaker is shaped like a cylinder and has two "bass radiators" on its ends that vibrate to the beat. It's designed to play music while upright or on its side, but during my testing it sounded distorted while upright, because that orientation mutes the bass. Additionally, the speaker has a "front" and a "back": you can tell when you're behind it, because the music gets quieter.
Nicole Nguyen / BuzzFeed News

The Wonderboom, which is shaped like a small but portly grapefruit, only has one orientation: upright. It also doesn't have a "front" and "back," thanks to what Ultimate Ears calls "360-degree sound," created by two active and two passive drivers positioned around the speaker. Music comes out in all directions, so whether you're in front of, behind, or to the side of the Wonderboom, it sounds the same.
Nicole Nguyen / BuzzFeed News

Pictured here is my beloved UE Roll which is, sadly, now at the bottom of Lake Berryessa in California. The UE Roll is waterproof and comes with a floating life preserver, designed specifically for the speaker, but because I naively thought the stretchy bungee cord that comes with the speaker was strong enough to hang onto the side of the boat, I didn’t bring the preserver along. The Roll did not survive a choppy ride back to the marina.
BuzzFeed News / Nicole Nguyen
Source: BuzzFeed ("This Is The Waterproof, Bass-Bumping Bluetooth Speaker You Want")
I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.
These last 3 years have taught me a lot about how to efficiently run an IT infrastructure at scale, and what’s better, they proved that many of the concepts I had been previously preaching to customers (automate, automate, automate!) are not only viable, but also required to handle ever-growing requirements with a limited team and budget.
So, would you like to know what I have learnt so far in this 3-year journey?
Processes
The OpenStack community relies on several processes to develop a cloud operating system. Most of these processes have evolved over time, and together they allow a very large contributor base to collaborate effectively. We also need to manage a complex infrastructure to support these processes.
Infrastructure as code: there are several important servers in the OpenStack infrastructure, providing service to thousands of users every day: the Git repositories, the Gerrit code review infrastructure, the CI bits, etc. The deployment and configuration of all those pieces is automated, as you would expect, and the Puppet modules and Ansible playbooks used to do so are available in their own Git repositories. There can be no snowflakes, no "this server requires a very specific configuration, so I have to log on and do it manually" excuses. If it cannot be automated, it is not efficient enough. Also, storing our infrastructure definitions as code allows us to take changes through peer review and CI before applying them in production. More about that later.
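The core idea can be boiled down to a few lines: hosts are described by a declarative desired state, and an idempotent tool converges them toward it. This is a minimal sketch of that principle only; the host names and the `apply` helper are invented for illustration, and the real setup uses Puppet modules and Ansible playbooks.

```python
# Hypothetical desired state for two infrastructure hosts
# (names are illustrative, not the real OpenStack servers).
desired_state = {
    "review.example.org": {"packages": ["gerrit"], "port": 8080},
    "git.example.org": {"packages": ["git", "cgit"], "port": 80},
}

def apply(state, current=None):
    """Converge hosts toward the declared state.

    Running it a second time changes nothing, which is exactly
    why there can be no hand-configured snowflake servers.
    """
    current = current if current is not None else {}
    changed = [host for host, cfg in state.items() if current.get(host) != cfg]
    current.update(state)
    return current, changed
```

Because the definitions are plain data in a Git repository, a change to `desired_state` is itself a patch that goes through review and CI before it touches production.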
Development practices: each OpenStack project follows the same structure:
There is a Project Team Leader (PTL), elected from the project contributors every six months. A PTL acts as a project coordinator, rather than a manager in the traditional sense, and is usually expected to rotate every few cycles.
There are several core reviewers, people with enough knowledge on the project to judge if a change is correct or not.
And then we have multiple project contributors, who can create patches and peer-review other people’s patches.
Whenever a patch is created, it is sent to review using a code review system, and then:
It is checked by multiple CI jobs, that ensure the patch is not breaking any existing functionality.
It is reviewed by other contributors.
Peer review is done by core reviewers and other project contributors. Each of them has the right to cast different votes:
A +2 vote can only be set by a core reviewer, and means that the code looks good to that reviewer, who believes it can be merged as-is.
Any project contributor can set a +1 or -1 vote. +1 means “code looks ok to me” while -1 means “this code needs some adjustments”. A vote by itself does not provide a lot of feedback, so it is expanded by some comments on what should be changed, if needed.
A -2 vote can only be set by a core reviewer, and means that the code cannot be merged until that vote is lifted. -2 votes can be caused by code that goes against some of the project design goals, or just because the project is currently in feature freeze and the patch has to wait for a while.
When the patch passes all CI jobs, and received enough +2 votes from the core reviewers (usually two), it goes through another round of CI jobs and is finally merged in the repository.
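The merge rules above reduce to a small predicate: CI must be green, no core reviewer may have a standing -2, and enough core +2 votes must be present. The sketch below is an illustration of those rules, not Gerrit's actual API; the `Vote` type and `can_merge` function are assumed names.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    reviewer: str
    value: int       # -2, -1, +1, or +2
    is_core: bool    # only core reviewers may cast -2 or +2

def can_merge(votes, ci_passed, required_plus_two=2):
    """Return True if a patch may enter the gate: CI jobs green,
    no standing -2 from a core reviewer, and enough core +2 votes."""
    if not ci_passed:
        return False
    if any(v.value == -2 and v.is_core for v in votes):
        return False
    plus_twos = sum(1 for v in votes if v.value == 2 and v.is_core)
    return plus_twos >= required_plus_two
```

Note that a single core -2 blocks the merge regardless of how many +2 votes accumulate, which matches how feature freezes and design objections are enforced.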
This may seem like a complex process, but it has several advantages:
It ensures a certain level of quality on the master branch, since we have to ensure that CI jobs are passing.
It encourages peer reviews, so code should always be checked by more than one person before merging.
It engages core reviewers, because they need to have enough knowledge of the project codebase to decide if a patch deserves a +2 vote.
Use the cloud: it would not make much sense to develop a cloud operating system if we could not use the cloud ourselves, would it? As expected, all the OpenStack infrastructure is hosted in OpenStack-based clouds donated by different companies. Since the infrastructure deployment and configuration is automated, it is quite easy to manage in a cloud environment. And as we will see later, it is also a perfect match for our continuous integration processes.
Automated continuous integration: this is a key part of the development process in the OpenStack community. Each month, 5000 to 8000 commits are reviewed in all the OpenStack projects. This requires a large degree of automation in testing, otherwise it would not be possible to review all those patches manually.
Each project defines a number of CI jobs, covering unit and integration tests. These jobs are defined as code using Jenkins Job Builder, and reviewed just like any other code contribution.
For each commit:
Our CI automation tooling will spawn short-lived VMs in one of the OpenStack-based clouds, and add them to the test pool
The CI jobs will be executed on those short-lived VMs, and the test results will be fed back as part of the code review
The VM will be deleted at the end of the CI job execution
This process, together with the requirement for CI jobs to pass before merging any code, minimizes the amount of regressions in our codebase.
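The per-commit lifecycle above (spawn a short-lived VM, run the jobs, report results, always delete the VM) can be sketched as follows. This is an illustration of the pattern only; the `FakeCloud` class and `run_ci_for_commit` function are invented stand-ins, not nodepool's or Zuul's real interfaces.

```python
class FakeCloud:
    """Stand-in for one of the donated OpenStack clouds in the CI pool."""
    def __init__(self):
        self.active_vms = 0

    def spawn_vm(self):
        self.active_vms += 1
        return f"vm-{self.active_vms}"

    def delete_vm(self, vm):
        self.active_vms -= 1

def run_ci_for_commit(commit, jobs, cloud):
    """Run every CI job for one commit on a fresh, disposable VM."""
    vm = cloud.spawn_vm()                 # short-lived node joins the pool
    results = {}
    try:
        for name, job in jobs.items():
            results[name] = job(vm, commit)   # True means the job passed
    finally:
        cloud.delete_vm(vm)               # node is reclaimed, pass or fail
    verdict = all(results.values())       # fed back into the code review
    return verdict, results
```

Deleting the VM in a `finally` block mirrors why this scales to thousands of commits a month: every run starts from a clean node, and no state leaks between jobs.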
Use (and contribute to) Open Source: one of the “Four Opens” that drive the OpenStack community is Open Source. As such, all of the development and infrastructure processes happen using Open Source software. And not just that, the OpenStack community has created several libraries and applications with great potential for reuse outside the OpenStack use case. Applications like Zuul and nodepool, general-purpose libraries like pbr, or the contributions to the SQLAlchemy library are good examples of this.
Tools
So, which tools do we use to make all of this happen? As stated above, the OpenStack community relies on several open source tools to do its work:
Infrastructure as code
Git to store the infrastructure definitions
Puppet and Ansible as configuration management and orchestration tools
Development
Git as a code repository
Gerrit as a code review and repository management tool
Etherpad as a collaborative editing tool
Continuous integration
Zuul as an orchestrator of the gate checks
Nodepool to automate the creation and deletion of short-lived VMs for CI jobs across multiple clouds
Jenkins to execute CI jobs (actually, it has now been replaced by Zuul itself)
Jenkins Job Builder as a tool to define CI jobs as code
Replicating this outside OpenStack
It is perfectly possible to replicate this model outside the OpenStack community. We use it in RDO, too! Although we are very closely related to OpenStack, we have our own infrastructure and tools, following a very similar process for development and infrastructure maintenance.
We use an integrated solution, SoftwareFactory, which includes most of the common tools described earlier (and then some other interesting ones). This allows us to simplify our toolset and have:
Infrastructure as code
https://github.com/rdo-infra contains the definitions of our infrastructure components
Development and continuous integration
https://review.rdoproject.org, our SoftwareFactory instance, to integrate our development and CI workflow
Our own RDO Cloud as an infrastructure provider
You can do it, too
Implementing this way of working in an established organization is not a straightforward task. It requires your IT department and application owners to become as cloud-conscious as possible, to reduce the number of micro-managed systems to a minimum, and to establish a whole new way of managing your development… But the results speak for themselves, and the OpenStack community (and RDO!) is proof that it works.
Quelle: RDO
The Arduino Cinque microcontroller board combines hobbyists' two new favorite microcontrollers. Although no price has been announced yet, the board will not be exactly cheap. (Arduino, Technology)
Source: Golem