Azure introduces a new blockchain proof of concept framework for developers

Microsoft is laser-focused on enabling and accelerating enterprise adoption of blockchain technologies. Our blockchain offerings are well known for providing the ability to rapidly and consistently deploy blockchain infrastructure.

However, as our customers and partners began to build their distributed applications, they identified the application layer as an area where Microsoft could take even greater steps to reduce the time and cost associated with blockchain Proof of Concept (PoC) projects. 

More time on smart contracts, less time on “scaffolding”

When our customers and partners estimate the time and costs for developing a blockchain PoC, they often find that it can take 8-12 weeks and cost as much as $300,000. Besides being time consuming and expensive, this is a huge missed opportunity. Quickly understanding the viability of a PoC can accelerate a business’s understanding of blockchain and save the time and cost associated with a less impactful project.

Microsoft identified that most of the time in these PoC projects was spent developing code and building capabilities that surrounded the blockchain, often referred to as “scaffolding.”  That scaffolding typically required building a responsive web client, writing and deploying a gateway API, implementing support for off-chain storage in technologies such as SQL DB, building out reporting and analytics, and integrating identity and key vault services into the solution.

Lower costs and faster time to value with a PoC Framework

We realized there was a common set of challenges related to PoC development that we could address by creating a type of “Proof of Concept Framework” that would dramatically reduce the amount of time needed to build a blockchain PoC.

The framework provides code assets and ARM template driven deployment for all the scaffolding needed for blockchain PoCs, including the blockchain network, a gateway API, a responsive web application, Azure Active Directory integration, Azure Key Vault integration, SQL DB that is configured and collecting on-chain data, and a set of supporting code and services such as a Hashing Service and a Signing Service. The framework uses Azure’s Event Hubs at its core, which provides the ability to readily add new capabilities such as sending raw data to Azure Data Lake or providing transaction data to Azure Search.
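The post doesn’t describe how the framework’s Hashing Service is implemented, but the general idea behind pairing a blockchain with off-chain storage can be sketched: hash the off-chain record so that only a small, tamper-evident digest needs to live on-chain. Everything below (the function and field names) is a hypothetical illustration, not the framework’s actual API:

```python
import hashlib
import json

def hash_document(document: dict) -> str:
    """Canonicalize an off-chain record and return a SHA-256 digest that
    could be anchored on-chain as tamper evidence (illustrative sketch)."""
    # sort_keys + fixed separators give a stable byte representation
    canonical = json.dumps(document, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {"contractId": 42, "state": "InTransit", "temperature": 4}
digest = hash_document(record)
print(digest)  # 64 hex characters; any change to the record changes the digest
```

If the SQL DB copy of the record is ever altered, recomputing the hash and comparing it to the on-chain digest reveals the discrepancy.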

The framework also makes it possible to create the web application without writing any code. It uses meta-data provided for smart contracts to dynamically deliver a contextual user experience for participants. Since the framework populates SQL DB as an off-chain store, it enables an organization to leverage existing skills and tools to light up additional capabilities such as APIs, reporting with PowerBI, chat bots, Azure Data Factory, R, and machine learning.
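The post doesn’t publish the smart-contract metadata schema, but the idea of generating a UI from metadata instead of hand-written code can be sketched. The metadata shape and `render_form` below are invented for illustration and are not the framework’s actual format:

```python
# Hypothetical contract metadata; the real framework's schema is not
# described in the post.
contract_metadata = {
    "DisplayName": "Asset Transfer",
    "Properties": [
        {"Name": "Owner", "Type": "string"},
        {"Name": "Price", "Type": "int"},
    ],
}

def render_form(metadata: dict) -> str:
    """Turn contract metadata into a crude HTML form, one input per property."""
    fields = "\n".join(
        f'<label>{p["Name"]}<input name="{p["Name"]}" data-type="{p["Type"]}"></label>'
        for p in metadata["Properties"]
    )
    return f'<form><h2>{metadata["DisplayName"]}</h2>\n{fields}\n</form>'

print(render_form(contract_metadata))
```

Adding a property to the metadata adds a field to the form with no UI code changes, which is the essence of the contextual experience described above.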

With the framework, customers and partners can focus on creating truly innovative applications that demonstrate the potential of blockchain, and spend less time and resources on the integration tasks required to get even a basic PoC up and running.

At the Consensus Conference in New York, we’re looking forward to the opportunity to demonstrate the framework for the first time, and to connect with customers and partners to discuss how it can help significantly accelerate blockchain PoC development.
Source: Azure

Running (and recording) fully automated GUI tests in the cloud

The problem

Software Factory is a
full-stack software development platform: it hosts repositories, a bug tracker and
CI/CD pipelines. It is the engine behind RDO’s CI pipeline,
but it is also very versatile and suited for all kinds of software projects. Also,
I happen to be one of Software Factory’s main contributors. :)

Software Factory has many cool features that I won’t list here, but among these
is a unified web interface that helps you navigate through its components. Obviously
we want this interface thoroughly tested; ideally within Software Factory’s
own CI system, which runs on test nodes provisioned on demand on an OpenStack
cloud (if you have read Tristan’s previous article,
you might already know that Software Factory’s nodes are managed and built
by Nodepool).

When it comes to testing web GUIs, Selenium is
quite ubiquitous because of its many features, among which:

* it works with most major browsers, on every operating system
* it has bindings for every major language, making it easy to write GUI tests in your language of choice.¹

¹ Our language of choice, today, will be python.

Due to the very nature of GUI tests, however, it is not easy to fully automate
Selenium tests into a CI pipeline:

* usually these tests are run on dedicated physical machines for each operating system to test, making them choke points and sacrificing resources that could be used somewhere else.
* a failing test usually means that there is a problem of a graphical nature; if the developer or the QA engineer does not see what happens, it is difficult to qualify and solve the problem. Therefore human eyes and validation are still needed to an extent.

Legal issues preventing running Mac OS-based virtual machines on non-Apple
hardware aside, it is
possible to run Selenium tests on virtual machines without need for a physical
display (aka “headless”) and also capture what is going on during these tests for
later human analysis.

This article will explain how to achieve this on Linux-based distributions,
more specifically on CentOS.

Running headless (or “Look Ma! No screen!”)

The secret here is to install Xvfb (X virtual framebuffer) to emulate a display
in memory on our headless machine …

My fellow Software Factory dev team and I have configured Nodepool to provide us
with customized images based on CentOS on which to run any kind of
jobs. This makes sure that our test nodes are always “fresh”, in other words that
our test environments are well defined, reproducible at will and not tainted by
repeated tests.

The customization occurs through post-install scripts: if you look at our
configuration repository,
you will find the image we use for our CI tests is sfstack-centos-7 and its
customization script is sfstack_centos_setup.sh.

We added the following commands to this script in order to install
the dependencies we need:

```bash
sudo yum install -y firefox Xvfb libXfont Xorg jre
sudo mkdir /usr/lib/selenium /var/log/selenium /var/log/Xvfb
sudo wget -O /usr/lib/selenium/selenium-server.jar http://selenium-release.storage.googleapis.com/3.4/selenium-server-standalone-3.4.0.jar
sudo pip install selenium
```

The dependencies are:

* __Firefox__, the browser on which we will run the GUI tests
* __libXfont__ and __Xorg__ to manage displays
* __Xvfb__
* __JRE__ to run the __selenium server__
* the __python selenium bindings__

Then, once the test environment is set up, we start the Selenium server and Xvfb
in the background:

```bash
/usr/bin/java -jar /usr/lib/selenium/selenium-server.jar -host 127.0.0.1 >/var/log/selenium/selenium.log 2>/var/log/selenium/error.log &
Xvfb :99 -ac -screen 0 1920x1080x24 >/var/log/Xvfb/Xvfb.log 2>/var/log/Xvfb/error.log &
```

Finally, set the display environment variable to :99 (the Xvfb display) and run your tests:

```bash
export DISPLAY=:99
./path/to/seleniumtests
```

The tests will run as if the VM were plugged into a display.

Taking screenshots

With this headless setup, we can now run GUI tests on virtual machines within our
automated CI; but we need a way to visualize what happens in the GUI if a test
fails.

It turns out that the selenium bindings have a screenshot feature that we can use
for that. Here is how to define a decorator in python that will save a screenshot
if a test fails.

```python
import functools
import os
import unittest
from selenium import webdriver

[…]

def snapshot_if_failure(func):
    @functools.wraps(func)
    def f(self, *args, **kwargs):
        try:
            func(self, *args, **kwargs)
        except Exception as e:
            path = '/tmp/gui/'
            if not os.path.isdir(path):
                os.makedirs(path)
            screenshot = os.path.join(path, '%s.png' % func.__name__)
            self.driver.save_screenshot(screenshot)
            raise e
    return f

class MyGUITests(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.maximize_window()
        self.driver.implicitly_wait(20)

    @snapshot_if_failure
    def test_login_page(self):
        […]
```

If test_login_page fails, a screenshot of the browser at the time of the exception
will be saved under /tmp/gui/test_login_page.png.

Video recording

We can go even further and record a video of the whole testing session, as it
turns out that ffmpeg can capture X sessions with the “x11grab” option. This
is interesting beyond simple test debugging, as the video can be used to illustrate
the use cases that you are testing, for demos or fancy video documentation.

In order to have ffmpeg on your test node, you can either add
compilation steps to the
node’s post-install script or go the easy way and use an external repository:

```bash
# install ffmpeg
sudo rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
sudo yum update
sudo yum install -y ffmpeg
```

To record the Xvfb buffer, you’d simply run:

```bash
export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1$DISPLAY -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi
```

The catch is that ffmpeg expects the user to press q to stop the recording
and save the video (killing the process will corrupt the video). We can use
tmux to save the day; run your GUI tests like so:

```bash
export DISPLAY=:99
tmux new-session -d -s guiTestRecording 'export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1'$DISPLAY' -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi && sleep 5'
./path/to/seleniumtests
tmux send-keys -t guiTestRecording q
```
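For readers who prefer to drive this from Python, here is a sketch that only assembles the same tmux/ffmpeg command lines shown above; nothing is executed, and the session name, flags, and paths are taken directly from the shell example:

```python
def recording_session(display=":99", output="/tmp/gui/tests.avi"):
    """Build the tmux command lines for starting and gracefully stopping
    an ffmpeg x11grab recording (sketch mirroring the shell snippet)."""
    ffmpeg = (
        "ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1%s "
        "-codec:v mpeg4 -r 16 -vtag xvid -q:v 8 %s" % (display, output)
    )
    return [
        # start recording in a detached tmux session
        ["tmux", "new-session", "-d", "-s", "guiTestRecording",
         ffmpeg + " && sleep 5"],
        # after the tests: send 'q' so ffmpeg finalizes the video file
        ["tmux", "send-keys", "-t", "guiTestRecording", "q"],
    ]

for cmd in recording_session():
    print(" ".join(cmd))
```

Each list could be handed to `subprocess.run`, with the Selenium tests executed between the two tmux calls, exactly as in the shell version.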

Accessing the artifacts

Nodepool destroys VMs when their job is done in order to free resources (that is,
after all, the spirit of the cloud). That means that our pictures and videos will
be lost unless they’re uploaded to an external storage.

Fortunately Software Factory handles this: predefined publishers can be appended
to our job definitions, one of which allows pushing any artifact to a Swift object
store. We can then retrieve our videos and screenshots easily.

Conclusion

With little effort, you can now run your Selenium tests on virtual hardware to
further automate your CI pipeline, while still ensuring human supervision.

Further reading

This article
helped a lot in setting up our selenium environment.
If you want to run your tests on docker containers rather than VMs, this article
explains how to configure Xvfb for
that.
Apparently Selenium can run on headless Windows VMs as well,
although I have not tested this.

Source: RDO

VR: Google announces Daydream update

At its developer conference, Google announced Daydream 2.0, an update intended to improve the VR system and prepare it for the upcoming standalone headsets. New features include a dashboard as well as support for Google Cast and the Chrome browser. (I/O 2017, Google)
Source: Golem

Here's Google's New Push To Make Virtual Reality Better And More Accessible

At Google IO on Thursday, the company announced a series of efforts aimed at making the immersive digital worlds of virtual reality (VR) and augmented reality (AR) more practical and universally accessible.

Standalone Daydream (Google's VR brand) headsets are coming, along with a reference design that other companies can follow. The first partners are HTC and Lenovo. Previously, Google’s efforts have required a Daydream-ready phone inserted into a viewer. Currently, there are only eight such phones on the market, although a few more, including Samsung's Galaxy S8, are on the way.

The standalone headset experience is more like what you currently get in an Oculus or HTC Vive headset in that it responds not just to turning your head, but to physical movement as well. You can not only look all around, you can get up and walk — at least short distances. You can peer around corners and see parallax movement of spatial objects as you explore VR worlds.


But the major difference is that Google’s solution doesn’t require additional devices to be set up — you don’t need to connect it to a computer or set up towers that track and sense motion. Instead, it tracks motion with the device itself, relying on something called WorldSense (which builds on the company’s Tango indoor mapping and spatial awareness technology).

The company also showed off Seurat, named for the painter, a new developer tool that’s meant to help create vivid, high resolution graphics, and video that runs in real time. The idea is that this will allow developers to create ever more-realistic worlds, even on a mobile unit that doesn’t have the power of a connected desktop machine.

There were other developer-oriented announcements as well, designed to make it easier and more attractive to develop for Google’s platform, as well as things designed to make VR a less solitary experience, like new sharing tools, ways to project what you see in a device onto a TV, or ways to watch VR video in YouTube with other people.


Yet the announcements themselves were almost less interesting than the weight Google is putting behind its push. For the second year in a row, the company devoted much of the time and resources of its annual developer’s conference to pushing VR and AR (or as Clay Bavor, who heads the company’s efforts, calls the combination of the two and the spectrum they lie upon, immersive computing).

There is still a long way to go before these immersive computing experiences are mainstream. But, especially when taken alongside Facebook’s similar push, we are clearly entering an era of new interfaces and inputs — away from the keyboard and touchscreen and into an era that’s more guided by what we see around us, the things we hear and say, and the way we physically move through the world.

The current devices meant to achieve almost all of this are, well, clunky. But a picture is starting to emerge of where this all is going, and it’s a place where the devices themselves fade away, and we begin to interact with them in more natural, human ways. We will see and hear, talk and gesture. And sometimes even type.

Source: BuzzFeed

Dancing at the Lip of a Volcano: The Kubernetes Security Process – Explained

Editor’s note: Today’s post is by Jess Frazelle of Google and Brandon Philips of CoreOS about the Kubernetes security disclosures and response policy.

Software running on servers underpins ever growing amounts of the world’s commerce, communications, and physical infrastructure. And nearly all of these systems are connected to the internet, which means vital security updates must be applied rapidly. As software developers and IT professionals, we often find ourselves dancing on the edge of a volcano: we may either fall into magma induced oblivion from a security vulnerability exploited before we can fix it, or we may slide off the side of the mountain because of an inadequate process to address security vulnerabilities. The Kubernetes community believes that we can help teams restore their footing on this volcano with a foundation built on Kubernetes. And the bedrock of this foundation requires a process for quickly acknowledging, patching, and releasing security updates to an ever growing community of Kubernetes users.

With over 1,200 contributors and over a million lines of code, each release of Kubernetes is a massive undertaking staffed by brave volunteer release managers. These normal releases are fully transparent and the process happens in public. However, security releases must be handled differently to keep potential attackers in the dark until a fix is made available to users.

We drew inspiration from other open source projects in order to create the Kubernetes security release process. Unlike a regularly scheduled release, a security release must be delivered on an accelerated schedule, and we created the Product Security Team to handle this process. This team quickly selects a lead to coordinate work and manage communication with the persons that disclosed the vulnerability and the Kubernetes community.

The security release process also documents ways to measure vulnerability severity using the Common Vulnerability Scoring System (CVSS) Version 3.0 Calculator. This calculation helps inform decisions on release cadence in the face of holidays or limited developer bandwidth. By making severity criteria transparent we are able to better set expectations and hit critical timelines during an incident, where we strive to:

* Respond to the person or team who reported the vulnerability and staff a development team responsible for a fix within 24 hours
* Disclose a forthcoming fix to users within 7 days of disclosure
* Provide advance notice to vendors within 14 days of disclosure
* Release a fix within 21 days of disclosure

As we continue to harden Kubernetes, the security release process will help ensure that Kubernetes remains a secure platform for internet scale computing. If you are interested in learning more about the security release process please watch the presentation from KubeCon Europe 2017 on YouTube and follow along with the slides. If you are interested in learning more about authentication and authorization in Kubernetes, along with the Kubernetes cluster security model, consider joining Kubernetes SIG Auth. We also hope to see you at security related presentations and panels at the next Kubernetes community event: CoreOS Fest 2017 in San Francisco on May 31 and June 1. As a thank you to the Kubernetes community, a special 25 percent discount to CoreOS Fest is available using k8s25code or via this special 25 percent off link to register today for CoreOS Fest 2017.

–Brandon Philips of CoreOS and Jess Frazelle of Google

* Post questions (or answer questions) on Stack Overflow
* Join the community portal for advocates on K8sPort
* Follow us on Twitter @Kubernetesio for latest updates
* Connect with the community on Slack
* Get involved with the Kubernetes project on GitHub
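The disclosure timeline is a set of fixed offsets from the initial report. As a small illustration, the deadlines quoted from the policy can be computed like this (the function itself is hypothetical, not part of any Kubernetes tooling; the durations are those stated above):

```python
from datetime import datetime, timedelta

def disclosure_deadlines(reported: datetime) -> dict:
    """Compute the response deadlines from the Kubernetes security release
    process, relative to the initial vulnerability report (illustration)."""
    return {
        "acknowledge_and_staff_fix_team": reported + timedelta(hours=24),
        "disclose_forthcoming_fix_to_users": reported + timedelta(days=7),
        "advance_notice_to_vendors": reported + timedelta(days=14),
        "release_fix": reported + timedelta(days=21),
    }

deadlines = disclosure_deadlines(datetime(2017, 5, 18))
for milestone, due in sorted(deadlines.items(), key=lambda kv: kv[1]):
    print(milestone, due.date())
```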
Source: kubernetes

Boost IBM WebSphere with IBM Cloud Product Insights

Rapid deployment is the name of the game for IBM WebSphere Application Server (WAS). Speed is critical for companies that spin WebSphere instances up and down to accommodate the agility required by many types of projects. So how can an IT team manage, support, and keep track of all the different ways they’re using WAS?
You can use IBM Cloud Product Insights, a new software as a service (SaaS) offering that can help support product inventory management and show some of the essential usage metrics for each WebSphere instance.
Let’s say you are a product inventory controller or capacity planner responsible for keeping track of which products are used across your company. What types of reports do you get from your IT team? Are they automated and accurate, or is there a worrying margin of error that can be introduced with manual tracking? Are you aware of the WebSphere versions being used? Have the latest fixpacks been installed to ensure the reliability and security of your environment?
With the latest support for IBM WebSphere, there is now built-in functionality that connects to IBM Cloud Product Insights. You can track the inventory of all your WebSphere instances in a single dashboard.
IBM Cloud Product Insights was built to help alleviate inventory and tracking issues for rapidly changing IT infrastructure. In this way, it also facilitates extending WebSphere products to a hybrid cloud infrastructure. You can take advantage of the flexibility and resiliency of the Cloud and potentially forego buying additional licenses or acquiring new hardware. After deploying WebSphere to the cloud, IBM Cloud Product Insights will automatically update its dashboard view of WebSphere deployments, providing a dynamic and accurate view of your WebSphere environment.
Beyond inventory, there are also several key usage metrics that provide a high-level view of how these WebSphere instances are being used across your company. The intent is not to replace robust monitoring products, but to provide essential metrics at no cost.
You can see CPU and memory use by hardware, either real or virtual, as well as servlet requests handled, giving you a view of WAS usage. These metrics provide enough of a usage indicator to understand how deployed WebSphere instances are being used, along with an indication of whether you might need to take a deeper look at performance issues.
Of course, none of this would be of value if you couldn’t continue to guarantee the security of your environment and the privacy of your data. IBM Cloud Product Insights provides gateway support for infrastructure running behind your company firewall. You also get the ability to audit all deployment and usage data sent to IBM Cloud Product Insights.
IBM Cloud Product Insights is available with WAS V8.5.5 or V9.0 and supports both traditional WAS and Liberty application servers. We encourage you to explore the possibilities.
The post Boost IBM WebSphere with IBM Cloud Product Insights appeared first on Cloud computing news.
Source: Thoughts on Cloud