Oculus Just Slashed Its Prices, But VR Is Still Expensive

Oculus just cut the price of its Rift virtual reality headset from $599 to $499 and its controllers from $199 each to $99.

You can also buy a Rift and two controllers as a $598 bundle. The company, owned by Facebook, makes the Oculus Rift headset and the software that powers the Samsung Gear VR.

Oculus / Via oculus.com

The price cuts are meant to make the Rift more accessible to people turned off by its high cost. But $600 is still a big chunk of change. And to use an Oculus, you still need a computer with enough processing power and memory to download and run the games, and as mentioned on Oculus’ website, these kinds of computers often cost upwards of $1,000.

This doesn’t mean Oculus is failing.

Oculus won’t disclose how many Rifts it’s sold so far, but the New York Times reports that this kind of price cut isn’t indicative of struggling sales at Oculus. It’s more likely that, as the company smooths out its manufacturing and logistics, there are fewer recurring errors that bring up the average cost of a unit. The company wants to expand beyond its core audience of tech and gaming enthusiasts, so Oculus hopes the cuts will lure more people to virtual reality, according to the Times.

But its price isn’t helping.

Oculus has competition in the VR space. Sony recently announced it sold almost a million PlayStation VR headsets just four months after the device’s debut in October 2016. Oculus has been on the market since March 2016. The PlayStation VR, priced at $399, runs on the PlayStation 4 console, also priced at $399.

Sony, which has sold 53 million PlayStation 4 consoles, has an established lineage of dedicated partner studios and blockbuster titles. Oculus, much newer to the gaming industry, faces wariness about the return on investment for VR games. Oculus doesn’t have a game as big as Mario or Halo yet, though it does plan to release one game per month from its internal studios in 2017.

In a blog post, Oculus executive Jason Rubin acknowledged that price seems to be a determining factor in how well VR rigs sell. He notes that PlayStation’s console headset is beating the Rift because of its competitive price, but that “even less expensive Mobile VR headsets, like our Gear VR device, are outselling Console VR.”

It&039;s true&; VR devices are mad expensive&033;

Giphy

The other powerful VR rig, the HTC Vive, will cost you $800 for a headset, two controllers, and two motion-sensing towers. And that price doesn’t even take into account the powerful PC you need to tether it to. HTC said in a statement that it won’t match the Rift’s price.

Even Google Cardboard, just $15, relies on a smartphone with a data plan. Google also recently released the Google Daydream VR headset ($80), which currently only works with Google’s Pixel smartphone ($649).

So for some people, Oculus’ price cuts still aren’t enough.

Source: BuzzFeed

How to avoid getting clobbered when your cloud host goes down

The post How to avoid getting clobbered when your cloud host goes down appeared first on Mirantis | Pure Play Open Cloud.
Yesterday, while working on an upcoming tutorial, I was suddenly reminded how interconnected the web really is. Everything was humming along nicely, until I tried to push changes to a very large repository. That’s when everything came to a screeching halt.
“No problem,” I thought. “Everybody has glitches once in a while.” So I decided I’d work on a different piece of content, and pulled up another browser window for the project management system we use to get the URL. The servers, I was told, were “receiving some TLC.”
OK, what about that mailing list task I was going to take care of?  Nope, that was down too.
As you probably know by now, all of these problems were due to a failure in one of Amazon Web Services’ S3 storage data centers. According to the BBC, the outage even affected sites as large as Netflix, Spotify, and Airbnb.
Now, you may think I’m writing this to gloat (after all, here at Mirantis we obviously talk a lot about OpenStack, and one of the things we often hear is “Oh, private cloud is too unreliable”), but I’m not.
The thing is, public cloud isn’t any more or less reliable than private cloud; it’s just that you’re not the one responsible for keeping it up and running.
And therein lies the problem.
If AWS S3 goes down, there is precisely nothing you can do about it. Oh, it’s not that there’s nothing you can do to keep your application up; that’s a different matter, which we’ll get to in a moment. But there’s nothing you can do to get S3 (or EC2, Google Compute Engine, or whatever public cloud service we’re talking about) back up and running. Chances are you won’t even know there’s an issue until it starts to affect you and your customers.
A while back my colleague Amar Kapadia compared the costs of a DIY private cloud with those of a vendor distribution and a managed cloud service. In that calculation, he included the cost of downtime as part of the cost of DIY and vendor distribution-based private clouds. But really, as yesterday proved, no cloud, not even one operated by the largest public cloud provider in the world, is beyond downtime. It’s all in what you do about it.
So what can you do about it?
Have you heard the expression, “The best defense is a good offense”? Well, it’s true for cloud operations too. In an ideal situation, you will know exactly what’s going on in your cloud at all times, and take action to solve problems BEFORE they happen. You’d want to know that the error rate for your storage is trending upwards before the data center fails, so you can troubleshoot and solve the problem. You’d want to know that a server is running slow so you can find out why and potentially replace it before it dies on you, possibly taking critical workloads with it.
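To make that concrete, here is a minimal sketch of the kind of trend check a monitoring pipeline might run over error-rate samples. The window size and the 50% jump threshold are arbitrary choices for illustration, not a recommendation:

```python
def trending_up(samples, window=5):
    """Crude early-warning check: is the average of the most recent
    window of error-rate samples much higher than the window before it?"""
    if len(samples) < 2 * window:
        return False  # not enough history to compare
    recent = sum(samples[-window:]) / window
    previous = sum(samples[-2 * window:-window]) / window
    return recent > previous * 1.5  # alert on a 50% jump (arbitrary threshold)

# Error rate creeping up well before an outright failure:
rates = [0.01, 0.01, 0.02, 0.01, 0.01, 0.03, 0.04, 0.05, 0.06, 0.08]
print(trending_up(rates))  # True
```

In practice you would feed this from whatever metrics system you run, and alert a human (or an automated remediation job) when it fires.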
And while we’re at it, a true cloud application should be able to weather the storm of a dying hypervisor or even a storage failure; it is designed to be fault-tolerant. Pure play open cloud is about building your cloud and applications so that they’re not even vulnerable to the failure of a data center.
But what does that mean?
What is Pure Play Open Cloud?
You’ll be hearing a lot more about Pure Play Open Cloud in the coming months, but for the purposes of our discussion, it means the following:
Cloud-based infrastructure that’s agnostic to the hardware and the underlying data center (so it can run anywhere); based on open source software such as OpenStack, Kubernetes, Ceph, and networking software such as OpenContrail (so there’s no vendor lock-in, and you can move it between a hosted environment and your own); and managed as infrastructure-as-code, using CI/CD pipelines and so on, to enable reliability and scale.
Well, that’s a mouthful! What does it mean in practice?
It means that the ideal situation is one in which you:

Are not dependent on a single vendor or cloud
Can react quickly to technical problems
Have visibility into the underlying cloud
Have support (and help) fixing issues before they become problems

Sounds great, but making it happen isn’t always easy. Let’s look at these things one at a time.
Not being dependent on a single vendor or cloud
Part of the impetus behind the development of OpenStack was the realization that while Amazon Web Services enabled a whole new way of working, it had one major flaw: complete dependence on AWS.
The problems here were both technological and financial. AWS makes a point of trying to bring prices down overall, but as you grow, incremental cost increases are going to happen; there’s just no way around that. And once you’ve decided that you need to do something else, if your entire infrastructure is built around AWS products and APIs, you’re stuck.
A better situation would be to build your infrastructure and application in such a way that they’re agnostic to the hardware and underlying infrastructure. If your application doesn’t care whether it’s running on AWS or OpenStack, then you can create an OpenStack infrastructure that serves as the base for your application, and use external resources such as AWS or GCE for burst scaling or damage control in an emergency.
Reacting quickly to technical problems
In an ideal world, nobody would have been affected by the outage in AWS S3’s us-east-1 region, because their applications would have been architected with a presence in multiple regions. That’s what regions are for. Rarely, however, does this happen.
Build your applications so that they have (or at the very least, CAN have) a presence in multiple locations. Ideally, they’re spread out by default, so if there’s a problem in one “place”, the application keeps running. This redundancy can get expensive, though, so the next best thing would be to have the application detect a problem and switch over to a fail-safe or alternate region in case of emergency. At the bare minimum, you should be able to manually switch over to a different option once a problem has been detected.
Preferably, this would happen before the situation becomes critical.
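For that bare-minimum case, the switch-over logic can be as simple as a preference-ordered list of regions plus whatever health signal you have. A sketch in Python (the region names and the health dictionary are illustrative; in reality the health data would come from periodic probes against each deployment):

```python
def pick_region(health, preferred):
    """Return the first healthy region, trying them in preference order.

    health: dict mapping region name -> bool (up/down), e.g. populated
    by periodic health-check probes against each deployment.
    """
    for region in preferred:
        if health.get(region):
            return region
    raise RuntimeError("no healthy region available")

# us-east-1 is down, so traffic falls over to the next choice:
health = {"us-east-1": False, "us-west-2": True}
print(pick_region(health, ["us-east-1", "us-west-2"]))  # us-west-2
```

The hard part, of course, is not this function but keeping the data and state in the alternate region fresh enough that failing over is actually useful.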
Having visibility into the underlying cloud
Having visibility into the underlying cloud is one area where private or managed cloud definitely has the advantage over public cloud. After all, one of the basic tenets of cloud is that you don’t necessarily care about the specific hardware running your application, which is fine, unless you’re responsible for keeping it running.
In that case, using tools such as StackLight (for OpenStack) or Prometheus (for Kubernetes) can give you insight into what’s going on under the covers. You can see whether a problem is brewing, and if it is, you can troubleshoot to determine whether the problem is the cloud itself, or the applications running on it.
Once you determine that you do have a problem with your cloud (as opposed to the applications running on it), you can take action immediately.
Support (and help) fixing issues before they become problems
Preventing and fixing problems is, for many people, where the rubber hits the road. With a serious shortage of cloud experts, many companies are nervous about trusting their cloud to their own internal people.
It doesn&8217;t have to be that way.
While it may seem like the least expensive way of getting into cloud is the “do it yourself” approach (after all, the software’s free, right?), in the long term that’s not necessarily true.
The traditional answer is to use a vendor distribution and purchase support, and that’s definitely a viable option.
A second option that’s becoming more common is the notion of “managed cloud.” In this situation, your cloud may or may not be on your premises, but the important part is that it’s overseen by experts who know the signs to look for and are able to make sure that your cloud maintains a certain SLA, without taking away your control.
For example, Mirantis Managed OpenStack is a service that monitors your cloud 24/7 and can fix problems before they affect you. It involves remote monitoring, a CI/CD infrastructure, KPI reporting, and even operational support, if necessary. But Mirantis Managed OpenStack is designed on the notion of Build-Operate-Transfer: everything is built on open standards, so you’re not locked in. When you’re ready, you can transition to a lower level of support, or even take over entirely, if you want.
What matters is that you have help that keeps you running without keeping you trapped.
Taking control of your cloud destiny
The important thing here is that while it may seem easy to rely on a huge cloud vendor to do everything for you, it’s not necessarily in your best interest. Take control of your cloud, and take responsibility for making sure that you have options, and more importantly, that your applications have options too.
Source: Mirantis

Announcing real-time Geospatial Analytics in Azure Stream Analytics

We recently announced the general availability of Geospatial Functions in Azure Stream Analytics to enable real-time analytics on streaming geospatial data. This makes it possible to realize scenarios such as fleet monitoring, asset tracking, geofencing, phone tracking across cell sites, connected manufacturing, and ridesharing solutions with production-grade quality in just a few lines of code.

The connected-car landscape, and the automobile’s transformation into a real-time data source, opens new avenues for automation and for post-sale monetization in industries such as insurance and content. NASCAR has been a pioneer in using the geospatial capabilities in Azure Stream Analytics.

“We use real-time geospatial analytics with Azure Stream Analytics for analyzing race telemetry during and after the race,” said NASCAR’s Managing Director of Technology Development, Betsy Grider.

The new capabilities provide native functions that can be used in Azure Stream Analytics to perform geospatial operations such as representing geospatial data as points, lines, and polygons, computing overlap between polygons, and finding intersections between paths. The ability to join multiple streams with geospatial data can be used to answer complex questions on streaming data.
We’ve adopted the GeoJSON standard for dealing with geospatial data. The new functions include:

CreatePoint – Identifies a GeoJSON point.
CreateLineString – Identifies a GeoJSON line string.
CreatePolygon – Identifies a GeoJSON polygon.
ST_DISTANCE – Determines the distance between two points in meters.
ST_OVERLAPS – Determines if one polygon overlaps with another.
ST_INTERSECTS – Determines if two line strings intersect.
ST_WITHIN – Determines if one polygon is contained inside another.

These functions let you reason about geospatial data in motion using a declarative, SQL-like language. Simplified queries for geospatial scenarios look like this:

Generate an event when a gas station is less than 10 km from the car:

SELECT c.Location, s.Location
FROM Cars c
JOIN Station s ON ST_DISTANCE(c.Location, s.Location) < 10 * 1000
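Under the hood, a distance function like ST_DISTANCE has to account for the Earth’s curvature rather than treating coordinates as a flat plane. As a rough illustration of the idea (not Azure’s actual implementation), a haversine great-circle distance over (longitude, latitude) pairs looks like this in Python:

```python
import math

def great_circle_m(p1, p2):
    """Approximate distance in meters between two (longitude, latitude)
    pairs (GeoJSON coordinate order), using the haversine formula."""
    lon1, lat1 = map(math.radians, p1)
    lon2, lat2 = map(math.radians, p2)
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))  # mean Earth radius ~6371 km

# A car and a station about 1.1 km apart (0.01 degrees of latitude):
car = (-122.13, 47.64)
station = (-122.13, 47.65)
print(great_circle_m(car, station) < 10 * 1000)  # True: within the 10 km threshold
```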

Generate an event when:

Fuel level in the car is lower than 10%
Gas stations have a promotion
The car is pointing to gas station

SELECT c.Gas, c.Location, c.Course, s.Location, s.Promotion
FROM Cars c
JOIN Station s ON c.Gas < 0.1 AND s.Promotion AND ST_OVERLAPS(c.Course, s.Location)

Generate an event when building is within a possible flooding zone:

SELECT b.Polygon, f.Polygon
FROM Building b
JOIN Flooding f ON ST_OVERLAPS(b.Polygon, f.Polygon)

Generate an event when a storm is heading towards a car:

SELECT c.Location, s.Course
FROM Cars c
JOIN Storm s ON ST_OVERLAPS(c.Location, s.Course)
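To give a feel for what a containment test like ST_WITHIN computes, here is the classic ray-casting point-in-polygon check in Python. It treats coordinates as a flat plane and ignores GeoJSON ring conventions, so consider it an illustration of the idea rather than the function’s actual semantics:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as a list
    of (x, y) vertices? Crosses each edge with a horizontal ray and
    counts crossings; an odd count means the point is inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap around to close the ring
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# A simple square flood zone and two test points:
flood_zone = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), flood_zone))  # True
print(point_in_polygon((5, 5), flood_zone))  # False
```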

The integration with Power BI enables live visualization of geospatial data on maps in real-time dashboards. It is also possible to use geospatial functions to realize scenarios such as identifying and acting on hotspots and groupings, and to visualize data using heat maps on a Bing Maps canvas.

Live heat maps using machine learning and geospatial analytics can help unlock better business outcomes for ride-sharing and fleet management scenarios.


This video shows a fleet monitoring example built using the functionality detailed above.

Fleet monitoring with Geospatial functions in Azure Stream Analytics

The GeoSpatial Functions documentation page provides detailed reference material and usage examples. We are excited for you to try out geospatial functions using Azure Stream Analytics.
Source: Azure

Snapchat's $24 Billion Valuation Sets A High Bar For Its Future

Snapchat CEO Evan Spiegel

Michael Kovac / Getty Images

Snapchat’s parent company is expected to be valued at about $24 billion in its IPO, setting a high bar for a company that lost $515 million last year.

Investors will buy the company’s stock for an initial price of $17 a share, according to reports, creating expectations for dramatic growth in users and revenue at the company, which will spend the coming years trying to live up to those expectations.

Snap Inc., Snapchat’s parent company, brought in $405 million in revenue in 2016, while spending $925 million.

The $24 billion valuation is slightly above the range the company aimed for two weeks ago when its IPO plans were made public. It’s not uncommon for the price of shares in an IPO to rise slightly from the company’s initial estimate. About $3.4 billion worth of Snap shares will be sold, with existing investors taking home about $935 million from the sale and the company pocketing the rest. The sale will be the biggest tech IPO since the Chinese e-commerce giant Alibaba raised $22 billion in 2014.

At $24 billion, Snap is still considerably less valuable than the $104 billion Facebook was valued at when it went public in 2012, but worth more than twice Twitter’s current $11 billion valuation.

And speaking of Snapchat’s 140-character cousin: Twitter has become a cautionary tale for what happens when a hot social media company goes public but can’t meaningfully grow its user base or live up to a sky-high valuation (Twitter’s market value hit almost $25 billion on its first day of trading). Some analysts and investors fear that the same fate could befall Snapchat, whose user growth has slowed recently.

The company initially targeted a valuation in its IPO between $20 and $25 billion, the Wall Street Journal reported. At the end of 2016, the company calculated that the fair value of its shares was $16.33, according to its IPO filing.

Source: BuzzFeed

This Startup Wants To Catch Cancer In Its Early Days

Haykirdi / Getty Images

A race is on among Silicon Valley startups to catch cancers before they turn deadly. Victory will come for the winners in a drop of blood — a sample serving as a long-sought “liquid biopsy” for cancer.

One contestant is Freenome of South San Francisco, California, which announced on Wednesday that it has raised $65 million to test its experimental liquid biopsy and move it closer to commercialization.

CEO Gabriel Otte says Freenome is most interested in preventing cancer in the first place. “We see these screenings as a catch-all, first line of defense you might be able to take, as easy as doing a yearly physical,” he told BuzzFeed News.

Freenome declined to comment on its valuation, but a source familiar with the startup told BuzzFeed News that it is worth about $210 million after the round. Vaunted venture capital firm Andreessen Horowitz led the investment. Other backers included Peter Thiel’s Founders Fund, Asset Management Ventures, Polaris, Data Collective, Eric Schmidt’s Innovation Endeavors, Spectrum 28, Charles River Ventures and Google Ventures.

The last is a notable investor since Google’s venture capital arm has backed Grail, a Freenome competitor. Grail, a spin-out of the DNA-sequencing giant Illumina, said in January that it plans to raise more than $1 billion to fund large-scale clinical trials.

The liquid biopsy side of the biotech industry is heating up: While Grail and Freenome are developing liquid biopsy tests that can detect early-stage cancer, others, led by Guardant Health, are pursuing tests that do something different: they track the progress of a patient’s diagnosed tumor.

These claims might sound similar to those of Theranos, which once said that it could test tiny vials of blood for dozens of conditions and has since had two of its lab licenses revoked. In contrast, Freenome and its kind are restricting their focus to cancer. The 25-person startup has not published data about how its technology works, but says that it will, along with the results of its current clinical trials, when those tests conclude.

Freenome cofounders CEO Gabriel Otte and Chief Operating Officer Riley Ennis.

Courtesy / Freenome

Traditionally, doctors and scientists look for cancer DNA in a tissue sample from an actual tumor — but a “tissue biopsy,” as it’s called, can be relatively expensive and invasive. Depending on where the tumor is in the body, it can require surgery.

“I hope that ultimately, if companies like Freenome are successful in really confirming their diagnostic potential, we might move toward an era where we obviate the need for CT scans, PET scans, MRIs and other radiographic tests, which are all very, very costly,” said Sumanta Pal, an associate professor and co-director of the Kidney Cancer Program at the City of Hope Comprehensive Cancer Center, and an expert with the American Society of Clinical Oncology.

In contrast to such scans, liquid biopsy companies are seeking to harness a discovery that’s been around for less than a decade, which is that tiny amounts of DNA from cancerous cells are traceable in blood. The challenge, Pal says, is that liquid biopsies involve very small amounts of blood, and tiny, hard-to-detect DNA fragments.

But thanks to advancements in DNA-sequencing, these microscopic gene fragments are getting easier to observe. And Freenome wants to detect cancer by way of analyzing DNA fragments spewed out by immune cells when they die, which happens as often as hourly. Those fragments can contain changes that indicate the cells were trying to attack a tumor, Otte said.

Otte and his cofounder, Riley Ennis, say that when it comes to four types of tumors — breast, lung, colorectal, and prostate — their technology is more accurate than the standard screening test for each (mammograms for breast cancer, for instance). It’s been tested, the founders say, on “thousands” of samples from patients who thought they were healthy at the time of the biopsy, and went on to develop cancer within a year or two.

“We’re not necessarily saying our blood test is going to directly replace invasive biopsies,” Otte said. Rather, a tissue biopsy “might be the next step you might take if a blood test came back positive for a certain kind of cancer of a certain kind of tissue.”

Otte declined to estimate when the test will be available at your doctor’s, but the primary challenge for Freenome — and all other companies in the field — will be proving that it’s accurate. A handful of studies have compared the sequencing results of tissue and liquid biopsies that are currently on the market, and found that the tests can vary in which mutations they identify and which drugs they recommend.

A related and also significant hurdle will be getting insurance reimbursement for these tests, so patients aren’t scared away by a high price tag. Otte says that the company is in “early” conversations with commercial insurance payers and the Centers for Medicare & Medicaid Services.

For Otte, the race to develop a test is especially personal: his father has prostate cancer and his grandfather, metastasized cancer. Otte watched both of them endure long waits between getting tests done and waiting for the results.

For his father, “there was an extended period of time where he wasn’t sure if he had cancer or not,” he said. “That’s frightening for anybody, and gives you an indication of how insufficient the diagnosis process is right now.”

LINK: These Ex-Googlers Want To Test You (And Your Family) For Cancer

LINK: Pregnant Women Are Finding Out They Have Cancer From A Genetic Test Of Their Babies

Source: BuzzFeed

What’s brewing in Visual Studio Team Services: March 2017 Digest

This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. With the rapidly expanding list of features in Team Services, teams can start to leverage it more efficiently for all areas of their Azure workflow, for apps written in any language and deployed to any OS.

Delivery Plans

We are excited to announce the preview of Delivery Plans! Delivery Plans help you drive alignment across teams by overlaying several backlogs onto your delivery schedule (iterations). Tailor plans to include the backlogs, teams, and work items you want to view. 100% interactive plans allow you to make adjustments as you go. Head over to the marketplace to install the new Delivery Plans extension. For more information, see our blog post.

Mobile Work Item Form Preview

We’re releasing a preview of our mobile-friendly work item form for Visual Studio Team Services! This mobile work item form brings an optimized look and feel that’s both modern and useful. See our blog post for more information.

Updated Package Management experience

We’ve updated the Package Management user experience to make it faster, address common user-reported issues, and make room for upcoming package lifecycle features. Learn more about the update, or turn it on using the toggle in the Packages hub.

Release Views in Package Management

We’ve added a new feature to Package Management called release views. Release views represent a subset of package-versions in your feed that you’ve promoted into that release view. Creating a release view and sharing it with your package’s consumers enables you to control which versions they take a dependency on. This is particularly useful in continuous integration scenarios where you’re frequently publishing updated package versions, but may not want to announce or support each published version.

By default, every feed has two release views: Prerelease and Release.

To promote a package-version into the release view:

Select the package
Click the Promote button
Select the view to promote to and select Promote

Check out the docs to get started.

Build editor preview

We’re offering a preview of a new design aimed at making it easier for you to create and edit build definitions. Click the switch to give it a try.

If you change your mind, you can toggle it off. Eventually, once we feel it’s ready for prime time, the preview editor will replace the current editor. So please give it a try and give us feedback.

The new editor has all the capabilities of the old editor along with several new capabilities and enhancements to existing features:

Search for a template

Search for the template you want and then apply it, or start with an empty process.

Quickly find and add a task right where you want it

Search for the task you want to use, and then after you’ve found it, you can add it after the currently selected task on the left side, or drag and drop it where you want it to go.

You can also drag and drop a task to move it, or drag and drop while holding the Ctrl key to copy the task.

Use process parameters to pass key arguments to your tasks

You can now use process parameters to make it easier for users of your build definition or template to specify the most important bits of data without having to go deep into your tasks.

For more details, see the post about the preview of our new build editor.

Pull Request: Improved support for Team Notifications

Working with pull requests that are assigned to teams is getting a lot easier. When a PR is created or updated, email alerts will now be sent to all members of all teams that are assigned to the PR.

This feature is in preview and requires an account admin to enable it from the Preview features panel (available under the profile menu).

After selecting for this account, switch on the Team expansion for notifications feature.

In a future release, we’ll be adding support for PRs assigned to Azure Active Directory (AAD) groups and teams containing AAD groups.

Pull Request: Actionable comments

In a PR with more than a few comments, it can be hard to keep track of all of the conversations. To help users better manage comments, we’ve simplified the process of resolving items that have been addressed with a number of enhancements:

In the header for every PR, you’ll now see a count of the comments that have been resolved.

When a comment has been addressed, you can resolve it with a single click.

If you have comments to add while you’re resolving, you can reply and resolve in a single gesture.

Automatic GitHub Pull Request Builds

For a while we’ve provided CI builds from your GitHub repo. Now we’re adding a new trigger so you can build your GitHub pull requests automatically. After the build is done, we report back with a comment in your GitHub pull request.

For security, we only build pull requests when both the source and target are within the same repo. We don’t build pull requests from a forked repo.

Extension of the Month: Azure Build and Release Tasks

This extension has really been trending over the last month and it’s not hard to see why. If you’re building and publishing your applications with Microsoft Azure you’ll definitely want to give this 4.5 star rated extension a look. It is a small gold mine of tasks to use in your Build and Release definitions.

Azure Web App Slots Swap: Swap two deployment slots of an Azure Web App
Azure Web App Start: Start an Azure Web App, or one of its slots
Azure Web App Stop: Stop an Azure Web App, or one of its slots
Azure SQL Execute Query: Execute a SQL query on an Azure SQL Database
Azure SQL Database Restore: Restore an Azure SQL Database to another Azure SQL Database on the same server using the latest point-in-time backup
Azure SQL Database Incremental Deployment: Deploy an Azure SQL Database using multiple DACPACs, performing incremental deployments based on the current Data-Tier Application version
AzCopy: Copy blobs across Azure Storage accounts using AzCopy

Go to the Visual Studio Team Services Marketplace and install the extension.

There are many more updates, so I recommend taking a look at the full list of new features in the release notes for January 25th and February 15th.

Happy coding!
Quelle: Azure