This mixed reality has far-reaching potential; as a very basic example, imagine seeing annotations on your bicycle the first time you need to change a tyre. Or, in industry, how about using predictive analytics to paint each machine on a factory floor with the date when it next needs maintenance?
The key to all these scenarios is that it's much easier to see what will be affected by a change when you can see it in context, rather than looking at a list and trying to match that to the real world in your head.

Take smart devices. Today, either the device has a whole interface of its own, complete with a screen and controls – which doesn't make sense for a smart lock or a window-blind motor – or it relies on a companion app, which means users have to look away at a different device.
HoloLens can paint the interface in mid-air next to the device (such as the robot above), giving you controls that are large enough to see clearly and work with. You can keep looking at the device to make sure the controls are actually doing what you want.
Checking when your robot vacuum cleaner is scheduled to start usually means crouching down to look at the screen; it would be much easier to tap it from across the room through your HoloLens, and see a calendar with a countdown timer showing when the cleaning starts. Want the robot to avoid the plant in the corner of your room? Tapping to set virtual fences is easier than trying to position an infrared wall in just the right place.
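Conceptually, a virtual fence is nothing more than a keep-out region that the robot's path planner tests waypoints against. A minimal sketch of that idea follows; the `Fence` class, `is_allowed` helper, and coordinates are invented for illustration and don't correspond to any real vacuum's API.

```python
from dataclasses import dataclass

@dataclass
class Fence:
    """Axis-aligned rectangular keep-out zone on the floor plan (metres)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        # A point is inside the fence if it falls within both axis ranges.
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def is_allowed(x: float, y: float, fences: list[Fence]) -> bool:
    """A waypoint is allowed only if it lies outside every fence."""
    return not any(fence.contains(x, y) for fence in fences)

# A fence around the plant in the corner of the room (hypothetical coordinates).
plant_fence = Fence(x_min=4.5, y_min=4.5, x_max=5.0, y_max=5.0)

print(is_allowed(2.0, 2.0, [plant_fence]))  # True: open floor
print(is_allowed(4.7, 4.8, [plant_fence]))  # False: inside the fence
```

Tapping out the rectangle in mid-air through the headset would simply set those four coordinates, with the room itself as the frame of reference.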
Next: creating a 3D world