Epic Build 2018 Playlists

Your one-stop shop for Microsoft Build 2018 sessions, slide decks, post links, keynotes and much more.

Living Document Note:

These Playlists will continue to be reordered and updated in real time as new content becomes available. However, the links will always remain the same and you can refresh the playlist page to get any updated ordering/content.


Migrating from PCLStorage to .NET Standard 2.0

If you’re a Xamarin.Forms developer, you’ve likely used PCLStorage (or another DependencyService-based approach) to interact with the target platform’s file system from the portable class library’s code. However, since November 2017, Xamarin.Forms uses a .NET Standard 2.0 class library and PCLStorage is no longer supported.


This isn’t a problem, because with .NET Standard 2.0 you now have access to Environment.GetFolderPath and the Environment.SpecialFolder enumeration (see the System.Environment docs).

Thus, you can get a path to the app’s local folder (the one that the app can save to) without having to write a Dependency Service (which is what PCLStorage does) by using:

var localFolder = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

With a reference to the localFolder path, you can use Path.Combine to append the file name:

var notes = File.ReadAllText(Path.Combine(localFolder, "notes.txt"));

Functional Demo

As a very simple example, I created a couple of extension methods for Stream and byte[] in the FileExtensions class below.

To test it, I created a ContentPage that downloads an image, saves it using the extension method and set a FileImageSource to confirm that it’s a file (instead of just using a StreamImageSource).

Note that the extension methods are very basic and shouldn’t be used in production as-is (i.e. no defensive programming code).
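For reference, here’s a sketch of what such extension methods could look like. The method names and implementation are my illustration, not the demo’s actual code, and like the original they deliberately skip defensive programming:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class FileExtensions
{
    // Saves a byte array to the app's local folder and returns the full file path.
    // NOTE: SaveToLocalFolder is a hypothetical name, not necessarily the demo's.
    public static string SaveToLocalFolder(this byte[] data, string fileName)
    {
        var localFolder = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
        var filePath = Path.Combine(localFolder, fileName);
        File.WriteAllBytes(filePath, data);
        return filePath;
    }

    // Copies a stream (e.g. a downloaded image) into a file in the local folder
    public static async Task<string> SaveToLocalFolderAsync(this Stream stream, string fileName)
    {
        var localFolder = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
        var filePath = Path.Combine(localFolder, fileName);

        using (var fileStream = File.Create(filePath))
        {
            await stream.CopyToAsync(fileStream);
        }

        return filePath;
    }
}
```

The returned file path can then be handed straight to a FileImageSource, as in the demo.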

Here is the result at runtime on UWP:


Windows 10 on ARM: Day-One Developer Experience

I was given the honor of having early access to Windows 10 on ARM (WoA) so that I could chronicle the experience and provide you with tips to ensure that your development experience is smooth. I’ll go through a list of potential issues you might encounter, along with my fix for each.

What this post is:

This article is intended to be a quick guide on how to set up the device so that you can deploy and debug your apps. I will be writing this from a UWP developer’s perspective, sharing the experience of deploying and debugging this real time video effects test app.

Several of my fellow MVPs also got a device and will be writing posts about their experiences as well. Some articles will be UWP specific and others win32 specific, see the Additional Resources section at the bottom of this post.

What this post is not:

This will not be a “product review” style post about the device or the OS itself. I will say this: I am very impressed. It’s just Windows 10, and I really wouldn’t have been able to tell it was running on ARM until I realized it had been two weeks since I last charged it 🙂

Okay, enough intro talk, let’s get our hands dirty. First, a little primer on x86 emulation, an explanation of how Microsoft was able to get Windows on ARM and how your app will work.

How Windows on ARM works

Chances are you’re using a 32-bit application right now, but your PC is running a 64-bit version of Windows on an x64 CPU. How does an application with 32-bit instructions work on a CPU that expects 64-bit instructions? This is accomplished using x86 emulation, provided by the WOW64 layer in Windows (WOW64 stands for Windows 32-bit on Windows 64-bit).

To create Windows on ARM, Microsoft ported Windows 10 to ARM64! The kernel, shell, in-box drivers and in-box apps all run natively on ARM. What about your apps? Windows on ARM has x86 win32 emulation!

Thanks to the WOW abstraction layer, working in concert with a custom x86-to-ARM64 emulator and CHPE DLLs (x86 assemblies with ARM64 code in them, pronounced “chip-ee”), your x86 application can run in an emulated process on an ARM CPU.

Here’s a diagram to better explain it


Some key features

  • The x86 win32 app runs unmodified
  • The app installs and runs like it does on any other PC
  • The x86 instructions are translated to ARM64 at runtime and cached, so future app runs are faster!

Application Support

Windows on ARM supports ARM32 and x86 applications:

  • Store apps (ARM32 preferred, x86 supported)
  • Win 32 apps (x86 supported, x64 not supported).

Tip: If your Store application has an ARM32 package available, it will be installed and will run natively as this provides the best user experience. Where possible, make sure all of your Store apps also have an ARM32 package.

If you’re not sure how to do this: when you go to package your app, you’ll see three checkboxes; make sure ARM is selected:


For win32 apps, you can use NGEN (see #2 under the General Tips section below).
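As a sketch of what that can look like for a .NET Framework win32 app (the app path here is hypothetical, and this is my illustration rather than a step from this post):

```shell
:: Run from an elevated Command Prompt on the ARM PC.
:: Use the 32-bit Framework directory, since WoA emulates x86.
cd /d %windir%\Microsoft.NET\Framework\v4.0.30319
ngen.exe install "C:\Program Files (x86)\MyApp\MyApp.exe"
```

NGEN pre-compiles the assembly so the app doesn’t pay the full JIT cost inside emulation at every launch.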


There are some limitations at this time

  • No x64 emulation support
  • No kernel mode support
  • Apps that rely on drivers may not work (the driver developers should start updating their drivers for ARM support)
  • Apps that inject code into Windows processes won’t work

For more info about Windows on ARM, take a look at Kevin Gallo’s Community Standup video here and the Build 2017 session from Hari and Arun here.

Okay, now that we’ve had a high-level look at how WoA works, let’s dig into my experience; I hope it helps you.

Familiar Ground – Remote Debugging

If you’ve used Visual Studio’s remote debugging tools before, for example with HoloLens or Windows IoT, then you’ll feel right at home. The process is essentially the same, but there are a couple of things to keep an eye out for when you’re first getting started.

The first thing you need to do is enable Developer Mode on the device’s PC Settings > Updates and Security > For Developers page. This is needed not only so you can deploy apps to the device, but also to enable the Remote Debugging tools.

Here’s a screenshot of the PC Settings page:


However, you may not see this option, which brings me to my first point.

Potential Issue #1

My device came with Windows 10 S out of the box, which means you will not see a “For Developers” item on the Settings > Update and Security page (screenshot above).


Upgrade the OS. I used a product key for Windows 10 Pro in the Settings > System > About page, but you can use whichever option is available to you.


After upgrading, you’ll now see the Developer Mode option on the For Developers page; toggle that to move forward. Enabling this will trigger the download and installation of a Developer Mode package that contains the Remote Debugging Tools.

Potential Issue #2

You may hit an issue where the Developer Mode package doesn’t install correctly; you should see a “Remote Debugging Tools Installed” success message. If you don’t, or you see an error message, the tools are not on that machine and you’ll get frustrated when trying to deploy from Visual Studio.


Download and install the tools manually; you can find the download here. Alternatively, you can copy the tools over from your dev machine if you’re familiar with the steps, but installing the tools is the most straightforward option: run and done.

Targeting the Remote Device

If you’re not familiar with using the Remote Debugger, I strongly urge you to read this Microsoft documentation article, but let’s go through the main steps.

Changing Target Device to Remote Machine

If you open your application’s Properties page in Visual Studio and switch to the Debugging tab, you’ll see a Target Device drop-down.


Switch that to Remote Device, then the “Find…” button on the right will become enabled. After clicking the Find… button, you’ll be presented with the following UI:


Now you have an opportunity to select a remote device, but you will most likely see that the ARM PC isn’t showing as an available device to target.

There are two ways to solve this: use the device’s IP address, or let “Auto Detected” find it for you. In both cases the device needs to be visible on the network, but Auto Detected is far better because it helps with setting the Authentication Mode.

Potential Issue #3

No devices are in the Auto Detected list or you can’t connect using IP address.


Make sure that both the development PC and the ARM PC are on the same network and that Network Discovery is enabled.

The quickest way I like to do this (there are other ways) is to open File Explorer, select “Network” in the pane, and follow the prompts… Easy Button style. Here are some screenshots to guide you:


Once you click the OK button, File Explorer will show a yellow bar at the top:


Click that bar and follow the prompts; it will ask you to make the network you’re on a Private Network (that’s my preferred option, as it’s the safest). You’ll now see the PC scan the network, finding other devices and listing them in File Explorer. Go ahead and close File Explorer now that it has done its job.

NOTE: If you need to, repeat this process for your development PC.
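If you prefer the command line, Network Discovery can also be enabled from an elevated prompt. This is a sketch of the commonly used firewall rule-group command, not a step from the original walkthrough:

```shell
:: Run in an elevated Command Prompt on each PC.
:: Enables the built-in "Network Discovery" firewall rule group.
netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes
```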


With both PCs visible on the Network, reopen the Remote Connections dialog window again (using the “Find…” button) and check the Auto-Detected list for the ARM PC. Select it, then follow the steps to connect and pair it.

Potential Issue #4

You’re not able to see the device in the Auto-Detected list, even after hitting the Refresh button to scan again.


You can instead use the ARM device’s IP address and select “None” for the authentication option. When you take this route, you’ll be shown a PIN to pair the device with Visual Studio.

I haven’t been able to get Universal Authentication to work with a manual IP address connection yet, but my fellow MVP testers have, and their blog posts may touch on this further.

Congrats, now you’re ready to deploy!

Debugging & Deploying

Before you click F5 (or that shiny green Start Debugging button) for the first time, let’s review another issue you might encounter and could easily miss. I didn’t see it the first time because I was looking at Visual Studio instead of the device.

Potential Issue #5

When debugging, you do not see any runtime analytics / debugger metrics in Visual Studio Diagnostic Tools window (usually at the top right):



Keep an eye on the ARM device for an elevated-permissions UAC prompt the first time you deploy. You may only see a yellow flashing taskbar icon rather than a full-screen prompt; click the icon to see the prompt and allow it.

General Tips

You’re going to be quite surprised at how well everything just works. In several cases, I found my apps ran faster on the ARM device than on my dev PC!

Tip 1 – Failure to launch

When a remote deploy doesn’t work, look at the Visual Studio build output window. Close to the end of the build output, you’ll probably see the reason why the deploy failed.

You’ll most likely see one of these errors (click the link to view the respective troubleshooting article):

Tip 2 – Slow first launch

When you run an x86 application on Windows 10 on ARM, the first launch will be a little slower. This is because the x86-to-ARM translation occurs the first time, but the result is cached on disk, so future launches are significantly faster.

Also, while waiting for that first launch, don’t try to launch it again. Otherwise you’ll end up opening multiple instances of the app, which will further slow things down (the first launch can take 5-30 seconds depending on how much translation needs to be done). I’ve given Microsoft feedback that there should be some sort of indication that the app is starting up (e.g. a flashing taskbar icon).


To avoid the translation cost altogether:

  • If the app is a Store app, provide an ARM package so that it runs natively and doesn’t need emulation.
  • If the app is a win32 app, try using NGEN.

Wrapping Up

This post will be a living document for a little while; I’ll come back and update it with anything new I find and continue to share my tips with you. I’ll also be sharing my fellow MVPs’ posts as they become available; see them under the Additional Resources section below.

I hope you enjoy running your x86 apps on an ARM device as much as I have, it’s a wonderful future for mobility and battery life where you don’t have to sacrifice your library of applications because of a CPU’s architecture.

  • Anything I can help with?
  • Are you stuck trying to deploy your app?

Leave a comment below or reach out to me on Twitter here (please remember that I can only answer development-specific questions).

Additional Resources

I will update this list as more posts from my fellow MVPs, and Microsoft documentation, become available.

MVP Posts

Microsoft Articles


Creating 3D Icons for your Mixed Reality UWP app

Microsoft has opened up the ability to have a 3D app launcher to all developers in the Fall Creators Update. In this blog post, I’m going to show you how to build your own 3D app launcher from scratch (accompanying YouTube tutorial video embedded below).

What is a 3D app launcher?

App Launchers

Up until now, you’ve only been able to have a regular 2D tile in the Start menu, and the user could place a 2D frame of your app on a surface in the Cliff House (the Mixed Reality home) or on a surface in your HoloLens mapped area. Clicking the app frame would launch the app in the same frame.

If your app was a 3D app, it would launch into the immersive experience, but the launcher was still just a 2D tile. The user wouldn’t intuitively know that the application is an immersive app. Some apps already have 3D launchers, for example the HoloTour app that you see in this article’s featured headline image.

Wouldn’t it be nice if your immersive application had a 3D launcher, too? By the end of this blog post, it will! To get started, let’s take a look at the model and how to build one yourself.

The Model

The first thing you’ll need to understand is that Microsoft requires your model to use the glTF 2.0 specification, and more specifically the binary format (.glb). To accomplish this, we are going to use Blender.

Blender is a free, open-source modeling and animation application. You can build your model directly in Blender if you’re familiar with it. Alternatively, build the model in another application and use the Blender glTF exporter add-on, which is what I’ll show you today.

NOTE: If you already have an FBX model, jump to the Importing a Model section below

Building The Model From Scratch

To keep this tutorial simple, I’ll create a UV Sphere in Blender and use a solid color texture. Creating complex models in Blender is a bit out of scope for this post; however, I cover doing it from scratch in this video (if the video doesn’t load, here’s the direct link).

Just be sure to read the next section, even if you followed the video, so that you’re familiar with the restrictions you’ll inevitably encounter while trying to use different models.

Importing a Model

Alternatively, you can import other model types, like FBX, into Blender. This is easy, as FBX importing is available out of the box. Take the following steps once you’ve opened Blender:

  1. Select File
  2. Expand Import
  3. Select FBX
  4. Locate and load the file

One thing you’re going to notice is that the model may not be visible in your viewport; it’s likely far too large and off center. To get it in view, use the “View All” shortcut (Home key) or drill into the menu View > View All (this menu is underneath the 3D view area). Here’s a screenshot:

Now, look in the Outliner panel (located at the top right of Blender) and find the object named “root”; this is the imported model. Then, to get the right editing options, select the Object properties button (see the red arrow in this screenshot).

Outliner and Object properties editor

Take note of the highlighted Transform properties; we’ll change those next. But before we do, let’s review some of the guidelines Microsoft has set for 3D app launchers:

  1. The Up axis should be set to “Y”.
  2. The asset should face “forward” towards the positive Z axis.
  3. All assets should be built on the ground plane at the scene origin (0,0,0).
  4. Working units should be set to meters so that assets can be authored at world scale.
  5. Meshes do not need to be combined, but combining is recommended if you are targeting resource-constrained devices.
  6. All meshes should share one material, with only one texture sheet used for the whole asset.
  7. UVs must be laid out in a square arrangement in the 0-1 space. Avoid tiling textures, although they are permitted.
  8. Multi-UVs are not supported.
  9. Double-sided materials are not supported.

With these in mind, let’s start editing our mesh. Under the Transform section, take the following actions:

  1. Set Location to 0,0,0
  2. Set Scale to 1,1,1
  3. Now, let’s re-frame the 3D view so we can see the model by using the “View All” shortcut again.

You should see that your model is now at the right place and close to the right scale. Now that you can see what you’re doing, make any additional tweaks so that your model meets #1 and #2 of the Microsoft guidelines.

Lastly, we need to check the model’s triangle count; there is a limit of 10,000 triangles for a 3D app launcher (you can see the triangle count in the top toolbar when the model is selected). Here’s what it looks like:

Mesh Triangle Count

If you need to reduce your triangle count, you can use the Decimate modifier on your model. Go here to learn more about using Decimate (I also recommend checking out a couple of YouTube videos on the topic; Blender is a complex app).
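If you script Blender, the same reduction can be done from Blender’s Python console. This is a sketch using the bpy API in a recent Blender version; the 0.5 ratio is an arbitrary example, tune it until you’re under the triangle limit:

```python
import bpy  # Blender's built-in Python API; only available inside Blender

# The mesh object you selected in the Outliner
obj = bpy.context.active_object

# Add a Decimate modifier that collapses edges down to ~50% of the triangles
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.5

# Apply the modifier so the reduced mesh is what actually gets exported
bpy.ops.object.modifier_apply(modifier=mod.name)
```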

I strongly urge you to go to this documentation and read all the model restrictions and recommendations, such as texture resolutions and workflow. If you use a model that doesn’t meet the guidelines, you’ll see a big red X like this:

Invalid Model

Now that your model is done, it’s time to export it as a glTF binary file.

Exporting GLB

By default, Blender doesn’t have a glTF export option, so you’ll want to use the KhronosGroup glTF-Blender-Exporter. Installation of the add-on is pretty straightforward; go here to read the instructions.

You get to choose between two options to add it:

  • Option 1: Set Blender to use the repo’s scripts folder (to stay in sync with the exporter’s development)
  • Option 2: Copy a folder into Blender’s folders (I chose this option; scroll down to where the author starts a sentence with “Otherwise”)

Finally, enable the add-on in Blender (last step in the instructions). Once the add-on is enabled, go ahead and export your model! You’ll see a glb option in the File > Export list.

Here’s a screenshot:

Export as glb

UPDATE Jan 2018

You can now pack your more complex textures using a new tool Microsoft released a few weeks ago! It takes away the requirement to be a shader ninja: it imports your existing glb, packs the textures properly, and exports an updated glb that will work in the Mixed Reality home.

Go here to get the converter and read how to use it: https://github.com/Microsoft/glTF-Toolkit

Setting the 3D Launcher

Now that you have a glb file, it’s time to open your Mixed Reality UWP app in Visual Studio. Once it’s loaded, add the glb file to your app’s Assets folder (right click on the folder and select Add > Existing Item). Once it’s been pulled in, make sure you set the Build Action to Content (right click on the file, select Properties and change Build Action to Content).

File’s Build Action

Lastly, we need to edit the app’s Package.appxmanifest XML manually. To do this, right click on the file and select “View Code”. At the top of the file, add a new xmlns and also add it to the IgnorableNamespaces list:


xmlns:uap5="http://schemas.microsoft.com/appx/manifest/uap/windows10/5"

IgnorableNamespaces="uap uap2 uap5 mp"

Next, locate the DefaultTile tag (under Application > VisualElements), expand it and add the MixedRealityModel element with the path to your glb file:

<uap:DefaultTile ShortName="Channel9 Space Station" Wide310x150Logo="Assets\Wide310x150Logo.png">
  <uap5:MixedRealityModel Path="Assets\Dog.glb" />
</uap:DefaultTile>

Here’s a screenshot, with the additions highlighted:

Package.appxmanifest changes

Final Result

You can see the final result at the end of the video above. Keep in mind that I kept the triangle count low for this demo; next time I’ll likely increase the sphere to 64 segments and 32 rings. Additionally, I’ll use a texture that can be mapped around a sphere (the Earth, for example).

If you’re having trouble with your model and want to check your app settings against a known working model, download the glb I created here. I hope this tutorial was helpful, enjoy!

More Resources

“Cannot download Windows Mixed Reality software” error fix!

Did you just get a new Mixed Reality headset? Were you so excited that you ripped open the box and plugged it in, only to find that after setting up your floor and boundaries you got the following error:

Cannot download Windows Mixed Reality software

Screeching halt!

I spent a lot of time digging around the HoloLens forums and in long conversations on the Holodevelopers Slack, and it seemed there was a wide variety of reasons for this. However, after looking at my Developer Mode settings page (in Windows Settings), I saw an incomplete dev package installation.

At this point, I suspected I needed to “side load” these packages, bypassing the on-demand download over the network. I just didn’t know where to find them until… my hero, and holographic Jedi, Joost van Schaik (@localjoost) had the same problem and found a fix for his device. Joost followed a suggestion from Matteo Pagani (@qmatteoq) to use dism to install the packages manually.

I tweaked his solution (basically just found different packages) so that it worked for a non-Insider Preview build, and it worked!


It turns out that you can get the ISO file for the On-Demand-Features for your version of Windows 10 and install the packages manually.

Here are the steps:

1 – Go to the appropriate downloads page for your version of Windows 10 (these links use my.visualstudio.com; you may need to use whatever download source is available to you):

  • Go here if you’re running 1703 (Creators Update)
  • Go here if you’re running 1709 (Fall Creators Update)
  • Go here if you’re running 1803

2 – Download the Windows 10 Features on Demand ISO file listed there. Note: there may be two ISOs offered for download; I found the cabs I needed in the first ISO.

3 – Mount the ISO file and make sure you see the following files (if you don’t, you have the wrong ISO):


4 – Open an elevated Command Prompt and run the following commands (replace [YOUR-DRIVE-LETTER] with your mounted ISO’s drive letter).

— First, install the holographic package (this is what the Mixed Reality Portal app is failing to download):

dism /online /add-package /packagepath:"[YOUR-DRIVE-LETTER]:\Microsoft-Windows-Holographic-Desktop-FOD-Package.cab"

— Then install the Developer Mode package:

dism /online /add-package /packagepath:"[YOUR-DRIVE-LETTER]:\Microsoft-OneCore-DeveloperMode-Desktop-Package.cab"


Here’s a screenshot of the result:



5 – Open the Mixed Reality Portal app again and bingo, success!!!



Underlying Cause of the Issue

After some discussion with the folks at Microsoft, it turns out that if your PC is using WSUS (Windows Server Update Services), which is normal for a domain-joined PC under a corporate domain policy, the download of some components (like .NET 3.5, the Developer Mode package and the Holographic package) can be blocked.

You can talk to your IT department and ask them to unblock the following KBs:

  • 4016509
  • 3180030
  • 3197985


BIG THANKS to Joost and Matteo 🙂


[Updated to add Matteo, fix some grammar and add the info about the KBs]

Easy XamlCompositionBrushBase

The upcoming Windows 10 update, the Fall Creators Update, introduces a new design system: the Microsoft Fluent Design System. The design system has 5 major building blocks:

  • Light
  • Depth
  • Motion
  • Material
  • Scale

You can use any of these in a multitude of ways; however, Microsoft has made them very easy to use in the latest Preview SDK (16190). Some things that used to be relatively hard, or at the least verbose, can now be done with a few lines of code.

Today, I want to show you the XamlCompositionBrushBase (aka XCBB). Before I do, let’s briefly run through a XAML Brush and a Composition Brush.

The XAML Brush

We use Brushes in almost everything we do; they paint the elements in an application’s UI. For example, Control.Foreground and Control.Background are both of type Brush.

The most commonly used Brush is the SolidColorBrush. Here’s an example of setting the Foreground brush, first in XAML and then in code:


<TextBlock x:Name="myTextBlock" Foreground="Black"/>


myTextBlock.Foreground = new SolidColorBrush(Colors.Black);

There are other brush types, such as ImageBrush, that make our lives easier in specific scenarios that would otherwise require more work to achieve the same result.

The Composition Brush

A composition brush utilizes the relatively new Composition layer in Windows 10. This layer sits underneath the XAML layer where all your XAML elements live. You can do a lot with the composition layer: animations, effects and more. The mechanism that paints Composition layer elements also uses a “Brush”; however, it’s a type of Composition Brush and cannot be used with your XAML directly.


With the release of the 16190 preview SDK, you can now use the Composition layer and XAML layer together by using the XamlCompositionBrushBase!

This is BIG news, because the XCBB allows for interoperability between the Composition layer and the XAML layer and lets you set up composition effects without needing to implement a behavior or other more advanced setups. As an example, let’s create a Brush that applies the Win2D InvertEffect.

Note: I wanted to keep this as simple as possible to focus on the XCBB; you can expand on this with more complex effects, such as the GaussianBlur here.

First, let’s examine the XCBB’s two methods that you want to override:

  • OnConnected
  • OnDisconnected

So, here’s our starting point:

public class InvertBrush : XamlCompositionBrushBase
{
    protected override void OnConnected()
    {
        // Set up the CompositionBrush
    }

    protected override void OnDisconnected()
    {
        // Clean up
    }
}

Now, for the awesome part… the XCBB has a CompositionBrush property! All you need to do is instantiate your effect and assign it. Here’s the completed Brush code, broken down into the important steps:

using Microsoft.Graphics.Canvas.Effects;
using Windows.UI.Composition;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Media;

public class InvertBrush : XamlCompositionBrushBase
{
    protected override void OnConnected()
    {
        // Back out if the brush has already been created
        if (CompositionBrush != null) return;

        // 1 - Get a BackdropBrush
        // NOTE: The BackdropBrush is what is behind the current UI element (also useful for blur effects)
        var backdrop = Window.Current.Compositor.CreateBackdropBrush();

        // 2 - Create your effect
        // New-up a Win2D InvertEffect and use a named source parameter
        var invertEffect = new InvertEffect
        {
            Source = new CompositionEffectSourceParameter("backdrop")
        };

        // 3 - Get an EffectFactory
        var effectFactory = Window.Current.Compositor.CreateEffectFactory(invertEffect);

        // 4 - Get a CompositionEffectBrush
        var effectBrush = effectFactory.CreateBrush();

        // ...and set the backdrop as the effect's source
        effectBrush.SetSourceParameter("backdrop", backdrop);

        // 5 - Finally, assign your CompositionEffectBrush to the XCBB's CompositionBrush property
        CompositionBrush = effectBrush;
    }

    protected override void OnDisconnected()
    {
        // Clean up
        CompositionBrush?.Dispose();
        CompositionBrush = null;
    }
}

Now that the Brush’s definition is complete, how do we actually use it? This is the most exciting part… you use it like any other Brush in XAML!

        <brushes:InvertBrush />


Here’s an example implementation. I have an Ellipse with an ImageBrush, currently showing Tuvok (full disclosure: I’m a Trekkie AND a Star Wars fan):

<Ellipse x:Name="ImageEllipse">
    <Ellipse.Fill>
        <ImageBrush ImageSource="{Binding SelectedCharacter.ImagePath}" Stretch="UniformToFill" />
    </Ellipse.Fill>
</Ellipse>

Sketch (2)

If I put another matching Ellipse, filled with my custom InvertBrush, on top of the Tuvok Ellipse, here’s the result:

<Ellipse x:Name="ImageEllipse">
    <Ellipse.Fill>
        <ImageBrush ImageSource="{Binding SelectedCharacter.ImagePath}" Stretch="UniformToFill" />
    </Ellipse.Fill>
</Ellipse>
<Ellipse>
    <Ellipse.Fill>
        <brushes:InvertBrush />
    </Ellipse.Fill>
</Ellipse>

Sketch (3)

Notice how it only inverted what was directly behind the Ellipse and not the page background or the drop shadow?

Level Up

In the case of the InvertEffect, we don’t have any effect variables to update, so there’s no need for a DependencyProperty to set the effect’s initial values. However, in most cases you will need a DependencyProperty in your XCBB to tweak the effect’s values.

To see this, look at the BackdropBlurBrush example here and notice how the Blur.BlurAmount effect property can be updated by using a ScalarParameter when calling CreateEffectFactory.
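As a sketch of that pattern, here’s a hypothetical TintBrush with an Intensity DependencyProperty (the brush name and the "Tint.Intensity" parameter name are my illustration; the OnConnected setup, which would register the scalar as animatable via CreateEffectFactory, is omitted for brevity):

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Media;

public class TintBrush : XamlCompositionBrushBase
{
    public static readonly DependencyProperty IntensityProperty =
        DependencyProperty.Register(
            nameof(Intensity),
            typeof(double),
            typeof(TintBrush),
            new PropertyMetadata(1.0, OnIntensityChanged));

    public double Intensity
    {
        get => (double)GetValue(IntensityProperty);
        set => SetValue(IntensityProperty, value);
    }

    private static void OnIntensityChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // Push the new value into the live effect via the animatable property
        // that was exposed when CreateEffectFactory was called with a ScalarParameter.
        var brush = (TintBrush)d;
        brush.CompositionBrush?.Properties.InsertScalar("Tint.Intensity", (float)(double)e.NewValue);
    }
}
```

With this in place, XAML like `<brushes:TintBrush Intensity="0.5" />` updates the running composition effect whenever the property changes.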

I hope I was able to show how easy it is to get started with the XCBB, and how it lets XAML devs get the benefits of working with the Composition layer without doing a lot of work.

Happy coding!


Smart Shades with Windows IoT 1

I wanted my office shades to go up or down automatically, depending on the amount of light outside. Then I thought, why not take it to the next level with Windows IoT? I could design a UI with the help of the recently open-sourced Telerik UI for UWP and also leverage a FEZ HAT and Raspberry Pi 3 to drive the motors and detect light levels and temperature.

Why would I want to also measure temperature? Because if it were too hot inside, I wouldn’t want to open the shades even if it were light outside.

I’ll go into detail about the software and hardware, but before I do: here’s the GitHub repo with the source code, and let’s watch a short video of the UI animation and motor in action.

About the Hardware

The hardware you see is a Raspberry Pi 3 (a Pi 2 also works), a FEZ HAT, and a 90-degree motor (like this one). That motor is connected to the FEZ HAT’s Motor A connection; I also have a second, smaller 1:150 motor connected to Motor B (I am still testing the torque of different motors to see which is small yet powerful enough to raise a shade).

The FEZ HAT has a great built-in motor driver, so you only need to connect a more powerful external power source to drive the motors. These are the two black wires you see connected to the FEZ HAT on the left.


About the Software

I built the app to be scalable, so there’s a ShadeItemViewModel that contains all the logic to control each shade independently of the others, while still using the same FEZ HAT.

The UI you see is DashboardPage.xaml; it has a GridView at the top with an item for each ShadeItemViewModel. Under the GridView, the light and temperature measurements are displayed in a Telerik RadGauge control with LinearGaugeIndicators.

Each item in the GridView has command buttons to open and close the shade, but also a custom control I made that indicates the current position of the shade.



In the ShadeItemViewModel there are several properties to help tweak the motor speed and how long to run the motor. I considered using a servo for a while, but I’d need to build a very high-ratio gear set to turn the servo’s limited range of rotation into the multiple rotations a shade needs. It’s easier and more scalable to use time.

This way, anyone can change the time and speed of the motor from the settings button to fit their shade’s length. Also, because of the way the “% complete” algorithm works, it will adapt to the current settings and give proper percentages as the shade opens or closes.
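That adaptive calculation can be sketched like this; the class and method names are my own illustration, not necessarily what’s in the repo:

```csharp
using System;

public static class ShadeMath
{
    // Progress is elapsed motor run time over the user-configured total travel
    // time, clamped to 0-100, so it automatically adapts to whatever time and
    // speed settings the user picks for their particular shade.
    public static double GetPercentComplete(TimeSpan elapsed, TimeSpan configuredTravelTime)
    {
        if (configuredTravelTime <= TimeSpan.Zero) return 0.0;

        var percent = (elapsed.TotalMilliseconds / configuredTravelTime.TotalMilliseconds) * 100.0;
        return Math.Max(0.0, Math.Min(100.0, percent));
    }
}
```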

I’m still experimenting with the motors. I’ll be 3D printing up some housings, and likely gears, to connect the motor to the shade. Once that’s complete, I’ll be writing part 2 of this series and give you all an update with any additional STL files so you can print up the parts for the motors.

Until then, enjoy running/exploring the code and trying it out for yourself!