Creating 3D Icons for your Mixed Reality UWP app

Microsoft has opened up the ability to have a 3D app launcher to all developers in the Fall Creators Update. In this blog post, I’m going to show you how to build your own 3D app launcher from scratch (accompanying YouTube tutorial video embedded below).

What is a 3D app launcher?

App Launchers

Up until now, you’ve only been able to have a regular 2D tile in the Start menu and the user could place a 2D frame of your app on a surface in the Cliff House (the Mixed Reality home) or on a surface in your HoloLens mapped area. Clicking the app frame would launch the app in the same frame.

If your app was an immersive 3D app, launching it would take the user into the immersive experience, but the launcher itself was still just a 2D tile. The user wouldn’t intuitively know that the application is an immersive app. There are some apps that have 3D launchers, for example the HoloTour app that you see in this article’s featured headline image.

Wouldn’t it be nice if your immersive application had a 3D launcher, too? By the end of this blog post, you’ll know how to add one! To get started, let’s take a look at the model and how to build one yourself.

The Model

The first thing you’ll need to understand is that Microsoft requires your model to use the glTF 2.0 specification, and more specifically, the binary format (.glb). To accomplish this, we are going to use Blender.

Blender is a modeling and animation application that is open source and free. You can choose to build your model directly in Blender if you’re familiar with it. Alternatively, build the model in another application and use the Blender glTF Exporter add-on, which is what I’ll show you today.

NOTE: If you already have an FBX model, jump to the Importing a Model section below

Building The Model From Scratch

To keep this tutorial simple, I’ll create a simple UV Sphere in Blender and use a solid color texture. Creating complex models in Blender is a bit out of scope for this post; however, I cover doing it from scratch in this video (if the video doesn’t load, here’s the direct link).

Just be sure to read the next section, even if you followed the video, so that you’re familiar with the restrictions you’ll inevitably encounter while trying to use different models.

 

Importing a Model

Alternatively, you can import other model types, like FBX, into Blender. This is easy, as FBX importing is available out of the box. Take the following steps once you’ve opened Blender:

  1. Select File
  2. Expand Import
  3. Select FBX
  4. Locate and load the file

One thing you’re going to notice is that the model may not be visible in your viewport; it’s likely far too large and off center. To get it in your view, you can use the “View All” shortcut (Home key) or drill into the menu “View > View All” (this menu is underneath the 3D view area). Here’s a screenshot:

Now, look in the Outliner panel (located at the top right of Blender) and find the object named “root”; this is the imported model. Then, to get the right editing options, select the Object properties button (see the red arrow in this screenshot).

Outliner and Object properties editor

Take note of the highlighted Transform properties; we’ll change those next. However, before we do, let’s review some of the guidelines Microsoft has set for 3D app launchers:

  1. The Up axis should be set to “Y”.
  2. The asset should face “forward” towards the positive Z axis.
  3. All assets should be built on the ground plane at the scene origin (0,0,0)
  4. Working units should be set to meters so that assets can be authored at world scale
  5. Meshes do not need to be combined, but combining them is recommended if you are targeting resource-constrained devices
  6. All meshes should share 1 material, with only 1 texture sheet being used for the whole asset
  7. UVs must be laid out in a square arrangement in the 0-1 space. Avoid tiling textures although they are permitted.
  8. Multi-UVs are not supported
  9. Double sided materials are not supported

With these in mind, let’s start editing our mesh. Under the Transform section, take the following actions:

  1. Set Location to 0,0,0
  2. Set Scale to 1,1,1
  3. Now, let’s re-frame the 3D view so we can see the model by using the “View All” shortcut again.

You should see that your model is now at the right place and close to the right scale. Now that you can see what you’re doing, make any additional tweaks so that your model meets #1 and #2 of the Microsoft guidelines.

Lastly, we need to check the model’s triangle count; there is a limit of 10,000 triangles for a 3D app launcher (you can see the triangle count in the top toolbar when the model is selected). Here’s what it looks like:

Mesh Triangle Count

If you need to reduce your triangle count, you can use the Decimate Modifier on your model. Go here to learn more about how to use Decimate (I also recommend checking out a couple of YouTube videos on the topic; Blender is a complex app).

I strongly urge you to go to this documentation and read all the model restrictions and recommendations, such as texture resolutions and workflow. If you use a model that doesn’t meet the guidelines, you’ll see a big red X like this:

Invalid Model

Now that your model is done, it’s time to export it as a glTF binary (.glb) file.

Exporting GLB

By default, Blender doesn’t have a glTF export option, so you’ll want to use the KhronosGroup glTF-Blender-Exporter. Installation of the add-on is pretty straightforward; go here to read the instructions.

You get to choose between two options to add it:

  • Option 1: Set Blender to use repo’s scripts folder (to stay in sync with the exporter’s development)
  • Option 2: Copy a folder into Blender’s folders (I chose this option; scroll down to where the author starts a sentence with “Otherwise”)

Finally, enable the add-on in Blender (last step in the instructions). Once the add-on is enabled, go ahead and export your model! You’ll see a glb option in the File > Export list.

Here’s a screenshot:

Export as glb

 

Setting the 3D Launcher

Now that you have a glb file, it’s time to open your Mixed Reality UWP app in Visual Studio. Once it’s loaded, we need to add the glb file to your app’s Assets folder (right-click on the folder and select “Add > Existing Item”). Once it’s been pulled in, make sure you set the Build Action to Content (right-click on the file, select Properties and change Build Action to Content).

File’s Build Action

Lastly, we need to edit the app’s Package.appxmanifest XML manually. To do this, right-click on the file and select “View Code”. At the top of the file, add a new xmlns declaration and also add it to the IgnorableNamespaces list:


xmlns:uap5="http://schemas.microsoft.com/appx/manifest/uap/windows10/5"

IgnorableNamespaces="uap uap2 uap5 mp"

Next, locate the DefaultTile tag (under Application > VisualElements), expand it and add the uap5:MixedRealityModel element with the path to your glb file:


<uap:DefaultTile ShortName="Channel9 Space Station" Wide310x150Logo="Assets\Wide310x150Logo.png" >
<uap5:MixedRealityModel Path="Assets\Dog.glb" />
</uap:DefaultTile>

Here’s a screenshot, with the additions highlighted:

Package.appxmanifest changes

Final Result

You can see the final result at the end of the video above. Keep in mind that I kept the triangle count down for this, but next time I’ll likely increase it to 64 segments and 32 rings. Additionally, I’ll use a texture that can be mapped around a sphere (the Earth, for example).

If you’re having trouble with your model and want to check your app settings with a known working model, download the glb I created here. I hope this tutorial was helpful, enjoy!


“Cannot download Windows Mixed Reality software” error fix!

Did you just get a new Mixed Reality headset? Were you so excited that you ripped open the box and plugged it in, only to find that after setting up your floor and boundaries you got the following error:

Cannot download Windows Mixed Reality software

Screeching halt!

I spent a lot of time digging around the HoloLens forums and in long conversations on the Holodevelopers Slack, and it seemed there was a wide variety of reasons for this. However, after looking at my Developer Mode settings page (in Windows Settings), I noticed an incomplete dev package installation.

At this point, I suspected I needed to “side load” these packages, bypassing the on-demand download over the network. I just didn’t know where to find them until… my hero, and holographic Jedi, Joost van Schaik (@localjoost) had the same problem and found a fix for it. Joost followed a suggestion from Matteo Pagani (@qmatteoq) to use dism to install the packages manually.

I tweaked his solution (basically just found different packages) so that it worked for a non-Insider Preview build, and it did!

Fix

It turns out that you can get the Features on Demand ISO file for your version of Windows 10 and install the packages manually.

Here are the steps:

1 – Go to the appropriate downloads page for your version of Windows 10

  • Go here if you’re running 1703 (Creators Update)
  • Go here if you’re running 1709 (Fall Creators Update)

2 – Download the Windows 10 Features on Demand ISO file listed there. Note: There may be two ISOs offered for download; I found the cabs I needed in the first ISO.

3 – Mount the ISO file and make sure you see the following files (if you don’t, you’ve got the wrong ISO):

2017-08-03_1200

4 – Open an elevated Command Prompt and run the following commands (replace [YOUR-DRIVE-LETTER] with your mounted ISO’s drive letter):

— Install the holographic package (this is what the Mixed Reality Portal app is failing to download)

dism /online /add-package /packagepath:"[YOUR-DRIVE-LETTER]:\Microsoft-Windows-Holographic-Desktop-FOD-Package.cab"

— Then install the Developer Mode package

dism /online /add-package /packagepath:"[YOUR-DRIVE-LETTER]:\Microsoft-OneCore-DeveloperMode-Desktop-Package.cab"

 

Here’s a screenshot of the result

2017-08-03_1136

 

5 – Open the Mixed Reality Portal app again and bingo, success!!!

2017-08-03_1206

 

Underlying Cause of the Issue

After some discussion with the folks at Microsoft, it turns out that if your PC is using WSUS (Windows Server Update Services), which is normal for a domain-joined PC under a corporate domain policy, this can prevent the download of some components (like .NET 3.5, the Developer Package and the Holographic Package).

You can talk to your IT department and ask them to unblock the following KBs:

  • 4016509
  • 3180030
  • 3197985

 

BIG THANKS to Joost and Matteo 🙂

 

[Updated to add Matteo, fix some grammar and add the info about the KBs]

Easy XamlCompositionBrushBase

The upcoming Windows 10 update, the Fall Creators Update, introduces a new design system, the Microsoft Fluent Design System. The design system has 5 major building blocks:

  • Light
  • Depth
  • Motion
  • Material
  • Scale

You can use any of these in a multitude of ways; however, Microsoft has made them very easy to use in the latest Preview SDK (16190). Some of the things that used to be relatively hard, or at the least verbose, can now be done with a few lines of code.

Today, I want to show you the XamlCompositionBrushBase (aka XCBB). Before I do, let’s briefly run through a XAML Brush and a Composition Brush.

The XAML Brush

We use Brushes in almost everything we do; they paint the elements in an application’s UI. For example, Control.Foreground and Control.Background are both of type Brush.

The most commonly used Brush is the SolidColorBrush. Here’s an example of setting the Foreground brush:

XAML

<TextBlock Foreground="Black"/>

C#

myTextBlock.Foreground = new SolidColorBrush(Colors.Black);

There are other brush types, such as ImageBrush, that are suited to specific scenarios and make our lives easier, because achieving the same result otherwise would require a lot more work.

The Composition Brush

A composition brush utilizes the relatively new Composition layer in Windows 10. This layer sits underneath the XAML layer where all your XAML elements live. You can do a lot with the composition layer: animations, effects and more. The mechanism that paints Composition layer elements also uses a “Brush”; however, it’s a type of Composition Brush and cannot be used with your XAML directly.
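
To make that concrete, here’s a small sketch (not from this article’s sample, and the class/method names are just illustrative) of what working at the Composition layer looks like on its own: the Compositor creates the brush, and the brush paints a SpriteVisual rather than a XAML element’s Background or Foreground.

using System.Numerics;
using Windows.UI;
using Windows.UI.Composition;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Hosting;

public static class CompositionBrushSample
{
    public static void PaintVisual(UIElement host)
    {
        // The Compositor is the factory for all Composition layer objects
        Compositor compositor = Window.Current.Compositor;

        // CompositionColorBrush is the Composition-layer cousin of SolidColorBrush...
        CompositionColorBrush brush = compositor.CreateColorBrush(Colors.CornflowerBlue);

        // ...but it can only paint a Composition visual, not a XAML element
        SpriteVisual visual = compositor.CreateSpriteVisual();
        visual.Size = new Vector2(100, 100);
        visual.Brush = brush;

        // Attach the visual underneath an existing XAML element
        ElementCompositionPreview.SetElementChildVisual(host, visual);
    }
}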

XamlCompositionBrushBase

With the release of the 16190 preview SDK, you can now use the Composition layer and XAML layer together by using the XamlCompositionBrushBase!

This is BIG news because the XCBB allows for interoperability between the Composition layer and the XAML layer, and lets you set up composition effects without needing to implement a behavior or other more advanced setups. As an example, let’s create a Brush that applies the Win2D InvertEffect.

Note: I wanted to keep this as simple as possible to focus on XCBB, you can expand on this with more complex Effects, such as the GaussianBlur here.

First, let’s examine the XCBB’s two methods that you want to override:

  • OnConnected
  • OnDisconnected

So, here’s our starting point:

public class InvertBrush : XamlCompositionBrushBase
{
    protected override void OnConnected()
    {
        // Set up CompositionBrush
        base.OnConnected();
    }

    protected override void OnDisconnected()
    {
        //Clean up
        base.OnDisconnected();
    }
}

Now, for the awesome part… the XCBB has a CompositionBrush property! All you need to do is instantiate your effect. Here’s the completed Brush code and I’ve broken it down to the important steps:

using Microsoft.Graphics.Canvas.Effects;
using Windows.UI.Composition;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Media;

public class InvertBrush : XamlCompositionBrushBase
{
    protected override void OnConnected()
    {
        // Bail out if the CompositionBrush has already been created
        if (CompositionBrush != null) return;

        // 1 - Get the BackdropBrush
        // NOTE: BackdropBrush is what is behind the current UI element (also useful for Blur effects)
        var backdrop = Window.Current.Compositor.CreateBackdropBrush();

        // 2 - Create your Effect
        // New-up a Win2D InvertEffect and use a CompositionEffectSourceParameter named "backdrop" as its Source
        var invertEffect = new InvertEffect
        {
            Source = new CompositionEffectSourceParameter("backdrop")
        };

        // 3 - Get an EffectFactory
        var effectFactory = Window.Current.Compositor.CreateEffectFactory(invertEffect);

        // 4 - Get a CompositionEffectBrush
        var effectBrush = effectFactory.CreateBrush();

        // and set the backdrop as the original source
        effectBrush.SetSourceParameter("backdrop", backdrop);

        // 5 - Finally, assign your CompositionEffectBrush to the XCBB's CompositionBrush property
        CompositionBrush = effectBrush;

    }

    protected override void OnDisconnected()
    {
        // Clean up
        CompositionBrush?.Dispose();
        CompositionBrush = null;
    }
}

Now that the Brush’s definition is complete, how do we actually use it? This is the most exciting part… you use it like any other Brush in XAML!


<Grid>
    <Grid.Background>
        <brushes:InvertBrush />
    </Grid.Background>
</Grid>

Showtime

Here’s an example implementation. I have an Ellipse with an ImageBrush, and it’s currently showing Tuvok (full disclosure: I’m a Trekkie AND a Star Wars fan):

<Ellipse x:Name="ImageEllipse">
    <Ellipse.Fill>
        <ImageBrush ImageSource="{Binding SelectedCharacter.ImagePath}" Stretch="UniformToFill" />
    </Ellipse.Fill>
</Ellipse>

Sketch (2)

If I put another matching Ellipse using my custom InvertBrush on top of the Tuvok Ellipse, here’s the result:


<Ellipse x:Name="ImageEllipse">
    <Ellipse.Fill>
        <ImageBrush ImageSource="{Binding SelectedCharacter.ImagePath}" Stretch="UniformToFill" />
    </Ellipse.Fill>
</Ellipse>
<Ellipse>
    <Ellipse.Fill>
        <brushes:InvertBrush />
    </Ellipse.Fill>
</Ellipse>

Sketch (3)

Notice how it only inverted what was directly behind the Ellipse and not the page background or the drop shadow?

Level Up

In the case of the InvertEffect, we don’t have any effect variables to update, so there’s no need for a DependencyProperty to set initial values of the effect. However, in most cases, you will need a DependencyProperty in your XCBB to tweak the effect’s values.

To see this, look at the BackdropBlurBrush example here and notice how the Blur.BlurAmount effect property can be updated by using a ScalarParameter when calling CreateEffectFactory.
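
The shape of that pattern looks roughly like this. This is a trimmed-down sketch in the spirit of the linked sample, not a copy of it; it assumes the GaussianBlurEffect in OnConnected was given Name = "Blur" and that "Blur.BlurAmount" was passed to CreateEffectFactory as an animatable property.

public class BackdropBlurBrush : XamlCompositionBrushBase
{
    public static readonly DependencyProperty BlurAmountProperty =
        DependencyProperty.Register(
            nameof(BlurAmount),
            typeof(double),
            typeof(BackdropBlurBrush),
            new PropertyMetadata(0d, OnBlurAmountChanged));

    public double BlurAmount
    {
        get => (double)GetValue(BlurAmountProperty);
        set => SetValue(BlurAmountProperty, value);
    }

    private static void OnBlurAmountChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // Push the new XAML value down to the effect running in the Composition layer
        var brush = (BackdropBlurBrush)d;
        brush.CompositionBrush?.Properties.InsertScalar("Blur.BlurAmount", (float)(double)e.NewValue);
    }

    // OnConnected/OnDisconnected are built just like the InvertBrush above, except the
    // effect is a GaussianBlurEffect named "Blur" and CreateEffectFactory is called with
    // new[] { "Blur.BlurAmount" } so the property stays updatable after the brush is created.
}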

I hope I was able to show how easy it is to get started with the XCBB, and how it lets XAML devs get the benefits of working with the Composition layer without a lot of work.

Happy coding!

Lance

Smart Shades with Windows IoT 1

I wanted to be able to have my office shades go up or down automatically, depending on the amount of light outside. Then I thought, why not take it to the next level with Windows IoT? I could design a UI with the help of the recently open-sourced Telerik UI for UWP and also leverage a FEZ HAT and Raspberry Pi 3 to drive the motors and detect light levels and temperature.

Why would I want to also measure temperature? Because if it were too hot inside, I wouldn’t want to open the shades even if it were light outside.

I’ll go into detail about the software and hardware, but before I do, here’s the GitHub repo with the source code, and let’s watch a short video of the UI animation and motor in action.

About the Hardware

The hardware you see is a Raspberry Pi 3 (it can also be a Pi 2), a FEZ HAT, and a 90-degree motor (like this one). That motor is connected to the FEZ HAT’s Motor A connection; however, I also have a second, smaller 1:150 motor connected to Motor B (I am still testing out the torque of different motors to see which is small and powerful enough to raise a shade).

The FEZ HAT has a great built-in motor driver, so you only need to connect a more powerful external power source to drive the motors. These are the two black wires you see connected to the FEZ HAT on the left.

2017-02-14_1600.png
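
If you haven’t driven a FEZ HAT from UWP before, here’s roughly what it looks like using GHI’s driver. This is a sketch from memory of their samples (the GHIElectronics.UWP.Shields package), not code from my repo, so treat the member names as assumptions and verify them against the library; the ShadeHardware class itself is hypothetical.

using System;
using System.Threading.Tasks;
using GHIElectronics.UWP.Shields;

public class ShadeHardware
{
    private FEZHAT hat;

    public async Task InitializeAsync()
    {
        // Connects to the HAT over I2C on the Raspberry Pi
        this.hat = await FEZHAT.CreateAsync();
    }

    // Light level and temperature come from the HAT's onboard sensors
    public double ReadLightLevel() => this.hat.GetLightLevel();
    public double ReadTemperature() => this.hat.GetTemperature();

    public async Task RunMotorAsync(double speed, TimeSpan duration)
    {
        // Speed is roughly -1.0 (full reverse) to 1.0 (full forward)
        this.hat.MotorA.Speed = speed;
        await Task.Delay(duration);
        this.hat.MotorA.Speed = 0; // stop
    }
}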

About the Software

I built the app to be scalable, so there’s a ShadeItemViewModel that contains all the logic to control each shade independently of the others while still using the same FEZ HAT.

The UI you see is DashboardPage.xaml; it has a GridView at the top with an item for each ShadeItemViewModel. Under the GridView are the light and temperature measurements, displayed in a Telerik RadGauge control with LinearGaugeIndicators.

Each item in the GridView has command buttons to open and close the shade, but also a custom control I made that indicates the current position of the shade.

2017-02-14_16-12-43.jpg

 

In the ShadeItemViewModel there are several properties to help tweak the motor speed and how long to run the motor. I considered using a servo for a while, but I’d need to build a very high-ratio gear set to turn the limited degrees of the servo into full rotations of the shade. It’s easier and more scalable to use time.

This way anyone can change the time and speed of the motor from the settings button to fit their shade’s length. Also, the way the “% complete” algorithm works, it will adapt to the current settings and give proper percentages as the shade opens or closes.
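
To make the timing idea concrete, here’s a simplified sketch of the calculation. The names (OpenDuration, Elapsed, PercentComplete) are illustrative, not the actual members of ShadeItemViewModel: the percent complete is just elapsed motor run time over the configured duration, so it adapts to whatever duration the user picks.

using System;

public class ShadeTimingExample
{
    // User-configurable: how long the motor must run to fully open this particular shade
    public TimeSpan OpenDuration { get; set; } = TimeSpan.FromSeconds(10);

    // How long the motor has been running during the current open/close operation
    public TimeSpan Elapsed { get; set; }

    // Adapts automatically to whatever duration the user configured
    public double PercentComplete =>
        Math.Min(100, Elapsed.TotalMilliseconds / OpenDuration.TotalMilliseconds * 100);
}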

I’m still experimenting with the motors. I’ll be 3D printing up some housings, and likely gears, to connect the motor to the shade. Once that’s complete, I’ll be writing part 2 of this series and give you all an update with any additional STL files so you can print up the parts for the motors.

Until then, enjoy running/exploring the code and trying it out for yourself!

 

Manipulation is easier than you think

You know that bottom drawer on the Windows 10 navigation app where the upcoming turns are in a list? You know how you can drag it up or down to show more or less of the content? Want to learn how to create it? You’re in luck because that’s what today’s topic is!

Here’s the result of what you’ll be able to do:
navdrawer

 

The Approach

Let’s get started. Since there are no built-in controls that do this, we’ll create a simple layout (links to source code at the end of the article).

There are the three major sections to the layout:

  1. A root container for the main content – This can just be a Grid that fills the page, nothing special here. In my demo I use an Image control with a picture of a map to keep the concept simple.
  2. A handle that the user will drag up and down – We’ll do that with a thin Grid that contains something so the user knows it can be manipulated (I used a horizontal ellipsis). The responsibility of this Grid is to hook into the ManipulationStarted, ManipulationDelta and ManipulationCompleted events.
  3. A drawer container for the content that will be moved –  This is also just another Grid, but it will be on top of the main content Grid. This Grid should have two Rows, one for the “handle” and one for the ListView that holds the example navigation route steps.

There are two main ways to approach moving the drawer container:

  1. We can translate (move) the entire drawer from off-screen to on-screen
  2. We can increase the height of the drawer container

Since we want the area inside the drawer to become larger or smaller, we need to use option #2. Changing the height comes with a cost: the content inside will be forced to do layout passes whenever the height changes.

However, this is in fact what we want, because if we did a translate, then a portion of the container would be off the screen and some of the content couldn’t be reached (e.g., the ListView wouldn’t be able to show the last item because it would be offscreen).

If you don’t need to cause layout changes and only want to move it off-screen, take a look at the tutorial here where it shows you how to use a TranslateTransform.
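
For comparison, option #1 would look roughly like this (a sketch, not the linked tutorial’s code; the drawerTransform field is an assumption): the drawer keeps its full height and is simply slid up and down by a TranslateTransform in the delta handler, which is exactly why content near the bottom can end up unreachable.

// In the page's code-behind; assumes DrawerContentGrid.RenderTransform was set to this transform
private readonly TranslateTransform drawerTransform = new TranslateTransform();

private void HandleGrid_OnManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    // The drawer keeps its full height and just slides, so no layout pass is triggered,
    // but anything pushed below the bottom edge of the page becomes unreachable
    drawerTransform.Y += e.Delta.Translation.Y;
}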

The XAML

Okay, let’s get started with the XAML. Here’s the page layout. You’ll see that the “HandleGrid” has manipulation event handlers defined:


<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Grid x:Name="MainContentGrid">
            <Image Source="Assets/FakeMap.png" VerticalAlignment="Top" />
        </Grid>

        <Grid x:Name="DrawerContentGrid" VerticalAlignment="Bottom" Background="{ThemeResource AppBarBackgroundThemeBrush}" RenderTransformOrigin="0.5,0.5">

            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition />
            </Grid.RowDefinitions>

            <Grid x:Name="HandleGrid" ManipulationStarted="HandleGrid_OnManipulationStarted" ManipulationDelta="HandleGrid_OnManipulationDelta" ManipulationCompleted="HandleGrid_OnManipulationCompleted" ManipulationMode="TranslateY" Height="15" Background="{ThemeResource AppBarBorderThemeBrush}" BorderThickness="0,1,0,1" BorderBrush="{ThemeResource AppBarToggleButtonCheckedDisabledBackgroundThemeBrush}">
                <SymbolIcon Symbol="More" />
            </Grid>

            <Grid x:Name="DrawerContent" Grid.Row="1">
                <ListView x:Name="RouteSteps" ItemsSource="{Binding RouteSteps}">
                    <ListView.ItemTemplate>
                        <DataTemplate>
                            <Grid>
                                <Grid.ColumnDefinitions>
                                    <ColumnDefinition Width="Auto" />
                                    <ColumnDefinition />
                                </Grid.ColumnDefinitions>

                                <Viewbox Width="48" Height="48">
                                    <Canvas Width="24" Height="24">
                                        <Path Data="{Binding Icon}" Fill="Black" />
                                    </Canvas>
                                </Viewbox>

                                <TextBlock Text="{Binding Summary}" TextWrapping="Wrap" Margin="10,0,0,0" Grid.Column="1" VerticalAlignment="Center" />
                            </Grid>
                        </DataTemplate>
                    </ListView.ItemTemplate>
                </ListView>
            </Grid>
        </Grid>
    </Grid>

 

The C#

Now, let’s take a look at the event handlers. In the ManipulationStarted and ManipulationCompleted event handlers I’m only changing the background brush to the accent color (and back). This lets the user know they’re in contact with the handle and that it can be moved.


private void HandleGrid_OnManipulationStarted(object sender, ManipulationStartedRoutedEventArgs e)
{
    var themeBrush = Application.Current.Resources["AppBarToggleButtonBackgroundCheckedPointerOver"] as SolidColorBrush;

    if (themeBrush != null) HandleGrid.Background = themeBrush;
}

private void HandleGrid_OnManipulationCompleted(object sender, ManipulationCompletedRoutedEventArgs e)
{
    var themeBrush = Application.Current.Resources["AppBarBorderThemeBrush"] as SolidColorBrush;

    if (themeBrush != null) HandleGrid.Background = themeBrush;
}

 

The actual manipulation of the drawer’s height happens in the ManipulationDelta handler: we take the current height and add the negated Y delta (dragging up produces a negative Y translation, so negating it grows the drawer), then set the Grid’s height to that sum.


private void HandleGrid_OnManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    DrawerContentGrid.Height = DrawerContentGrid.ActualHeight + -e.Delta.Translation.Y;
}

 

As I mentioned earlier, changing the height of the container means that the bottom edge will always be visible, thus allowing the user to scroll to the last item in the ListView. Just keep in mind that layout passes can be expensive depending on how much content you have in there.
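
One optional refinement (not in the demo code): clamp the computed height so the drawer can’t be dragged taller than the page or collapsed below zero. A sketch of what that could look like in the same handler:

private void HandleGrid_OnManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    var desiredHeight = DrawerContentGrid.ActualHeight + -e.Delta.Translation.Y;

    // Keep the drawer between 0 and the page's full height
    DrawerContentGrid.Height = Math.Max(0, Math.Min(desiredHeight, this.ActualHeight));
}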

That’s all there is to it, now go add some gestures to your app (and don’t forget to make them discoverable)!

 

Source Code

Here are the three relevant files to the demo:

Surface Dial and Real-Time Video Effects

I was given a Surface Dial the other day and I thought, “What can I do with this to create a better experience for the user?” One thing came right to mind: applying real-time video effects.

I have a UWP app in the Windows Store, Video Diary, where you can apply real-time video effects while recording a video. One of the features of these effects is the ability to increase or decrease the effect’s properties, for example the intensity of a Vignette effect. Here’s what I want:

2016-11-10_19-39-54

So I whipped out the RadialController API documentation and dug in. It turns out to be extremely simple; here is the result:

 

Let’s take a look at the code.

Note: Going into the specifics of applying real-time video effects is out of scope for this article. You can see the source code for this demo app here to see how it’s done, or you can see my DynamicBlur Video Effect contribution to the official Win2D Demo app.

Since I didn’t want to go too crazy with the Surface Dial for my first demo, I thought about how the controller can be interacted with: turning the dial and clicking down on it. So I thought, why not use the menu to select a video effect and the rotation to change the effect’s intensity? Let’s get started.

First, when the page loads, I need to get a handle to the RadialController:

 

dialController = RadialController.CreateForCurrentView();

 

Next, I want to hook into the event that fires when the dial is turned and set the rotation resolution:

dialController.RotationResolutionInDegrees = 1;
dialController.RotationChanged += DialControllerRotationChanged;

 

Now, I want to make some room before adding my custom menu items, so I grab a handle to the RadialControllerConfiguration and assign it just one default menu item:

var config = RadialControllerConfiguration.GetForCurrentView();
config.SetDefaultMenuItems(new[] { RadialControllerSystemMenuItemKind.Scroll });

 

I need to add some menu items to the circular menu that appears when the dial is clicked. For this, I just iterate over the list of effects I added, create a RadialControllerMenuItem for each one and hook into its Invoked event:

foreach (var effect in PageViewModel.VideoEffects)
{
    // Create a menu item, using the effect's name and thumbnail
    var menuItem = RadialControllerMenuItem.CreateFromIcon(effect.DisplayName,
        RandomAccessStreamReference.CreateFromUri(new Uri(effect.IconImagePath)));

    // Hook up it's invoked event handler
    menuItem.Invoked += MenuItem_Invoked;

    // Add it to the RadialDial
    dialController.Menu.Items.Add(menuItem);
 }

 

The menu item’s Invoked event handler is fired when an effect is chosen by the user. I get the selected effect by matching each effect’s DisplayName against the menu item’s DisplayText (via the RadialControllerMenuItem sender):

private async void MenuItem_Invoked(RadialControllerMenuItem sender, object args)
{
    var selectedEffect = PageViewModel.VideoEffects.FirstOrDefault(
        e => e.DisplayName == sender?.DisplayText);

    // apply effect
 }

 

At this point, the effect is applied to the video stream. So we need to switch our focus to the RadialController’s RotationChanged event handler. This is where I can get the rotation delta (which direction it was turned and by how much) from the RotationDeltaInDegrees property of the RadialControllerRotationChangedEventArgs.

Since I also have a slider in the UI for the user to change the value (because not every user is going to have a Surface Dial!), I update the slider’s value directly:

 

private void DialControllerRotationChanged(RadialController sender, RadialControllerRotationChangedEventArgs args)
{
    SelectedEffectSlider.Value += args.RotationDeltaInDegrees / 100;
    UpdateEffect();
}

 

Now in the UpdateEffect method, I can use the slider’s new value to apply the effect change:

 

private void UpdateEffect()
{
    // Update effect's values
    PageViewModel.SelectedEffect.PropertyValue = (float) SelectedEffectSlider.Value;
    effectPropertySet[PageViewModel.SelectedEffect.PropertyName] = (float) PageViewModel.SelectedEffect.PropertyValue;
}

 

That’s it! Check out the video above to see the app in action and see the full source code here on GitHub.

 

 

 

 

Build A Custom Win2D RadImageEditor Tool

Telerik has recently ported the RadImageEditor to Windows Universal (8.1 Universal right now, with UWP coming very soon). It is quite powerful for something that needs only a few lines of code to use the 20 predefined tools.

But what if you wanted something not in those 20? Or what if you didn’t want to have a dependency on the Lumia Imaging SDK (needed for the built-in tools)?

One great feature of RadImageEditor is the ability to make a custom tool, tool group or layer. Today I’ll show you how to create a Win2D tool group and add a custom GaussianBlurTool.

Here’s the result:

 

Let’s start with the tool class.

RadImageEditor provides four classes that you can inherit from to make your tool:

  • ImageEditorTool: The most basic tool type.
  • RangeTool: The effect of these tools can vary within a predefined range of values. You get a Slider for user input.
  • ImageEditorTransformTool: Allows the user to physically transform the image with gestures.
  • ImageEditorEffectTool: These tools do not support any configuration; they directly apply an effect when the tool is selected.

 

Since Win2D’s GaussianBlurEffect only needs a float value to apply a blur, RangeTool is the best fit. You will need to override a few things:

  • string Name (name of the effect to be shown in the tool group)
  • string Icon (the string path to the icon image file)
  • async Task<WriteableBitmap> ApplyCore (this Task is where you apply your effect)
  • double Min (this is for the Slider shown to the user)
  • double Max (this is for the Slider shown to the user)

ApplyCore is the one that needs a little further explanation. It gets passed two objects:

  • IRandomAccessStream stream (This is the unmodified image from the StorageFile)
  • WriteableBitmap targetBitmap (after applying an effect, copy the pixels into this and return it)

Now that we’re armed with that information, we can get to work on the Win2D blur. Explaining how Win2D works is out of scope for this article, but what you need to know is that we can create a CanvasBitmap from the stream, apply an effect to it, copy the resulting pixels into the target bitmap and return it.

Here’s the code for the tool (GitHub gist here):


public class GaussianBlurTool : RangeTool
    {
        public override string Name => "Gaussian Blur";

        public override string Icon => "ms-appx:///CustomTools/ToolIcons/blur.png";

        public override double Min => 0; //Minimum value for the slider

        public override double Max => 20; //Maximum value for the slider

        protected override async Task<WriteableBitmap> ApplyCore(IRandomAccessStream stream, WriteableBitmap targetBitmap)
        {
            try
            {
                stream.Seek(0);

                using (var device = CanvasDevice.GetSharedDevice())
                using (CanvasBitmap cbm = await CanvasBitmap.LoadAsync(device, stream))
                using (CanvasRenderTarget renderer = new CanvasRenderTarget(device, cbm.SizeInPixels.Width, cbm.SizeInPixels.Height, cbm.Dpi))
                using (CanvasDrawingSession ds = renderer.CreateDrawingSession())
                {
                    var blur = new GaussianBlurEffect
                    {
                        BlurAmount = (float) this.Value,
                        Source = cbm
                    };

                    ds.DrawImage(blur);
                    ds.Flush(); //important, this forces the drawing operation to complete

                    await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(CoreDispatcherPriority.High, () =>
                    {
                        //IMPORTANT NOTE:
                        //You need to add using System.Runtime.InteropServices.WindowsRuntime in order to use CopyTo(IBuffer)
                        renderer.GetPixelBytes().CopyTo(targetBitmap.PixelBuffer);
                    });
                }

                return targetBitmap;
            }
            catch (Exception ex)
            {
                Debug.WriteLine($"ApplyCore in GaussianBlurTool Exception: {ex}");
                throw;
            }
        }
    }

The XAML

Now how do we use this? You define the tool group within the RadImageEditor as a custom ToolGroup. I named my tool group “Win2D Effects” and inside it placed the GaussianBlurTool. Note that you can place more than one tool in a tool group; I plan on adding more Win2D tools in there.


<input:RadImageEditor x:Name="MyImageEditor">
    <imageEditor:ImageEditorToolGroup Name="Win2D Effects" Icon="ms-appx:///CustomTools/ToolIcons/Win2DToolGroupIcon.png">
        <customTools:GaussianBlurTool />
    </imageEditor:ImageEditorToolGroup>
</input:RadImageEditor>

 

That’s it! Now go forth and extend the RadImageEditor with some great Win2D goodness and let me know how it goes.