Blog

Network improvements

Network connections are a key feature for use case scenarios that require… connectivity. Nothing too surprising so far, right?

Because of this, being able to easily handle network connections and configurations is something that developers working on such projects value a lot.

Today we would like to present some improvements that have just been added to the .NET nanoFramework network API and features list.

Device Certificate storage

It is now possible to store a device certificate in the device internal storage.

Device certificates can be used to authenticate a device when connecting to Azure IoT Hub, for example. Or, if the device is acting as a web server, to present itself as the valid device to connect to.

Up until now, to use device certificates, one would have to store them on removable storage or, if the certificate was in the application resources, compile a different application for each device.

To upload a device certificate, you need to go to Device Explorer in Visual Studio, open the Network Configuration dialog and move to the last tab.

We are looking into improving the nanoff CLI to allow automated uploading of these in production scenarios.

Use of a device certificate

The next question might be “how to use this device certificate?”

We have given some thought to this in order to make it as easy as possible, and not disruptive at all.

Because the device certificate is usually a secret, we would not want to expose it unnecessarily. Adding to this, and because the certificate can be stored in a secure location (depending on the target), it had to be handled at the lowest possible level.

With all these constraints, and not wanting to change well-known APIs like AuthenticateAsClient(…) and AuthenticateAsServer(…), the choice was to add a new property to the SslStream class: UseStoredDeviceCertificate.

When this is set to true, the magic happens at the lowest possible level, deep in the network layer of the firmware. How? When the SslStream is setting up the authentication step, it will try to retrieve the stored device certificate and use it.

Note that “try” is used in the sentence above. The code will really attempt to retrieve the certificate from the device storage and use it. If it is not there, it will continue as if no certificate was provided and whatever the outcome is, it will be handled by the application.

It is worth noting that setting this new property and passing a certificate in any of the API calls above will cause the certificate passed as a parameter to be ignored and the one stored internally to be used instead. This is valid for both server and client.
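As a rough sketch of what client code might look like (assuming a `Socket` named `socket` that is already connected to the server; the host name is illustrative and the exact `AuthenticateAsClient` overloads may vary with the firmware version):

```csharp
using System.Net.Security;
using System.Net.Sockets;

// Wrap an already-connected socket in an SslStream.
SslStream sslStream = new SslStream(socket);

// Ask the firmware to retrieve the device certificate from internal
// storage during the TLS handshake. If a certificate is also passed to
// AuthenticateAsClient, it is ignored in favor of the stored one.
sslStream.UseStoredDeviceCertificate = true;

// Illustrative host name.
sslStream.AuthenticateAsClient("my-hub.azure-devices.net");
```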

Improved information about network availability

Until now, in order to know whether there was an active network connection, the options offered by our API were limited to subscribing to the NetworkAvailabilityChanged event or checking whether an IP address had been assigned to the network interface.

The event is nice to have during application execution, but it is not that useful at boot time: for most devices and network connections, the connection becomes available almost instantly, and by the time the event fires the application may not yet have set up its event handler, so the event goes unnoticed.
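For reference, subscribing to the event looks like this (a minimal sketch; the handler shape is assumed to follow the full .NET `NetworkChange` API):

```csharp
using System.Diagnostics;
using System.Net.NetworkInformation;

// Subscribe as early as possible: on fast links the event may fire
// before the handler is wired up, and then it simply goes unnoticed.
NetworkChange.NetworkAvailabilityChanged += (sender, e) =>
{
    Debug.WriteLine("Network available: " + e.IsAvailable);
};
```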

The System.Net.NetworkInformation namespace in full .NET has an API to help with this: NetworkInterface.GetIsNetworkAvailable(). Now we have it too!
Oh yes: to know if the network is available at any time, it is a simple matter of calling this method. Convenient, isn’t it?

This call will return true if there is a network connection available on any of the network interfaces in the system. Considering that the vast majority of .NET nanoFramework devices and applications have (or use) a single network interface, calling it is all that is needed to know whether the device has an active network connection!
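A boot-time check then becomes a one-liner (sketch, mirroring the full .NET API surface the text refers to):

```csharp
using System.Net.NetworkInformation;

// Returns true if any network interface has an active connection.
if (NetworkInterface.GetIsNetworkAvailable())
{
    // Safe to start network-dependent work here.
}
```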

Easy access to device IP address

Another pain point was access to the device IP address. With what was available, in order to find out the current IP address of the device, one would have to go through the usual (and lengthy) routine of: 1) grab the network interfaces collection; 2) loop through it to reach the interface; 3) extract the IP address. It works fine, but I think we are all in agreement that it could be easier…

So, again, looking at the full .NET API, there is a method to help with that too: IPGlobalProperties.GetUnicastAddresses(). Because we are the nanoFramework and we do not need all the fancy stuff that is there, we came up with a similar call that provides the IP address of the device’s network interfaces: IPGlobalProperties.GetIPAddress(). Which, again, because most devices will have only one interface, provides straightforward access to the IP address of the only network interface available.
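In code, the lengthy interface-collection walk collapses to a single call (a sketch; the call is written here as `IPGlobalProperties.GetIPAddress()` returning the address of the single active interface, which is an assumption about the exact shape of the new API):

```csharp
using System.Diagnostics;
using System.Net;
using System.Net.NetworkInformation;

// One call instead of grabbing the interface collection and looping.
IPAddress deviceAddress = IPGlobalProperties.GetIPAddress();
Debug.WriteLine("Device IP address: " + deviceAddress.ToString());
```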

Performance (and size) improvements

If there is one matter on which there is (usually) no disagreement, it is performance. Everyone prefers an application that runs quickly and smoothly.

Now, in the process of adding all these new features, debugging them and all that, several code blocks that were begging for improvement were spotted. This resulted in a review and a complete rewrite of a couple of cases. In the end, a performance improvement on several subsidiary calls (like Parse() and ToString()) was accomplished, along with a decrease of several hundred bytes in the size of the library code. All this without any penalty in the native code.

Mbed TLS updated to latest available version

And last, but not least, Mbed TLS was updated to the latest available version (v2.26.0). Being a security-related component, it is important to keep it updated, for obvious reasons. And here it is: the latest and greatest improvements and fixes are now available in the .NET nanoFramework firmware.

Samples have been updated to reflect and benefit from the changes. The network helper class, in particular, has been simplified thanks to these new APIs.

All in all, these are small but important improvements that will make life easier for everyone working with networking. Enjoy!

Unit Test Framework? Yes, we have that too.

Today, I am extremely excited to announce that we have just released the initial version of our Unit Test framework! Yes, Unit Test, as in… Unit Tests! Because it is powered by the Visual Studio Test Platform, you’ll find the attributes that you’re used to decorating your test classes with. Neat!

I believe you will find the usual suspects familiar: attributes such as [TestClass], [TestMethod] and [Setup].
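A minimal test class might look like this (a sketch: the attribute and Assert names shown follow MSTest conventions and are assumptions about the exact nanoFramework.TestFramework API, which may differ by version):

```csharp
using nanoFramework.TestFramework;

[TestClass]
public class MyUnitTests
{
    [Setup]
    public void SetupTests()
    {
        // Runs before the test methods in this class.
    }

    [TestMethod]
    public void AdditionWorks()
    {
        // MSTest-style assertion; exact method names may vary
        // with the package version.
        Assert.AreEqual(4, 2 + 2);
    }
}
```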

Knowing how important this can be for developers serious about Quality Assurance, this was high on our concern list from the beginning of the project. The initial effort on this dates back to October 2018. Back then, we were stopped in our tracks by the lack of extensibility support in the VS Test platform. There was only VS 2017, and we were told by the good folks in the VSTest team that what we were trying to accomplish would become possible with the coming VS 2019. So, we put this project on hold.

Time is always an issue, and this is no different! Up until now there was no one on the team or in the community who was up to it. A couple of weeks ago that changed: Laurent Ellerbach (who has been a great friend of the project and has made several relevant code contributions) revisited this and picked it up from where it was left.

Updating it from the old MS Test framework version was the first step. Then came VS 2019 and, after that, the real fun began!

As with many other aspects of the .NET ecosystem, from NuGet to the VS Project system, we often bump into hard walls because .NET nanoFramework does not have a Target Framework Moniker, and there’s a limit to what we can “hijack” from the existing tools. Fortunately enough, the VS Test platform extensibility is extremely well designed and it’s usable for us.

That does not necessarily mean that this was a walk in the park! No sir. There was still A LOT of plumbing and workarounds that had to be implemented. But, in the end, we have made it and the Unit Test framework is real and ready to be used!

As usual, simplicity is the key here. To use this feature, one just has to reference the respective NuGet package.
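Assuming the package id follows the project’s usual naming (nanoFramework.TestFramework; verify the current id on NuGet), it can be added from the Package Manager console:

```
Install-Package nanoFramework.TestFramework
```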

Hit build and go to the Test Explorer window to see the magic happen (and let me assure you that seeing this in action does look like magic)!

The next steps are to add this to all our class libraries so we can improve the project’s overall quality another notch.

Now that we have this freaking awesome tool, next on the queue for the Test framework are integration tests. Yes, you read it well, we will be able to plug into a real target, deploy something from the build pipeline, run unit tests and collect back the results.

We have prepared a sample project to demo this. You can find it at our samples repo here.

Feedback is welcome along with constructive comments and, of course, Pull Requests! (we love Pull Requests).

Automatic firmware updates

Today we are proud to announce that our Visual Studio extension now supports automatic firmware updates! We hope you agree that this is a major, and much needed feature.

What do you have to do to get this working? Just update the Visual Studio extension to the latest version! Once that is done, just plug in the board as usual and everything will happen auto-magically the next time you run a build. No kidding!

All this happens in the background, pretty much like Windows Update. As soon as a board is connected, its firmware version is checked against our database of compatible boards, and if there is a newer version, the update process is started before the program is deployed. No more tedious work of typing at the command line to keep your devices updated.

There are options to enable/disable the feature and to use preview or stable images. These are accessible in the settings dialog on Device Explorer.

The initial version of this new feature is able to update the CLR images for STM32 targets. We will be adding support for ESP32 and TI targets soon.

It is worth noting that this feature requires firmware version 1.6.0-preview.54 or higher. If the target is running an older version, you’ll have to do it manually using nanoff one last time.

We hope this amazing new feature of the nanoFramework Visual Studio extension makes your development experience that much easier.

You can check a video demoing this functionality on our YouTube channel here.

There is also an FAQ about this feature, in case you have questions about it.

Make sure to report any issues and provide your feedback and suggestions for improvements.

In-field update [WIP]

New features and bug fixes are the bread and butter of the software industry, and embedded systems are no exception. These are made available in releases, which all need to be published and deployed. Quite often the last step is the most challenging of all: as the capabilities and available resources of the device being updated decrease, the difficulty increases.

To be fair, it is not usually a big deal when you have the device sitting on your desktop bench, connected to a cable, and just need to type into a command line tool to get the new image flashed, or (even better!) use a GUI tool.

But what about when the device is sitting in a hard-to-reach (remote) location? Or when there are a bunch of them spread geographically apart? Or devices that may be using a spotty and/or intermittent network connection? Or devices not connected at all, because there is no network available or it is too expensive? And what happens if the update process hangs? Or if there is a power glitch? Or if the image gets corrupted? There are a thousand ways remote updates in microcontroller scenarios can head south…

Nevertheless, being able to remotely update a device deployed in the field is a feature that will most likely be placed high on the “must-have” list of any company evaluating an embedded system framework.

Currently .NET nanoFramework is missing this feature, but that is about to change!

The specification and design of the in-field update (that’s what we’ll be calling it, because the update can reach the target device not only over the air but also through a wired connection or a storage device) is mature and closed, ready to be coded.

Despite being what could be easily classified as a premium feature, it will be (just like everything else in nanoFramework) made available to everyone for free, in full.

Being a high-value and commercially relevant feature, it is more than fair that users, especially commercial ones, contribute monetarily to the development effort. For this purpose, a sponsoring campaign is being launched through the GitHub Sponsors program here.

Feel free to jump into the conversation about this topic, ask questions, get updates, and provide feedback. All that will be happening in the #in-field-update channel at the .NET nanoFramework Discord community.

.NET nanoFramework has joined the .NET Foundation!

We have some great news to share: .NET nanoFramework has reached an important milestone by joining the .NET Foundation!

This is kind of a “return home” for the project. Despite being, undoubtedly, connected to .NET because of its roots, the programming language, and the tools it uses, it was not exactly part of the family. Now it is.

The .NET ecosystem has grown a lot. From desktop to cloud, AI, IoT, and smaller devices, it is now virtually everywhere. Even though there are “official” solutions for embedded systems (from Azure Sphere to, more recently, Azure RTOS and friends), those are somewhat “out of ecosystem” in the sense that developers have to keep using traditional C/C++ in order to code for such devices. If one wants to code in C#, the smallest device you can aim for is a Raspberry Pi.
.NET nanoFramework extends .NET’s reach to really small and constrained devices. Yes, there are a couple of similar offers out there, but none are Open Source. And none of those allows coding for the Espressif ESP32; or ST Microelectronics STM32F0/L0/F4 (to name just a few); or Texas Instruments CC3220 and CC1352; or the NXP MIMXRT1060 EVK; or any other new target device that can easily be added.

It is our expectation that this will boost the project in many ways. Such as:

  • Visibility. This one is kind of obvious, right? Considering that this is one of the goals of the .NET Foundation, more developers will become aware of the project, will start to use it and, hopefully, in the good spirit of Open Source, will give back to the project, ultimately allowing us to move faster and increase the overall quality.
  • Easier access to the .NET teams that we depend most upon, or that have impact on the project. These are Visual Studio, VS extensibility, the .NET project system and such. Every now and then we bump into bugs/changes that break stuff, and being able to report and discuss things quickly and easily is invaluable. Being able to pass along our suggestions and “needs” can only be positive for the continuous growth and overall quality of the project.
  • Tighter integration with the .NET ecosystem. As it is now, .NET nanoFramework is perfectly usable and integrated in the key aspects and tools, including Visual Studio and libraries available through NuGet. But we admittedly have some pain points that could easily go away if such integration happens. Starting with the Project System: having a dedicated target framework identifier would make our life so much easier. Extending the Unit Testing framework so that we can use it in our projects, and finally unifying APIs, would help too. We have already started walking down this path with conversations with the .NET Core IoT team, so the first visible changes are due shortly.

A thank-you note is due to all the people who paved the way and made this possible. When .NET Micro Framework was launched, a long time ago, it showed that writing C# code for microcontrollers was possible. And, of course, it was open sourced! .NET nanoFramework sure is a lot different from what .NETMF was back then, but we are standing on the shoulders of giants and we owe a lot to those very smart and talented people.

All in all, it is a positive event that can only bring good things for the project. We are very happy that it is happening, and it sure feels like a recognition for all the immense effort that a handful of people have poured into .NET nanoFramework for the last 4 years.

A lot of great achievements are ahead of us for sure. Let’s make that happen!

PS: We hate to see wasted talent and positive energy! Join our Discord community. Fork our samples repo. Grab open issues. Fix bugs. Contribute with new features. Write a project for Hackster.io. Have fun with .NET nanoFramework!

Changing licensing to MIT

As some of you may have noticed: we have changed the licensing terms of some of our repositories.

Until yesterday, there were two licenses across our repos: Apache 2.0 and MIT. The oldest ones were under Apache 2.0 and newest ones under MIT.

Since both are the “permissive ones” and, in practice, there is not much difference between them, we thought about making it simpler for everyone (both hobbyists/personal users and companies/commercial users) and unifying them.

We have decided to go with the MIT one because of its simplicity and openness, which is in the spirit of what has been done since the .NET nanoFramework project started.

No big deal, most will say, but we hope this is another step in consolidating the project’s organization and openness.

Have fun!

Improvements on build system

The build system for all (preview) target images have just been updated to reduce complexity and aid interoperability. These changes are particularly relevant for those interested in local builds and debugging of the interpreter.

Until this change, the following applied:

  1. The CD-CI pipeline (based on Azure Pipelines) was self-contained and completely autonomous.
  2. Local builds relied on developers following a recipe to download and install all the required tools. ESP32 developers were luckier than the rest, as there were some PowerShell scripts that downloaded and installed some of the required tools.

The above led to situations where developers struggled to follow the setup and configuration instructions and became frustrated in their attempts to reconcile the scripts used by Azure Pipelines with the automated install scripts. Not to mention the build instructions/guides in the documentation continually falling behind.

Considering that we want to keep using the powerful Azure Pipelines (which is able to consume PowerShell scripts), the path seemed quite obvious: a common base of PowerShell scripts, consumed either by Azure Pipelines or locally by the developer as needed.

Of course, there will still be variations on all this, mainly because the Azure Pipelines build agent already includes some pre-installed tools and the download locations differ between the agent and a local machine, where a developer has full control over where the tools should be installed. All of this can be factored into the PowerShell scripts and the various Pipelines YAML documents.

So what has changed?

Some wrapper scripts have been added to deal with differences in the required tools for the various target platforms.

New scripts were added to install ALL the required tools. This is now real (except for Visual Studio Code).

Also, a new script was added to wrap the calls to CMake providing a true “fire-and-forget” experience for those who prefer to build locally without having to worry about anything at all.

On top of this, bringing the build system a couple of notches up, there are now launch and build configurations per target living in each target folder. To make this really useful and developer friendly, a new script is now available to fully setup VS Code. This script sets all of the required entries in settings.json, composes the launch configuration and CMake variants by grabbing the individual configurations from the various targets. And I really mean ALL because the tool paths are also updated with the setups/install from the previous step.

The build guides have been updated to reflect and describe all this, but I will summarize it here to give a concise overview.

For example, to install all the required tools to build an ESP32 image in a specific location:

.\install-nf-tools.ps1 -TargetSeries ESP32 -Path 'Z:\my-nftools'

To build an image for the STM32 F429 Discovery board:

.\build.ps1 -Target ST_STM32F429I_DISCOVERY

To setup Visual Studio Code:

.\Initialize-VSCode.ps1

Hopefully this is an important step forward as far as the setup stage is concerned. So, there are no more excuses not to set up a local build system and even debug it.

What are you waiting for? Go get it and have fun with .NET nanoFramework! 😉

Stable releases are out!

Today we have completed the publishing of our latest stable releases. This includes all of the firmware images and class libraries.

This comes after a deep rework of some key components, like the metadata processor (which is responsible for processing the .NET IL and making it usable by the .NET nanoFramework CLR and execution engine) and the Visual Studio extension.

The Visual Studio extension is now much more stable and all of those nasty crashes and hangs when interfacing with a device in Visual Studio are now gone.

The firmware updater tool (nanoff) is now able to update STM32 targets with both JTAG and DFU connections, along with the new TI XDC targets.

Too many features were added and bugs fixed to list them all, but the full list can be found in the check-in history on our various repos. The following are some of the highlights:

  • ESP32 RMT and GpioChangeCounter,
  • New platforms and targets like the TI CC1352 and NXP MIMXRT1060_EVK,
  • Several fixes and improvements deep on the execution engine and CLR,
  • ESP32 moved to IDF 3.3.1,
  • TI CC3220 SimpleLink move to 4.10,
  • Target boot and assembly details are now shown in the Visual Studio output window,
  • Debug sessions now start more smoothly on all targets and platforms,
  • A completely new Json C# library with improved and new features.

It is worth mentioning that since the last stable release the community involvement has been excellent. There were a few outstanding collaborations that made some of this release’s features and improvements possible. A big thank you to those developers!

For the next iteration, the plan is ambitious:

  • Improving nanoff to allow updates over USB,
  • Release a C# unit test framework,
  • Improve general QA by adding unit tests to native and managed code,
  • Add IFU capability to all platforms.

And who knows what other goodies we can pick up along the journey…

The .NET nanoFramework project is growing and becoming more feature rich. The stability and overall quality are increasing too. All this requires a lot of effort and time from the maintainers. We sure could use more people coding, reviewing stuff, writing documentation and creating walk-through guides, along with answering questions from developers and helping with the daily chores of Azure Pipelines, GitHub and CD/CI.

Have fun with .NET nanoFramework!

Custom attributes with constructor

Yes, you read that correctly!

nanoFramework just got support for these. You can now create custom attributes with a constructor and access the values passed to the constructor.

Let’s see some code to illustrate this.

Consider these two classes defining attributes.

public class AuthorAttribute : Attribute
{
    private readonly string _author;
    public string Author => _author;
    public AuthorAttribute(string author)
    {
        _author = author;
    }
}
public class MaxAttribute : Attribute
{
    private readonly uint _max;
    public uint Max  => _max;
    public MaxAttribute(uint m)
    {
        _max = m;
    }
}

Now a class using the above attributes on a given field.

public class MyClass1
{
    private readonly int _myField;

    public int MyField => _myField;

    // Field decorated with the two attributes defined above.
    [Max(0xDEADBEEF)]
    [Author("William Shakespeare")]
    public int MyPackedField;
}

And finally reaching those attributes.

var myClass = new MyClass1();

var myFieldAttributes = myClass.GetType().GetField("MyPackedField").GetCustomAttributes(true);
Debug.WriteLine($"\nThe custom attributes of field 'MyPackedField' are:");

MaxAttribute attMax = (MaxAttribute)myFieldAttributes[0];
Debug.WriteLine($"MaxAttribute value is: 0x{attMax.Max.ToString("X8")}");

AuthorAttribute attAuthor = (AuthorAttribute)myFieldAttributes[1];
Debug.WriteLine($"AuthorAttribute value is: '{attAuthor.Author}'");

The above code will output something like this:

The custom attributes of field 'MyPackedField' are:
MaxAttribute value is: 0xDEADBEEF
AuthorAttribute value is: 'William Shakespeare'

Looks pretty simple and trivial, doesn’t it? And it can be very useful too!
Get the full project on our samples repo here.

And what are you waiting for? Go write some code and have fun with nanoFramework! 🙂

PS: make sure you join our lively Discord community and follow us on Twitter.