Improvements to the build system

The build system for all (preview) target images has just been updated to reduce complexity and aid interoperability. These changes are particularly relevant for those interested in local builds and debugging of the interpreter.

Until this change, the following applied:

  1. The CI/CD pipeline (based on Azure Pipelines) was self-contained and completely autonomous.
  2. Local builds relied on developers following a recipe to download and install all the required tools. ESP32 developers were luckier than the rest, as there were some PowerShell scripts that downloaded and installed some of the required tools.

The above led to situations where developers struggled to follow the setup and configuration instructions, and became frustrated trying to reconcile the scripts used by Azure Pipelines with the automated install scripts. Not to mention the build instructions/guides in the documentation continually falling behind.

Considering that we want to keep using the powerful Azure Pipelines (which is able to consume PowerShell scripts), the path seemed quite obvious: a common base of PowerShell scripts, consumed either by Azure Pipelines or locally by the developer when needed.

Of course, there will still be variations in all this, mainly because the Azure Pipelines build agent already includes some pre-installed tools, and the download locations differ between the agent and a local machine, where the developer has full control over where the tools are installed. All of this can be factored into the PowerShell scripts and the various Pipelines YAML documents.

So what has changed?

Some wrapper scripts have been added to deal with differences in the required tools for the various target platforms.

New scripts were added to install ALL the required tools. And this time ALL really means all of them (except for Visual Studio Code itself).

Also, a new script was added to wrap the calls to CMake providing a true “fire-and-forget” experience for those who prefer to build locally without having to worry about anything at all.

On top of this, bringing the build system a couple of notches up, there are now launch and build configurations per target, living in each target folder. To make this really useful and developer friendly, a new script is also available to fully set up VS Code. This script sets all of the required entries in settings.json and composes the launch configurations and CMake variants by grabbing the individual configurations from the various targets. And I really mean fully, because the tool paths are also updated with the installs from the previous step.

The build guides have been updated to reflect and describe all this, but I will summarize it here to give a concise overview.

For example, to install all the required tools to build an ESP32 image in a specific location:

.\install-nf-tools.ps1 -TargetSeries ESP32 -Path 'Z:\my-nftools'

To build an image for the STM32 F429 Discovery board:

.\build.ps1 -Target ST_STM32F429I_DISCOVERY

To set up Visual Studio Code:

.\Initialize-VSCode.ps1

Hopefully this is an important step forward where the setup stage is concerned. So, there are no more excuses not to set up a local build system and even debug the interpreter.

What are you waiting for? Go get it and have fun with .NET nanoFramework! 😉

nanoFirmwareFlasher is here!

The nanoFramework toolbox just got a new addition: nanoFramework Firmware Flasher.

This new tool is a CLI provided as a .NET Core Global Tool.

And why is this so special, you may ask?

A .NET Core Global Tool is a special NuGet package that contains a console application. It gets installed on the developer’s machine in the default location and is made globally available in the command prompt.

The install is made with a simple call to the dotnet CLI and updates are provided through NuGet (just like any other NuGet package we are used to).

Check out how easy it is to install:

dotnet tool install -g nanoFirmwareFlasher

How cool is that! 😊
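
Updating later is just as simple, using the standard dotnet CLI command for global tools:

dotnet tool update -g nanoFirmwareFlasher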

It supports targets for both ESP32 and STM32 platforms (the remaining ones will be added shortly).

Now, let’s drill down into the tool’s functionality.

With it, one can update the firmware image right from the official Bintray repositories. This is valid for all development, stable and community targets. The firmware version can be specified, or the latest available package can be used. Just like this:

nanoff --update --platform esp32 --serialport COM31

Along with the firmware package, which includes both nanoCLR and nanoBooter (except for ESP32 targets, which have their own bootloader), it can also include a deployment package (a C# program). This means that the full programming of a device can easily be accomplished, for example in a production environment. And because this is a CLI tool, the full process can be completely automated! A sketch of what that could look like follows.
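
Something along these lines, where the --deploy and --image option names, and the file path, are assumptions based on the tool’s early documentation (confirm them with nanoff --help):

nanoff --update --platform esp32 --serialport COM31 --deploy --image "C:\production\MyApp.bin"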

It also allows backing up the deployment area for ESP32 targets (support for other platforms will be added).
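
Again just as an illustrative sketch (the --backuppath option name and the folder are assumptions; check nanoff --help):

nanoff --platform esp32 --serialport COM31 --backuppath "C:\esp32-backups"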

Restoring a deployment image is also possible for all targets.

Specifically for STM32 targets, it works with both JTAG- and DFU-connected devices. One can even list the devices connected over either of these interfaces. Again, this can be handy in production environments.
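
For example, something along these lines should enumerate the connected devices (option names once more to be confirmed with nanoff --help):

nanoff --listjtag
nanoff --listdfu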

As usual let us know if you find any glitches and what features you’re missing.

If you haven’t done so, make sure you join our Discord community and follow us on Twitter.

Have fun with nanoFramework!

PS: we are “hiring”! If you like developing for embedded systems, join our team.

NuGet, assembly and native versions

nanoFramework C# class libraries are distributed as NuGet packages to be consumed by projects. It has been like this since day one. NuGet packages are practical: easy to distribute, easy to consume, easy to update.

But they had a …minor… problem. Actually, make that two… 😉

One was the dependency between the managed assembly version and the corresponding version of the native end. The initial approach to this (inherited from .NETMF) was that the native version numbering had to follow the managed version. The main reason being that, at deployment time, there is the need to check if the device has support for what is about to be deployed to it. Nothing against the check itself, which is perfectly reasonable. But – and this is a big but – the check was too strict. A detailed explanation has already been provided in this other blog post. Basically, what happened was this: even if the native code hadn’t changed at all, its version was forced to “change” just because it had to bump in order to follow the managed one. This had a consequence that is kind of severe in our context: it forced a firmware update on the target devices, which is something to be avoided at all costs!

This was improved a few weeks ago with the addition of the AssemblyNativeVersion attribute to our class libraries. This attribute is there to decouple the version numbers of the managed assembly and the native implementation. With it, the check for the required version on the target device at deployment time is still possible and, by decoupling it from the managed version, it became possible to keep improving and changing the managed assembly as needed without forcing the native one to follow. Only when there is a real need to change the native part will its version have to be bumped.
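
For illustration, this is roughly how it looks in a class library’s AssemblyInfo.cs; the using directive and the version numbers below are assumptions for the sake of the example, not taken from any specific library:

using System.Reflection;

// Managed assembly version: free to evolve as the C# code changes
[assembly: AssemblyVersion("1.2.0.0")]

// Native implementation version this assembly requires on the target device;
// bumped only when the native (C/C++) counterpart actually changes
[assembly: AssemblyNativeVersion("100.0.0.0")]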

The second problem was that, because of that strict versioning dependency, whenever the version of an assembly changed all the dependent ones had to follow. The consequence was massive, cascading updates that were time consuming and forced everyone and everything to follow along. If the version of mscorlib changed, guess what? All the other class libraries had to change just to follow it!!

This too has been improved. Because of all the above, there is no need for this strict dependency anymore. The native versions have been bumped to a high number to make it clear to developers that the versions do not need to match. So, please, no more attempts to match these, OK?

Along with this, the NuGets have been updated too and the dependencies are no longer strict versions. Check below what the Visual Studio package manager shows for a NuGet package before and after.

Another improvement is that the required native version now shows in the NuGet description. Making this visible removes the obscurity around it, and it can be checked both visually and programmatically.

This native version is what the target device has to support. The native version available can be checked in the Device Explorer in Visual Studio. Below you can see the device capabilities of a target that supports the above NuGet.

Every class library is now free from this lock-step dependency and, from now on, will have its version bumped only when there is a real need to.

The outcome of all this is that now we can start dealing with nanoFramework NuGets like we usually do with any other NuGet package: consume as needed and update when needed, not because we are forced to.

We trust all this will make everyone’s life easier and remove a lot of the friction that has surrounded nanoFramework NuGets.

Have fun! 🙂

SonarCloud is on nanoFramework repos!

Code quality is something that is high on our priority list at nanoFramework.

Being on the list is not of much use by itself, though: quality must be measurable and comparable against an accepted standard.

For this reason, since last week we’ve been busy adding the awesome SonarCloud tool to all our class library repositories on GitHub.

SonarCloud is an analysis tool that checks code for issues and defects that aren’t caught by the compiler but are problems nonetheless. These can take many forms: code that hides unintended behaviour, a potential security breach, a violation of .NET principles and good practices, missing unit test coverage and many others. Findings are grouped into bugs, vulnerabilities, code smells, coverage and duplications, and a global classification of the project is then computed into a score.
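
To make this concrete, here is a made-up example (ours, not taken from a nanoFramework repo) of the kind of issue such a tool flags and the compiler happily accepts: a conditional whose branches are identical, which almost always hides a copy/paste mistake. Shutdown() is just a placeholder method for the sketch.

private void CloseIfExhausted(int retryCount, int maxRetries)
{
    // Flagged by the analyzer: both branches do exactly the same thing,
    // so either the condition is useless or one branch is wrong
    if (retryCount > maxRetries)
    {
        Shutdown();
    }
    else
    {
        Shutdown();
    }
}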

The SonarCloud folks are supporting Open Source projects by offering their tool free of charge. Nice!

Its setup is straightforward and takes only a few minutes per project. SonarCloud provides an Azure Pipelines task, which is great for nanoFramework as it allows us to analyse PRs as they are submitted. The same goes for commits, which show the various scores and metrics right after being built.

This has already pointed out a few code smells and bugs that were lurking in our code.

All in all, we are in very good shape! Just check it out! Most of our repos have their quality gate showing a green “pass” and an “A” score in most of the categories. We do have some Cs and Ds, mostly because of code complexity, which is something that can be improved, but nothing serious.

One area where we are not performing that well (and, to be completely honest, are not performing at all) is code coverage. Our class libraries are still lacking unit tests, mostly because we haven’t yet worked out a practical way of running tests and collecting results when they require deployment to real hardware. But we’ll get there (feel free to contribute)!

 

WE ARE HIRING! 😉 Make sure you join our Discord community.

Deployment image generator

The nanoFramework Visual Studio extension (both the VS2017 and VS2019 versions) just got a new improvement: the ability to generate a “deployment image”. And what the heck is a “deployment image”, you ask? Let me explain in a bit more detail.

To run a C# application, a nanoDevice must have on its storage (which is flash memory on all current target platforms) the following bits:

  • The nanoBooter, responsible for performing the cold boot of the CPU and providing a means to perform updates as well as a safe haven when the CLR gets messed up for some reason. This applies unless the platform has to use its own bootloader, which is the case with the ESP32 and the recently added TI CC3220SF.
  • The nanoCLR, which comprises the IL execution engine, the CLR and the native parts of all the class libraries that the device supports.
  • The configuration block. Optional and only required for network-enabled devices, to store network related settings: network configurations, certificates and Wi-Fi profiles.
  • The PEs (portable executables) of the managed application and the assemblies it references, like class libraries or others.

All of those are made available on different occasions and flashed using different tools.

  • nanoBooter and nanoCLR are made available after compiling nf-interpreter or by downloading them from our Bintray repository. These are flashed using the appropriate tool for the platform: the nanoFramework flasher tool for ESP32, ST-LINK Utility or DfuSe for STM32, and Uniflash for TI.
  • The configuration block is written by the C# application itself or by using the Network Configuration dialog box available in the Visual Studio extension.
  • The PEs of the managed application (and referenced assemblies) are available upon a successful build of a solution in Visual Studio. They are deployed to the nanoDevice when hitting the “Deploy” option for the C# project or when starting a debug session in Visual Studio.

If you’re using nanoFramework to deploy stuff to a couple of boards, the above is quite all right and works perfectly. Now, if you need to commission several boards in a production run, or for a field test using a few dozen boards, things start to get complicated and time consuming. You’ll need to use at least a couple of different tools and possibly connect and disconnect each board a couple of times.

There is already a very handy feature: the generation of a binary file bundling all the PEs (application and referenced assemblies), which can be used to flash the “application” on a target that has already been loaded with nanoBooter & nanoCLR. Yes, that’s the .bin file with the assembly name that you can find in the project output folder after a successful build.

nf-app-binary

With the recent improvement this was taken to the next level: generating a complete deployment image. Yes, you read it correctly, COMPLETE. With all of the above: nanoBooter, nanoCLR, PEs and configuration block.

The nanoBooter, nanoCLR and configuration block are dumped from the connected target’s flash and cached (because that’s a lengthy operation). When the deploy operation occurs (and the PEs are generated and made available), the Visual Studio extension cooks everything together and spits out a nice binary file with the full package.

nf-deployment-image-bin.png

This is working right now for STM32 targets only. Support for other platforms where this programming approach is suitable will follow.

The binary file can be flashed into the device using ST-LINK Utility or, if you’re working with an mbed-enabled ST board (which all recent DISCOVERY and NUCLEO boards are), you can just drop it on the disk drive that shows up in Windows Explorer and it’s burned into the flash in a couple of seconds. How cool is this?

Enough technicalities, let’s see this working! 🙂

Enable the deployment image generation in the (new) settings dialog available in Device Explorer.

nf-enable-deployment-image-generator.png

You can optionally include the configuration block coming from the “seed” device. If you don’t, the region corresponding to the configuration block will be flashed with empty content.

Just below this, you can find the settings for the flash dump file cache. The options are caching to the project output folder (which is the default) or to a location that you specify.

All the above settings are persisted in your Visual Studio user account.

When you hit deploy or start a debug session, the magic happens, and the image is generated.

Something worth noting: the flash contents need to be dumped when they are not available in the cache, and this can take a few seconds to complete. So be patient, and know that this will happen only once for each device.
If you have a lot of devices and work on several projects at the same time, it’s probably wiser to use a central cache…

Now that we have the deployment image, let’s check it out.

Fire up ST-LINK Utility and connect the board. I’m using an STM32F769I-DISCO for this example.

nf-st-link-utility-connected

The board is already flashed with nanoBooter and nanoCLR, and a debug session has been started previously with the Blinky application (meaning that the required PEs are there too). So, it has everything needed to blink the LED. Which it’s doing.

[wpvideo JCaz3qT8]

Next let’s load the file with the deployment image binary.

nf-st-link-utility-open-file

ST-LINK Utility has a nice comparing tool that we’ll use to compare the contents of the deployment image binary and what’s in the device flash.

nf-stlink-compare.png

We need to compare the full flash content, so we’ll enter the flash start address next, which is 0x08000000 for this device.

nf-stlink-compare-start-address.png

The file contents are loaded next and the compare operation starts. I’m guessing that ST-LINK is very thorough about this, because the operation takes quite a while to complete.

nf-stlink-compare-progress.png

And voilà, if there were any doubts, the machine outputs its conclusions and confirms that both contents are identical!

nf-stlink-compare-success

Last test with this: use the disk drive flashing method.

Start by wiping the flash content using the “Full Chip Erase” command in ST-LINK Utility toolbar.

nf-stlink-full-chip-erase

After this you can disconnect the device.

Open Windows Explorer and locate the drive.

nf-mbed-disk-drive.png

Now open the folder where the deployment image file is located (that should be the project output directory) and simply drag and drop it onto the drive representing the connected target. The upload takes a few seconds. After the operation completes, the MCU is reset, nanoBooter loads nanoCLR and finally the managed application is executed.

[youtube https://www.youtube.com/watch?v=jPheVSzfxu0&w=560&h=315]

This can be useful in many situations, like in production runs to load a test application, testing a bunch of boards, keeping a snapshot of a deployment, etc.

If you have an idea on how any of this can be improved, let us know, just send us a Tweet or join our Discord community. 😉

Have fun with nanoFramework!

To deploy, or not to deploy, that’s the question…

nanoFramework class libraries are composed of a managed part (written in C#) and a native counterpart (written in C/C++) that is part of the firmware image running on the target.

As part of the usual development cycle there are improvements, bug fixes and changes. Some of those touch only the managed part, others only the native part, and others both. If the interface between them is not in sync, bad things happen.

In an effort to improve code quality and make developers’ lives easier, two mechanisms were implemented along the way. One was to have version numbers on both parts (that have to match); the other was an improvement introduced in the deployment workflow that checks whether the versions of the assemblies being deployed match the ones available on the target device. Pretty simple and straightforward, right?

Despite the simplicity of the mechanism, it has some drawbacks. Occasionally it gets confusing because of apparent version mismatches, caused by the fact that the develop branch is constantly updated and versions are bumped. Also because the preview versions of the NuGet packages are always available in the nanoFramework MyGet feed but not always on NuGet.

The other aspect that is not that positive is the need to keep the versions in sync, which means that whenever the class library assembly version changes the native version must be bumped too. And that requires flashing the target device again, most of the time without any real reason other than that a version number changed.

This is changing for the better!

As of today, the native version number is independent of its managed counterpart, meaning that it will change when it makes sense and not just because it has to mirror the managed version. Some of you are probably thinking right now that this is a step back and has the potential to cause trouble with obscure version mismatches and deployment hell. This won’t happen, because of something else that we are introducing in all class libraries (and the base class library), an Egg of Columbus, actually: a new assembly attribute on all of those declaring which native version is required. Simple but effective.

From now on this will be used by the deployment provider to validate whether the target has all the required assemblies and versions to safely deploy the application.

Because these introduce several breaking changes, make sure you update the VS extension (2017 and 2019 versions available) and the NuGet packages in the solutions you’re working on, for a smooth transition.

Happy coding with nanoFramework!

All systems green! (again)

After the release of v1.0 we turned a page, and that’s also true of our GitHub repositories’ history. A release means tagging a point in the repository commit history. And suddenly, after that, all hell broke loose in our versioning system!

Continuous deployment and continuous delivery are great, but we must make sure that all of it actually works… especially all by itself, without requiring constant baby-sitting and tweaking. That was not happening here…

To give you some insight: we are using AppVeyor as the build platform (a big thanks to them for making it available for Open Source projects such as nanoFramework) and, until last week, GitVersion to figure out the next version to use when running a build. Out of nowhere GitVersion went nuts and suddenly wasn’t able to compute the next version on some PRs and especially on git tags! After an insane number of hours spent trying to get to the root cause (including AppVeyor support, asking for help on the project page – help that never came – and looking at other projects using it), it was decided that the best course of action was to switch to a better alternative. Switching tools carries a degree of uncertainty and means changing stuff that was working, so it’s not something to be taken lightly… 😕

Surprisingly, a quick search didn’t reveal many options, at least none that looked credible and reliable. One option popped up though: Nerdbank.GitVersioning. Digging a bit deeper, it seems this is the tool that Microsoft has lately been using on some of their major repositories, MSBuild included. So, if it’s good enough to take care of MSBuild, it should be suited to our modest requirements… 😎

Since you ask: between nf-interpreter, the debugger and the class libraries there are now 27 repositories. That’s a lot to maintain!! 😯

Moving forward and climbing the learning curve that comes along with each new tool!

After making the required adjustments and tweaks to YAML, PowerShell, config files and VS projects, we (finally) have all systems green again.

Now, back to coding which is a lot more fun!! 😉

Obfuscation? We have an app for that!

If you’ve been in the .NET world long enough, you’ve probably come across the term obfuscation at some point.

In a nutshell, and paraphrasing Garry Trinder: it “is the process of scrambling the symbols, code, and data of a program to prevent reverse engineering.”

As most of us are aware, there are several tools available to decompile a .NET assembly and expose the code inside. nanoFramework assemblies, being .NET, are no exception.

Why is this a concern? Well, if you are a company and own IP that you don’t want to share publicly (for example a super-secret algorithm that took you years to fine tune), this could be of interest to you.

The main purpose of an obfuscator is precisely that: obfuscating the code so that it becomes unreadable, or at least very hard to understand. There are several techniques for accomplishing this, some more effective than others.
Just to name a few: symbol renaming, overloaded renaming, cross-assembly renaming, inline value and array encryption, control flow obfuscation, resource encryption. But that’s enough, as these details are beyond the scope of this blog post. Check the pictures below to get an idea of how this looks.

Before After
obfuscation_before obfuscation_after

We’ve worked together with the good folks at babelfor.NET to add support for nanoFramework to their .NET obfuscation product: Babel Obfuscator. Earlier this week v9.0 of the tool was released, making it the first (and only!) obfuscation tool supporting nanoFramework.

So, if you have a commercial interest in nanoFramework and sharing your IP was preventing you from moving forward, you don’t have an excuse anymore!