Unit Test Framework? Yes, we have that too.

Today, I am extremely excited to announce that we have just released the initial version of our Unit Test framework! Yes, Unit Test, like in… Unit Tests! Because it is powered by the Visual Studio Test Platform, you'll find the attributes that you're used to decorating your test classes with. Neat!

I believe you will find this familiar: the usual attributes on your test classes and methods, and the assertion helpers you already know.
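As a rough illustration, a test class can look something like the sketch below. The nanoFramework.TestFramework namespace and the exact assertion helper names are assumptions here, so check the package documentation for the precise API:

```csharp
using nanoFramework.TestFramework;

[TestClass]
public class MathTests
{
    [TestMethod]
    public void AddingTwoNumbersGivesTheExpectedResult()
    {
        // Arrange / Act
        int sum = 2 + 3;

        // Assert: the helper name below is illustrative and may differ
        // from the actual API exposed by the test framework package.
        Assert.Equal(5, sum);
    }
}
```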

Knowing how important this can be for developers serious about Quality Assurance, this was high on our concern list from the beginning of the project. The initial effort on this dates back to October 2018. Back then, we were stopped in our tracks by the lack of extensibility support in the VS Test platform. There was only VS2017, and we were told by the good folks on the VSTest team that what we were trying to accomplish would be possible with the upcoming VS2019. So, we put this project on hold.

Time is always an issue, and this is no different! Up until now, no one on the team or in the community was able to take it on. A couple of weeks ago that changed: Laurent Ellerbach (who has been a great friend of the project and has made several relevant code contributions) revisited this and picked it up from where it was left.

Updating it from the old MS Test framework version was the first step. Then came VS2019, and after that the real fun began!

As with many other aspects of the .NET ecosystem, from NuGet to the VS project system, we often bump into hard walls because .NET nanoFramework does not have a Target Framework Moniker, and there’s a limit to what we can “hijack” from the existing tools. Fortunately, the VS Test platform extensibility is extremely well designed and is usable for us.

That does not necessarily mean that this was a walk in the park! No sir. There was still A LOT of plumbing and workarounds that had to be implemented. But, in the end, we have made it and the Unit Test framework is real and ready to be used!

As usual, simplicity is the key here. To use this feature, one just has to reference the respective NuGet package:
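For reference, this typically ends up as an entry in the test project's packages.config, along these lines (the package id and version below are shown only as an illustration, use whatever the feed currently lists):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Illustrative entry: the version shown is a placeholder, use the latest from the feed. -->
  <package id="nanoFramework.TestFramework" version="1.0.0" />
</packages>
```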

Hit build and go to the Test Explorer window to see the magic happen (and let me assure you that seeing this in action does look like magic)!

The next step is to add this to all our class libraries so we can take the project’s overall quality up another notch.

Now that we have this freaking awesome tool, next in the queue for the Test framework are integration tests. Yes, you read that right: we will be able to plug in a real target, deploy to it from the build pipeline, run the unit tests and collect the results.

We have prepared a sample project to demo this. You can find it in our samples repo here.

Feedback is welcome along with constructive comments and, of course, Pull Requests! (we love Pull Requests).

New preview feeds for nanoFramework!

Until now nanoFramework has been using MyGet as its NuGet feed provider to host preview and test packages.

MyGet is a decent provider but only offers 100MB of storage on their free tier. Actually, they offer 2GB to OSS projects (which is great), and we applied for that, but it turns out that setting up the account for the OSS offer is an absolute pain! After more than two weeks and dozens of emails exchanged with their support, we’ve decided to just move.

We already run most of our infrastructure on Azure DevOps, and support for public feeds in Azure Artifacts was recently announced at Microsoft Build 2019. It supports NuGet packages and the offer includes 2GB of storage. Considering all this, we’ve decided to move everything there.

How does this impact you as a C# developer consuming NuGet packages? Not much! It’s only a matter of editing the NuGet package source in Visual Studio to point to the new feed and you’re done.

The URL for the new feed is:

https://pkgs.dev.azure.com/nanoframework/feed/_packaging/sandbox/nuget/v3/index.json
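If you prefer to add the source by hand instead of through the Visual Studio options dialog, a nuget.config along these lines does the job (a minimal sketch; the source name is arbitrary):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- .NET nanoFramework preview packages hosted on Azure Artifacts -->
    <add key="nanoframework-preview" value="https://pkgs.dev.azure.com/nanoframework/feed/_packaging/sandbox/nuget/v3/index.json" />
  </packageSources>
</configuration>
```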

This impacts the VS extension preview versions too. We are now publishing those on the Open VSIX Gallery. Likewise, if you are eager to get the latest and greatest, and you’re already subscribed to the preview feed, you have to update the URL to:

http://vsixgallery.com/feed/author/nanoframework/

For a detailed description of how to set all this up in Visual Studio, please read the earlier blog post here.

Have fun with nanoFramework!

NuGet, assembly and native versions

nanoFramework C# class libraries are distributed as NuGet packages to be consumed by projects, and it has been like this since day one. NuGet packages are practical: easy to distribute, easy to consume, easy to update.

But they had a …minor… problem. Actually, make that two… 😉

One was the dependency between the managed assembly version and the corresponding version of the native end. The initial approach (inherited from .NETMF) was that the native version numbering had to follow the managed version. The main reason being that, at deployment time, there is the need to check whether the device has support for what is about to be deployed to it. Nothing against this check, which is perfectly reasonable. But – and this is a big but – the check is too strict. A detailed explanation has already been provided in this other blog post. Basically, what happened was this: even if the native code hadn’t changed at all, its version was forced to “change” just because it had to bump in order to follow the managed one. This has a consequence that is rather severe in our context: it forces a firmware update on the target devices, which is something to be avoided at all costs!

This was improved a few weeks ago with the addition of the AssemblyNativeVersion attribute to our class libraries. This attribute decouples the version numbers of the managed assembly and the native implementation. With it, the check on the required version at deployment time is still possible and, because it is decoupled from the managed version, the managed assembly can keep improving and changing as needed without forcing the native one to follow. Only when there is a real need to change the native part will its version have to be bumped.
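For context, the attribute is applied at the assembly level, typically from AssemblyInfo.cs. The snippet below is only a sketch, with a placeholder version number; the attribute itself ships with the nanoFramework core library:

```csharp
// AssemblyInfo.cs (sketch): declares which native implementation version
// this managed assembly requires on the target device.
// The version value below is a placeholder for illustration only.
[assembly: AssemblyNativeVersion("100.0.0.0")]
```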

The second problem was that, because of that strict versioning dependency, whenever the version of an assembly changed, all the dependent ones had to follow. The consequence was massive, cascading updates that are time consuming and force everyone and everything to follow along. If the version of mscorlib changed, guess what? All the other class libraries had to change just to follow it!

This too has been improved. Because of all the above, there is no need for this strict dependency anymore. The native versions have been bumped to a high number to make it clear to developers that the versions do not need to match. So, please, no more attempts to match them, OK?

Along with this, the NuGets have been updated too, and the dependencies are no longer strict versions. Check below what the Visual Studio package manager shows for a NuGet package before and after.

Another improvement in the NuGets is that the required native version now shows in the NuGet description. Making this visible removes the obscurity around it, and it can be checked both visually and programmatically.

This native version is what the target device has to support. The native version available on a device can be checked in the Device Explorer in Visual Studio. Below you can see the device capabilities of a target that supports the above NuGet.

Every class library is now free from this lock-step dependency and, from now on, will have its version bumped only when there is a real need to.

The outcome of all this is that now we can start dealing with nanoFramework NuGets like we usually do with any other NuGet package: consume as needed and update when needed, not because we are forced to.

We trust all this will make everyone’s life easier and remove a lot of the friction that was around nanoFramework NuGets.

Have fun! 🙂

High Level Programming Languages for Embedded Projects

Industry expert Mark Harris just published an article on the Altium blog about “High Level Programming Languages for Embedded Projects”. The article includes a thorough analysis of the topic, in which .NET nanoFramework plays a key role as an excellent framework for embedded projects.

A copy of the original article, published in June 2019, follows for your convenience.

«In today’s vastly growing market, employees of small businesses, small divisions of larger companies, and especially freelance engineers are being asked to fill more roles than ever as part of their job responsibilities. Often, an electronics engineer will not only design the schematic and circuit board, but also may be required to develop embedded firmware for it, and build computer, web, or mobile software. On the other hand, there are web/mobile/desktop developers who branch out into designing and programming hardware because they are passionate about electronics.

With the additional responsibilities, there can be a substantial shortage of time needed to complete projects punctually. With low volume or prototype projects, this might be two-fold to keep the total project costs down. C and C++ are fantastic programming languages, but if they’re not your primary skill/role, they aren’t necessarily the fastest languages to use for developing firmware. Embedded development can have a significantly steeper learning curve than desktop, web or mobile software development, especially when interacting with hardware peripherals and setting registers. For complex projects, C/C++ can get intricate quickly, making debugging and maintaining code expensive components of the total project cost.

The rise of open source hardware abstraction layers and libraries such as Arduino, MBed, and even vendor specific libraries such as LPCOpen has made developing embedded systems a lot easier. These libraries can be utilized instead of an expensive toolchain, which offers a compiler and an extensive range of libraries, such as Keil or IAR. Keil, IAR, and other commercial tools have fantastic compilers; however, for a startup or a small business, they might be out of their initial budget before the project has become profitable. These abstraction layers and libraries can substantially speed up firmware development, however, for a new or infrequent firmware developer, these still have the considerably steep learning curve of C/C++. Furthermore, they may lack sufficient debugging features, such as stepping through the code line by line with an attached debugger with full state inspection, as they are web-based or use a limited development environment.

Getting a product to market quickly can significantly impact sales volume. Conversely, if you’re only building a one-off or doing a minimal production run, every hour spent developing firmware can dramatically increase the cost-per-unit of the device. The same goes for making a proof-of-concept device you might take to an investor, to upper management or, as a freelancer, to your client. The longer you spend writing firmware, the higher the cost and project risk become.

Running a high-level language such as C#, Java, Python, LUA, or even JavaScript can markedly reduce development time for both simple and very complex projects. Higher level languages handle memory management and protection (overflow of arrays, for example), and some languages, like C#, provide an incredibly comfortable debugging experience. Not to mention, finding tutorials for the general language features of these languages is typically easy given their extensive user communities. Debugging capabilities are especially important for an inexperienced embedded developer, as debugging a project could be more time consuming than actually writing the code if the debugging tools are sub-par.

ARM Cortex processors are incredibly powerful at economical prices and come with plenty of RAM, flash space, and high clock speeds. You can pick up an ARM Cortex M0+ which runs code at around three times the speed of the old (23 years!) ATMega AVR that is popular with the Arduino community for about a third of the price at the same volume. These characteristics have led to their enormous popularity in the industry and amongst hobbyists alike. While ARM Cortex-based processors can be much easier to debug than older devices, potentially offering tremendous time savings, they can be more complex to write code for, even with libraries like MBed and Arduino’s ARM ports.

If you spend a little more money on the microcontroller (just a couple of extra dollars), you can get some truly powerful systems that are more than capable of running higher level languages like Java, C#, Python, LUA, and Javascript. These languages are likely to be more familiar and simpler to use, and therefore offer shorter development times for inexperienced or non-embedded developers. Looking back to the low volume/prototype use case, an extra $1 to the per-device cost won’t break the budget, as it could be much cheaper than even a single day of additional firmware development time. With the slightly more expensive microcontroller, you can run .NET nanoFramework and develop in C# with the full Visual Studio debugging experience, complete memory management, and ultra powerful language features that notably reduce the amount of code to be written (and debugged!)

There are some drawbacks to running a high-level language on a microcontroller or microprocessor compared to writing C/C++ code. Depending on your project, these drawbacks could be a significant problem or have no impact at all.

  • Execution Speed: High-level languages will typically run on an interpreter rather than compile to native code. If you’re building a real-time system or need incredibly exact timing, then this rules out high-level languages from the start. However, this won’t be an issue if you consider a five to ten millisecond response time acceptably fast. Some languages support running native C, C++, or assembly code along with the high-level language code, which may mitigate this issue almost entirely.
  • Code Size: Because you need to include the interpreter and potentially a large number of library functions and features, the code size can be substantial before you even blink an LED. Compiled code size typically won’t be an issue as a microcontroller variant with more flash space is not much more expensive.
  • Memory Usage: Again, because of the interpreter you’re likely going to be using much more memory than the same functionality implemented in a language which compiles down to a native format. Also, a processor variant with more memory isn’t much more expensive.
  • Hardware Features: You’re limited to the hardware peripherals on your device that the community or project team have implemented for the platform you’re running. If your device has Bluetooth hardware, but the language and libraries don’t support Bluetooth, you might not be able to use it at all.
  • Supported Microcontrollers: Only specific microcontrollers/microprocessors are supported by each platform, as the platform code generally needs to be ported to each new target. You probably need to decide on a programming language before you start hardware development to know which processor you’ll be using in the project.

That being said, there are some considerable advantages to running a high-level language on a microcontroller too.

  • Development Time: The time to write an application will typically be shorter, especially for an inexperienced developer. The amount of code that needs to be written is reduced, making it easier to maintain.
  • Debugging Experience: This doesn’t apply to all platforms/languages, but those that do support debugging over USB will typically offer a good developer experience. This allows the developer to inspect variable values, step through the code line by line, and control execution. You can do this with C/C++ on an ARM Cortex processor using a hardware debugger (Single Wire Debug – SWD), but you’ll need to be using Eclipse or another desktop IDE rather than Arduino, MBed or any web-based IDE.
  • Memory Management: Correctly managing memory can be difficult, especially in complex projects. High-level languages take care of this for you.
  • Software Feature Availability: In contrast to the lacking hardware features, software features such as graphics on a screen or networking might be very well supported, linking back to Development Time.
  • Code Sharing: If you’re developing a web service to work with the embedded device in the same language, you can probably share code for objects and business logic between the two.

If you’re new to embedded development or don’t have any programming experience and feel as though you have been thrown in the deep end, a high-level language can save your project. Those languages are easier to pick up, get the job done faster, and allow for fewer bugs. The tutorials available for them running on a desktop environment are usually wholly applicable to the embedded environment, so there is no shortage of educational material.

I feel C# via the .NET nanoFramework (.NET nF) is one of the most powerful languages you can run on a microcontroller. This is the community successor to Microsoft’s .NET Micro Framework, which is no longer maintained. The .NET nF allows interop, which means you can run native code on the microcontroller for processor intensive or real-time functions.

Currently, the cheapest and most resource-constrained microcontroller that .NET nanoFramework runs on is the STM32F091, at a bit under US$2.50@100qty depending on your distributor and exact model. That gives you a 32bit 48MHz processor with 128kB of flash and 32kB of RAM. By comparison, an ATMEGA328 will set you back around $1.10 to $1.25 depending on package and distributor for an 8bit 20MHz processor with 32kB of flash and 2kB of RAM. Compared to the Arduino IDE, Visual Studio has the highest market share of any development environment and is incredibly powerful. The .NET nF integrates fully with Visual Studio, including the free community edition, so you can have the full debugging experience over USB, allowing you to catch exceptions, step through code and inspect (and change!) object values.

C# is one of the top five programming languages in use today, and arguably the most used in professional environments in all of web, desktop, and even mobile. So there’s an excellent chance a developer on your team has more experience with C# than with C or C++. You can likely use data classes written for a web or desktop application directly in an embedded project if they are both written in C#.

While certainly not lacking in features, the .NET nanoFramework doesn’t have the complete functionality of the ‘full’, ‘standard’ or ‘core’ Common Language Runtime (CLR). The heavily constrained resources of a microcontroller relative to a full computer mean some features are not feasible to implement. The .NET nF community is primarily working to implement mscorlib and the Universal Windows Platform (UWP) features. In addition to the aforementioned, some language/library features that I feel will save a large amount of time in an average project are:

  • Very easy string manipulation and handling.
  • Parsing and building XML and JSON.
  • Accessing device storage (SD card and flash memory devices)
  • Networking (direct sockets, web requests, etc.)
  • WiFi (with external modules, or directly on an ESP32)
  • Hardware ‘drivers’ and other libraries available via NuGet

Using a language like C# via the .NET nanoFramework can greatly accelerate project development for IoT devices, sensors, wearables, and many other devices. While you couldn’t build a three-phase brushless motor controller with C# due to the timing constraints, the vast majority of projects can benefit from its rapid development pace. 

Upcoming projects on this blog will make good use of the .NET nanoFramework and serve as getting started guides and real-world project examples. While it’s a fantastic high-level language to use, there are many other languages you can run on a microcontroller. If you’d like to see some projects or articles focusing on them, let me know in the comments below!»

Check it out and join our Discord community for discussion.

Deployment image generator

The nanoFramework Visual Studio extension (both VS2017 and VS2019 versions) just got a new improvement: the ability to generate a “deployment image”. And what the heck is a “deployment image”, you ask? Let me explain in a bit more detail.

To run a C# application, a nanoDevice must have the following bits in its storage (which is flash memory on all current target platforms):

  • The nanoBooter, responsible for performing the cold boot of the CPU and providing both a means to perform updates and a safe place to fall back to when the CLR is messed up for some reason. This applies unless the platform has to use its own bootloader, which is the case with the ESP32 and the recently added TI CC3220SF.
  • The nanoCLR, which comprises the IL execution engine, the CLR itself and the native parts of all the class libraries that the device supports.
  • The configuration block. Optional and only required for network-enabled devices, to store network-related settings: specifically, network configurations, certificates and Wi-Fi profiles.
  • The PEs (portable executables) of the managed application and of the assemblies it references, like class libraries or others.

All of those are made available at different times and flashed using different tools.

  • nanoBooter and nanoCLR are made available after compiling the nf-interpreter or by downloading them from our Bintray repository. These are flashed using the appropriate tool for the platform: the nanoFramework flasher tool for ESP32, ST-LINK Utility or DfuSe for STM32, and Uniflash for TI.
  • The configuration block is written by the C# application itself or by using the Network Configuration dialog box available in the Visual Studio extension (a sketch of the first option follows after this list).
  • The PEs of the managed application (and referenced assemblies) are available upon a successful build of a Solution in Visual Studio. Those are deployed to the nanoDevice when hitting the “Deploy” option for the C# project or when starting a debug session in Visual Studio.
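As mentioned in the configuration block item above, the application itself can write that block. A minimal sketch of the idea, assuming the Wireless80211Configuration API from the System.Net class library and placeholder credentials, looks like this:

```csharp
using System.Net.NetworkInformation;

public static class NetworkSetup
{
    public static void StoreWifiProfile()
    {
        // Grab the first Wi-Fi configuration slot exposed by the device
        // (assumes a network-enabled target with the System.Net class library).
        Wireless80211Configuration wifi =
            Wireless80211Configuration.GetAllWireless80211Configurations()[0];

        // Placeholder credentials: replace with the real network details.
        wifi.Ssid = "my-network";
        wifi.Password = "my-password";

        // Persist the settings to the device's configuration block.
        wifi.SaveConfiguration();
    }
}
```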

If one is using nanoFramework to deploy to a couple of boards, the above is quite all right and works perfectly. Now, if you need to commission several boards for a production run, or for a field test using a few dozen boards, things start to get complicated and time consuming. You’ll need to use at least a couple of different tools and possibly connect and disconnect each board a couple of times.

There is already a very handy feature: the generation of a binary file bundling all the PEs (application and referenced assemblies), which can be used to flash the “application” onto a target that has already been loaded with nanoBooter & nanoCLR. Yes, that’s the .bin file with the assembly name that you can find in the project output folder after a successful build.

nf-app-binary

With the recent improvement, this was taken to the next level: generating a complete deployment image. Yes, you read that correctly, COMPLETE. With all of the above: nanoBooter, nanoCLR, PEs and configuration block.

The nanoBooter, nanoCLR and configuration block are dumped from the connected target’s flash and cached (because that’s a lengthy operation). When the deploy operation occurs (and the PEs are generated and made available), the Visual Studio extension cooks everything together and spits out a nice binary file with the full package.

nf-deployment-image-bin.png

This currently works for STM32 targets only. Support for other platforms where this programming approach is suitable will follow.

The binary file can be flashed into the device using ST-LINK Utility or, if you’re working with an mbed-enabled ST board (which all recent DISCOVERY and NUCLEO boards are), you can just drop it onto the disk drive that shows up in Windows Explorer and it’s burned into the flash in a couple of seconds. How cool is that?

Enough technicalities, let’s see this working! 🙂

Enable the deployment image generation in the (new) settings dialog available in Device Explorer.

nf-enable-deployment-image-generator.png

You can optionally include the configuration block coming from the “seed” device. If you don’t, the region corresponding to the configuration block will be flashed with empty content.

Just below this, you can find the settings for the flash dump file cache. The options are caching to the project output folder (which is the default) or to a location that you can specify.

All the above settings are persisted in your Visual Studio user account.

When you hit deploy or start a debug session, the magic happens, and the image is generated.

Something worth noting: the flash contents need to be dumped when they are not available in the cache, and this can take a few seconds to complete. So be patient, and know that this will happen only once for each device.
If you have a lot of devices and work on several projects at the same time, it’s probably wiser to use a central cache…

Now that we have the deployment image, let’s check it out.

Fire up ST-LINK Utility and connect the board. I’m using an STM32F769I-DISCO for this example.

nf-st-link-utility-connected

The board is already flashed with nanoBooter and nanoCLR, and a debug session has been started previously with the Blinky application (meaning that the required PEs are there too). So it has everything needed to blink the LED, which it’s doing.
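For reference, a Blinky application of this kind is just a handful of lines of C#. The sketch below assumes the Windows.Devices.Gpio API implemented by nanoFramework, and the pin number is a placeholder that depends on which pin drives the board’s user LED:

```csharp
using System.Threading;
using Windows.Devices.Gpio;

public class Program
{
    public static void Main()
    {
        // Placeholder pin number: use the GPIO pin wired to the board's user LED.
        const int LedPinNumber = 5;

        GpioPin led = GpioController.GetDefault().OpenPin(LedPinNumber);
        led.SetDriveMode(GpioPinDriveMode.Output);

        GpioPinValue state = GpioPinValue.Low;
        while (true)
        {
            // Flip the pin state to blink the LED.
            state = state == GpioPinValue.Low ? GpioPinValue.High : GpioPinValue.Low;
            led.Write(state);
            Thread.Sleep(500);
        }
    }
}
```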

[wpvideo JCaz3qT8]

Next, let’s load the deployment image binary file.

nf-st-link-utility-open-file

ST-LINK Utility has a nice comparison tool that we’ll use to compare the contents of the deployment image binary with what’s in the device’s flash.

nf-stlink-compare.png

We need to compare the entire flash content, so next we’ll enter the flash start address, which is 0x08000000 for this device.

nf-stlink-compare-start-address.png

The file contents are loaded next and the compare operation starts. I’m guessing that ST-LINK is very thorough on this because the operation takes a lot of time to complete.

nf-stlink-compare-progress.png

And voilà, if there were any doubts, the machine outputs its conclusions and confirms that both contents are identical!

nf-stlink-compare-success

Last test with this: use the disk drive flashing method.

Start by wiping the flash content using the “Full Chip Erase” command in ST-LINK Utility toolbar.

nf-stlink-full-chip-erase

After this you can disconnect the device.

Open Windows Explorer and locate the drive.

nf-mbed-disk-drive.png

Now open the folder where the deployment image file is located (that should be the project output directory) and simply drag and drop it onto the drive representing the connected target. The upload takes a few seconds. After the operation completes, the MCU resets, nanoBooter loads nanoCLR and finally the managed application is executed.

[youtube https://www.youtube.com/watch?v=jPheVSzfxu0&w=560&h=315]

This can be useful in many situations, like in production runs to load a test application, testing a bunch of boards, keeping a snapshot of a deployment, etc.

If you have an idea on how any of this can be improved, let us know, just send us a Tweet or join our Discord community. 😉

Have fun with nanoFramework!

To deploy, or not to deploy, that’s the question…

nanoFramework class libraries are composed of a managed part (written in C#) and the corresponding native counterpart (written in C/C++) that is part of the firmware image running on the target.

As part of the usual development cycle there are improvements, bug fixes and changes. Some of those touch only the managed part, others only the native one, and others both. If the interface between them is not in sync, bad things happen.

In an effort to improve code quality and make developers’ lives easier, two mechanisms have been implemented along the way. One was to have version numbers on both parts (that have to match); the other was an improvement introduced in the deployment workflow that checks whether the versions of the assemblies being deployed match the ones available on the target device. Pretty simple and straightforward, right?

Despite the simplicity of the mechanism, it has some drawbacks. Occasionally it gets confusing because of apparent version mismatches, caused by the fact that the develop branch is constantly updated and the versions keep being bumped, and also because the preview versions of the NuGet packages are always available in the nanoFramework MyGet feed but not always on NuGet.

The other aspect that is not so positive is the need to keep the versions in sync, which means that whenever the class library assembly version changes, the native version must be bumped too. And that requires flashing the target device again, most of the time without any real reason other than a version number change.

This is changing for the better!

As of today, the native version number is independent of its managed counterpart, meaning that it will change when it makes sense and not just because it has to mirror the managed version. Some of you are probably thinking right now that this is a step back and has the potential to cause trouble with obscure version mismatches and deployment hell. This won’t happen because of something else that we are introducing in all class libraries (and the base class library), an Egg of Columbus, actually: a new assembly attribute on each of them that declares which native version is required. Simple but effective.

This will be used from now on by the deployment provider to validate whether the target has all the required assemblies and versions to safely deploy the application.

Because these changes introduce several breaking changes, make sure you update the VS extension (2017 and 2019 versions available) and the NuGet packages in the Solutions you’re working on, for a smooth transition.

Happy coding with nanoFramework!

Support for Visual Studio 2019: check!

Last week we published the nanoFramework extension for Visual Studio 2019, right on time for the official launch event on April 2, 2019. 😉

With Visual Studio being a cornerstone of the nanoFramework development experience, we wanted to show not only our commitment to our growing community, by enabling them to keep up with the Visual Studio release schedule, but also our appreciation to the Visual Studio team for providing us with such an awesome tool.

With the release of the VS2019 version, the extension for VS2017 enters maintenance mode. This means that we won’t be adding any new features to it: only bug fixes, plus any fundamental changes that are required to keep up with new features that cause breaking changes.

As for the VS2019 extension, lots of good stuff is in the queue. We have plans to improve the network configuration dialog, provide target updates right from Visual Studio and add support for Unit Tests.

Now go watch the launch event, install the nanoFramework extension as soon as you have VS2019 set up, and stay tuned for exciting developments.

Follow us on Twitter, subscribe to our YouTube channel and rate the extension on the Marketplace. Do send your comments and suggest new features.

To wrap up, the usual pitch: feel free to contribute code, samples or any productive work to help the project and the community.

Have fun with the new VS2019 and nanoFramework!