Future Electronics – Graphics Display-Based Human-Machine Interfaces: the New Capabilities of the Latest MCUs

By: Justin Palmer, Vertical Segment Director, Embedded and Healthcare, Future Electronics

The evidence from consumer research, analyst reports, OEM customer feedback and the forecasts from semiconductor manufacturers all points in the same direction: few embedded designers will be immune over the next five years from pressure to dramatically enhance the capabilities, mode of operation and appeal of the Human-Machine Interface (HMI) in their products.

Although the move to create more graphical and touch-sensitive interfaces was largely initiated in devices such as smartphones and tablets, the demand for such a rich user experience has expanded far beyond the consumer market. In fact, products for the industrial, automotive, medical, military and aerospace markets are all facing the same requirement. Several factors are driving the revolution in HMI design:
• Sensors, processors and wireless devices have become much better and much cheaper at the same time, greatly enhancing systems’ ability to measure and track their own operation.
• A generational shift has taken place in the user base which requires product manufacturers to meet the expectations of millennials rather than baby boomers.
• A color TFT display costs less now than a monochrome STN display cost just five years ago. Touchscreen overlays have also become both better and cheaper, with capacitive touch-sensing technology now widely available, and offering a better and more interactive interface than older resistive technology options.
• Companies have discovered the scope to improve efficiency and reduce operating costs when equipment has an easy-to-use and intuitive interface. They benefit both from a lower requirement for training and from a reduced incidence of human error.

In the past, redesigning an embedded product’s HMI to feature more and better graphical content would have been out of the question for systems based on a microcontroller. There used to be a sharp divide between, on the one hand, embedded systems based on a microprocessor with sophisticated graphics capability and a rich operating system such as the Windows® or Linux® platforms; and on the other hand, those based on a microcontroller, often with no operating system and typically driving nothing more complex than a segment LCD.

The ground is shifting fast, however, and the improving capabilities of MCUs give design engineers hope that they can stay one step ahead of their customers’ changing expectations without having to abandon their familiar and productive MCU platform. So how much scope are MCU manufacturers offering their users to dramatically improve the HMI’s functionality?

How and Why the HMI is Evolving
Before looking at how system designers might implement an improved HMI, it is worth understanding why and how the HMI needs to be improved.

The fundamental underlying cause of the shift in HMI design is the development of new and improved semiconductor technology. Sensors, RF transceivers and microcontrollers have become so powerful and yet so cheap that it is possible for OEMs to embed them in greater numbers than ever, and in more devices than ever. In factories, this enables factory-automation systems to track all important parameters of both the manufacturing equipment and the manufactured product in real time, at any point in the production process. In medicine, it enables health professionals to remotely monitor a patient’s condition constantly, and to set alerts when critical thresholds are crossed.

Figure 1. The Tesla Model S dashboard – a response to modern users’ preference for graphics-rich control interfaces. (Image credit: Steve Jurvetson under Creative Commons 2.0 license)

The result is that vast amounts of data are being generated and transmitted to control units. As the Internet of Things gains traction, this data is increasingly being hosted online in the cloud, where it may be aggregated and analyzed, and the results of the analysis displayed on any internet terminal anywhere. So the extent and types of data available to users are changing rapidly.

At exactly the same time, the make-up of the user base, and in particular of the workforce, is changing, as baby boomers go into retirement to be replaced by ‘millennials’, the generation starting with people born in the early 1980s, and by later generations. These people are digital natives, accustomed since childhood to interacting with computers and displays (see Figure 1).

Interestingly, the preferences and working style of millennials are different from those of baby boomers. Whereas baby boomers expected to be trained to implement a process, and measured on their execution of it, millennials expect to understand a system, to track it with real-time data, and to make their own decisions based on the data rather than following a set process.

So now we have masses of data generated by sensors, the ability via the internet to communicate and share it in real time, and the people with the native ability to process and use it. Clearly, simple segment LCDs and push-button inputs do not fit in with this mode of interacting with complex equipment.

Displays Must Present Menus of Data to Users
The key factor is the availability of Big Data, and the extraordinary value which can be derived from its use. In fields as diverse as intensive medical care and predictive maintenance of machines, it is the patterns discoverable in multiple streams of data or multiple parameters which provide the most valuable insights.
And humans can most easily and most quickly discover these patterns visually; we learn more about complex data sets from diagrams, graphs and charts than we do from hundreds of lines of letters and numerals.

To enable millennials to do what they do well – making intelligent decisions based on rich, real-time data – embedded devices should present information graphically, and allow the user to interact with it intuitively. Systems, then, need graphics capability and must support touch-sensing interfaces.

The most sophisticated graphical systems, capable of handling video streams in high definition, for instance, will run on a high-performance MPU such as the i.MX® family from NXP Semiconductors, based on ARM Cortex®-A processors operating within a Linux® or Android™ environment. Such systems are complex and expensive in both software and hardware terms, and present considerable implementation challenges for those not versed in development on a rich Operating System (OS).

More and more embedded systems, however, are based on a microcontroller platform. And of course MCU users would always, if possible, prefer to remain as MCU users rather than migrating to an MPU. The MCU is familiar, it supports the C language for application-code development, and it enables the reuse of legacy systems running on the same platform. In short, the upheaval involved in migrating from an MCU to an MPU can be immense, but is potentially avoidable.

So how closely can a system with an MCU architecture emulate the sophistication and performance of an MPU-based HMI?

Today, STMicroelectronics promises users of its 32-bit STM32F7 MCUs, which are based on an ARM® Cortex®-M7 processor core, that they can support display screens at up to XGA resolution (1024 x 768 pixels). The STM32F7x7, STM32F7x8 and STM32F7x9 series all include an on-board TFT display controller and JPEG image codec (see Figure 2). All STM32F7 MCUs also include ST’s Chrom-ART Accelerator™ for graphics, to enable high-speed rendering of graphics without any overhead on the main processor. This graphics accelerator creates content twice as fast as the core alone can. As well as providing for fast rendering of raw 2D data, the Chrom-ART Accelerator also supports extra functions such as image format conversion and image blending, giving the MCU user the capability to implement sophisticated graphics effects.

Figure 2. STMicroelectronics’ 32F769IDISCOVERY development board for the STM32F7x9 series of MCUs includes a 4” LCD touchscreen. (Image credit: STMicroelectronics)

On-board Flash memory of up to 2Mbytes and 512kbytes of SRAM provide plenty of capacity for graphics data storage and the scratchpad memory required by the Chrom-ART Accelerator. A MIPI-DSI interface on the STM32F7x9 series MCUs is also useful in graphics-rich applications, as it provides a direct, high-bandwidth connection to compatible display panels.
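As a rough illustration of how little application code such hardware acceleration demands, the sketch below uses the STM32 HAL’s DMA2D driver – the peripheral behind the Chrom-ART Accelerator – to fill a rectangle in an RGB565 framebuffer while the Cortex-M7 core remains free for other work. The framebuffer address, display dimensions and handle are illustrative placeholders rather than values taken from any particular board.

/* Minimal sketch: rectangle fill via the Chrom-ART Accelerator (DMA2D)
 * using the STM32F7 HAL. Framebuffer address and display size are
 * illustrative placeholders. */
#include "stm32f7xx_hal.h"

#define LCD_WIDTH    1024U
#define LCD_HEIGHT   768U
#define FRAMEBUFFER  0xC0000000U        /* e.g. external SDRAM; board-specific */

static DMA2D_HandleTypeDef hdma2d;

void fill_rect_rgb565(uint32_t x, uint32_t y, uint32_t w, uint32_t h,
                      uint32_t argb8888_color)
{
    hdma2d.Instance          = DMA2D;
    hdma2d.Init.Mode         = DMA2D_R2M;            /* register-to-memory fill */
    hdma2d.Init.ColorMode    = DMA2D_OUTPUT_RGB565;
    hdma2d.Init.OutputOffset = LCD_WIDTH - w;        /* pixels skipped per line */

    if (HAL_DMA2D_Init(&hdma2d) != HAL_OK) {
        return;                                      /* error handling omitted */
    }

    uint32_t dest = FRAMEBUFFER + 2U * ((y * LCD_WIDTH) + x);  /* 2 bytes/pixel */

    /* The DMA2D engine writes the rectangle; the CPU only waits here, or it
     * could continue and use the transfer-complete interrupt instead. */
    HAL_DMA2D_Start(&hdma2d, argb8888_color, dest, w, h);
    HAL_DMA2D_PollForTransfer(&hdma2d, 100);
}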

Other MCU manufacturers provide similar levels of graphics and display controller capability in their high-end devices. Microchip’s PIC32 MX3 and MX4 series are intended for embedded applications with a high-performance graphics display. They support TFT and OLED display screens up to WVGA resolution (800 x 480 pixels), and integrate Microchip’s touch-sensing control technology.

Microchip offers particularly good development support for graphics applications, providing a free graphics library as well as its intuitive, easy-to-use Graphics Display Designer development tool.

NXP Semiconductors’ LPC5460x and LPC54S60x families of ARM Cortex-M4-based MCUs are also optimized for rich HMI applications. They support a graphics LCD at resolutions up to 1024 x 768, and offer options to connect and manage external QSPI Flash memories to store large images or additional code. NXP also provides a strong ecosystem, including graphics libraries such as Segger emWin, supplied free of charge.
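To give a sense of what application code looks like once such a library is in place, the short sketch below draws a simple status screen with emWin – a title and a bar gauge – on the assumption that emWin has already been configured for the target’s display controller. The screen content and values are purely illustrative.

/* Minimal emWin sketch: a title and a simple bar gauge. Assumes emWin has
 * been configured for the target display (LCDConf.c / GUIConf.c). */
#include "GUI.h"

void hmi_init(void)
{
    GUI_Init();                       /* bring up the GUI stack, once at start-up */
    GUI_SetBkColor(GUI_BLACK);
    GUI_Clear();
}

void hmi_draw_status(int fill_percent)    /* 0..100 */
{
    GUI_SetFont(&GUI_Font24_ASCII);
    GUI_SetColor(GUI_WHITE);
    GUI_DispStringAt("Pump 3 - Flow Rate", 20, 20);

    /* Bar gauge: an outline plus a fill proportional to the measured value */
    GUI_DrawRect(20, 60, 220, 90);
    GUI_SetColor(GUI_GREEN);
    GUI_FillRect(22, 62, 22 + (fill_percent * 196) / 100, 88);
}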

Cypress Semiconductor also has a long heritage in the field of graphics display control – it is a leader in the market for highly integrated controllers for vehicle instrument clusters, which today often feature 2D or 3D graphics display screens.

For industrial equipment and home appliances, Cypress’s FM4 family of MCUs offers a wide choice of features and capabilities. In particular, the S6E2D series of ARM Cortex-M4 MCUs, part of the FM4 family, is aimed at applications containing a full-color TFT-based graphical display; its graphics engine is derived from that used in the Traveo range of MCUs for instrument clusters.

Offering 512kbytes of video RAM as well as the graphics engine, the S6E2D supports complex image overlay, mirroring, scaling and image movement with minimal overhead on the Cortex-M4 core. It can implement sophisticated and impressive graphics at much lower price points than competing solutions.

New System Requirements in Move to Graphics Displays
The good news, then, is that many MCU manufacturers offer existing users a migration path up to their high-end devices, through which they can implement very sophisticated, full-color graphics displays, even supporting some moving content, at resolutions up to XGA (1024 x 768). Extremely sophisticated display-based HMIs which meet the needs of millennial users can now be implemented without requiring a wholesale move to an MPU-based architecture and a full-featured operating system.

But designers implementing a sophisticated graphics display for the first time will find that:
• The complexity of their system increases dramatically
• Timing windows shorten and scheduling creates considerable challenges
• The memory requirement scales up hugely, resulting in a need for memory management

There is no question that an embedded application with a sophisticated HMI therefore requires a real-time OS (RTOS) to provide a framework for scheduling and prioritization, and to implement memory management. A wide choice of RTOS options is available, and a system such as FreeRTOS™ is – as its name suggests – free to use, and benefits from ports and board support packages from most MCU manufacturers.
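A minimal FreeRTOS sketch of the scheduling pattern this implies is shown below: a periodic display-refresh task runs at a higher priority than a background sensor-polling task, so the HMI stays responsive even when data collection falls behind. Task names, stack sizes and periods are illustrative assumptions rather than recommendations.

/* Minimal FreeRTOS sketch: a periodic display-refresh task scheduled above a
 * background sensor task. Stack sizes, priorities and periods are
 * illustrative only. */
#include "FreeRTOS.h"
#include "task.h"

static void vDisplayTask(void *pvParameters)
{
    (void)pvParameters;
    TickType_t xLastWake = xTaskGetTickCount();
    for (;;) {
        /* Redraw the HMI here, e.g. by calling into the graphics library */
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(33));   /* roughly 30 frames/s */
    }
}

static void vSensorTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* Sample sensors and queue the results for the display task */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

int main(void)
{
    xTaskCreate(vDisplayTask, "display", 512, NULL, tskIDLE_PRIORITY + 2, NULL);
    xTaskCreate(vSensorTask,  "sensor",  256, NULL, tskIDLE_PRIORITY + 1, NULL);
    vTaskStartScheduler();            /* does not return while tasks are running */
    for (;;) { }                      /* only reached if the scheduler fails to start */
}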

Designers will also need to take advantage of the support that MCU manufacturers provide for third-party graphics design tools. Segger’s emWin design and simulation tool, for instance, is provided free by ST and NXP to users of their MCUs.

Figure 3. Microchip’s 3DTouchPad demonstrates its GestIC gesture-recognition technology. (Image credit: Microchip)

It is also worth noting the trend to enhance the HMI not only with advanced graphics capabilities, but also with gesture control and with improved audio inputs and outputs. Microchip provides interesting capabilities in gesture control with its GestIC® technology (see Figure 3). And in audio user interfaces, XMOS in particular has been doing pioneering work on far-field microphone management, implemented in its xCORE-VOICE™ processors, offering a means to provide voice control of electronic equipment in all environments.
