Don’t Let the Chip Shortage Dictate Your Product Strategy
Microcontrollers and Microprocessors: The Brains Behind Smart Devices
All smart devices leverage some kind of central processor to act as the “brain” of the device. Broadly speaking, most of these processors fall into two categories: microcontrollers and microprocessors. A microprocessor is an integrated circuit (IC) that contains only a central processing unit (CPU). A microcontroller, on the other hand, packages a CPU, memory, and input/output (I/O) peripherals all on the same IC.
Since microprocessors have less “stuff” on the same chip, they typically have a more powerful CPU and are able to run modern operating systems such as Linux, Windows, or macOS.
Microcontrollers have less powerful CPUs and are often based on older technology. They don’t have the horsepower to run a modern operating system. Instead, they’re driven by a small package of application-specific code.
The Chip Shortage Is Fueling a Push to Abandon Microcontrollers
During the chip shortage, applications processors are getting much more love from manufacturers. They are higher-margin devices that live at the leading edge of technology and are sold into some of the most popular (and expensive) devices out there (smartphones, laptops, smartwatches, etc.). This has led some to suggest that we should abandon microcontrollers in favor of more widespread use of applications processors.
What this viewpoint misunderstands is that not every product needs the horsepower of an applications processor. In fact, in many products a microcontroller will outperform an applications processor on the metrics that matter most. If you need real-time control, nothing beats the deterministic, low-latency response of a microcontroller. These less expensive parts also keep costs down in a huge number of day-to-day devices. This is especially important in these times of rising inflation, when being able to keep costs low while still delivering a quality product is a key way to build a loyal customer base.
How to Optimize User Experience If You Choose Microcontrollers
So, how do you stay “cutting edge” while still utilizing low-cost, older microcontrollers? The key is in how you apply them. Much of this comes down to three things: your peripherals, your software stack, and your user experience.
Peripherals are the things your microcontroller is controlling, or reading data from. By applying the latest advancements in sensor technology, you can make your customers feel like the future is in their hands.
Your software stack also plays a huge role here. By building on a highly portable real-time operating system (such as Zephyr), you can react to changes in chip availability without having to rewrite all your firmware.
The final, and arguably most important, consideration is the user experience. Just having a “pretty” app isn’t enough. You need a holistic approach that creates a seamless experience between the hardware and the app. Rich but simple data displays, painless onboarding, and custom metrics are just a few ways to give your customer a cutting-edge experience without needing to run on expensive hardware.
Ultimately, You Need To Choose the Right Chip for the Job
In the end, the main goal is to build the right device for your customer, one that provides the best user experience at the right price point. That may be something running embedded Linux on an expensive applications processor, or it could be a device running Zephyr on a low-cost microcontroller. Neither is “better” than the other, as long as each is applied appropriately. This advice cuts across industries, from consumer goods to industrial products. Every one of your customers deserves a product that both delights and excites them.
Interested in learning more from our IoT hardware engineering team? Visit our blog for more insights.