When computers were first catching on, computer security as such did not exist. There was no need for it. Back then, hackers were simply people who knew the systems inside out and were not afraid to use their knowledge – but still with good intentions.
Later, when crackers found ways to exploit vulnerabilities, the need for security arose. It was not easy, as security was not built into the systems but was rather handled as an add-on. First make the system work, then – hopefully – make it secure. Unfortunately, this was a very bad approach, full of pitfalls. Security must be interwoven not only into the design, but also into the implementation of a computer system! Every developer must be aware of the implications a small mistake can bring and the gates it can open for the bad guys to march in.
But we have learned from our mistakes.
Or have we?
50 years ago, there were no airbags in cars. No ABS, no ASR, no multimedia systems, no automated parking, no tire pressure monitors. No computers to hack. When computers were introduced, safety was the main concern; security did not play a vital role. After all, “who would want to compromise the safety systems of a car?” Since then, cars have turned into computers on wheels, and a new target has emerged for the bad people.
And the same is happening with the Internet of Things.
It is very convenient to have small microcomputers trying to satisfy your needs like little servants. You want to know the weather? Just ask your smart speaker. You want to do more for your health? Wear a smart watch. You want to make sure that you did not leave the lights on when you left home? Install smart lights. You want to arrive home to a nice cozy warm house? Use smart thermostats, notified by your phone when you are almost home. And we could go on and on…
What these devices have in common is that they run code. Thus, they have bugs. And therefore, also vulnerabilities. Not just the devices, but the backend as well.
The developers who craft the code for the devices and the backend may not be security specialists. And this is fine. But they have to start listening to the specialists and try to acquire a security mindset by attending specialized role-based training from professionals.
Let us give you a few examples:
A lightbulb… what could possibly go wrong, right? But today it is more a smart light node than just a lightbulb. What can the attackers achieve with it?
Just think about it. It’s the Internet of Things. Lightbulbs are connected to a network. An attacker can use wardriving – or even warflying with a drone – to hack the smart lights’ communication protocol. And as the controller is connected to the local network, it is trivial to pivot further and compromise any computer connected to the same network.
And lights are not the only Internet of Things devices that have gotten smart. While some people complain that they cannot switch off the recording indicator lights of their cameras from the settings, there is an obvious security trade-off behind it. Understandably, you may not want an intruder to see when they are being recorded. On the other hand, a hard-wired indicator is the only way to guarantee that a hijacker cannot use the camera without you noticing it. This is a correct decision that ensures the right balance between security and privacy.
Then there are the smart locks. Yes, it is reassuring to be able to check on the way to work whether you forgot to lock the door – and it is also convenient to let your friends in while you are stuck in traffic driving home. But whatever you can do, a well-prepared offender can do as well. And you would not want burglars to let themselves in through a software vulnerability, would you?
But there are cases where no exploits or hacking are needed at all. Using a device exactly as designed can also give enough cause for concern. Apple’s AirTags, for example, are ideal for stalking – secretly spying on somebody’s private life via their device. They are small, can easily be hidden on a victim, and unlike any other similar product, the Apple ecosystem guarantees successful tracking.
Even RFID chips built into passports can be abused – for example, by terrorists to detonate a bomb in the presence of a given nationality. Or there was the case where soldiers exposed nuclear weapons secrets by using learning apps whose data was accessible to anybody.
And speaking of stalking, did you know that one can track a car via its tire pressure monitoring system (TPMS)? To make things worse, TPMS is mandatory equipment on new cars sold in the US and many other countries.
And so on, and so forth.
Remember, security is not something you can just slap onto your product right before releasing it. It should not be an afterthought; it has to be built in from the beginning of the software development life-cycle.
As Murphy’s law of coding states: every non-trivial program has at least one variable, one branch, one loop – and one bug. To minimize the number of bugs, developers and architects must follow best practices. This not only reduces the likelihood of bugs, but also their consequences. Security by design is important, but keep in mind: even the best-designed system can be vulnerable if it contains a single exploitable implementation mistake. A bug.
In line with the above, not only an Internet of Things device, but any system is only as secure as its weakest link. And there is an obvious consequence of this: it is the average preparedness of all developers that counts. They all must understand the reasons for secure coding and the implications of neglecting it. Secure coding is a team sport.
But don’t worry, we are here to help. Specifically for the Internet of Things, there is a hands-on Secure coding in C and C++ course, which comes in several variants (such as ARM or Intel); check out all our courses related to C/C++ in our catalog.