A researcher has discovered that so-called Smart TVs from Philips suffer from a number of serious security flaws that could allow hackers not only to steal information from attached USB sticks and play pornographic movies as a prank, but also to pilfer authentication cookies that could give them access to viewers’ online accounts.
As Ars Technica reports, the serious security problem was uncovered by Luigi Auriemma of the Revuln security research group.
According to Auriemma, a recent firmware update for Philips Smart TVs enabled a feature called “Miracast”, which turns the TV into a Wi-Fi access point so that nearby computers and smartphones can beam video content to its screen.
Unfortunately, the authentication password for devices beaming over their video content is hardcoded into the Philips Smart TV’s firmware, and no PIN is required to authorise a new Wi-Fi connection.
What is the password, you wonder? Well, here it is…
Some of the consequences of this security screw-up may not be that serious. For instance, it’s easy to imagine how someone could deliberately broadcast a pornographic or otherwise embarrassing video onto a Philips Smart TV, without the permission of the owner. Or they could meddle with the TV’s controls – changing channels or the volume level, for instance – without the TV’s viewers realising what was going on.
All the prankster would need is to be within Wi-Fi range of the television.
But other attacks are more serious, such as the ability to silently exfiltrate data on USB sticks attached to the TV.
Auriemma made a video demonstrating ways in which the flawed Smart TV firmware could be exploited:
The impact is that anyone in range of the TV’s Wi-Fi adapter can easily connect to it and abuse all of the nice features offered by these Smart TV models, such as:
– accessing the system and configuration files located on the TV
– accessing the files located on the attached USB devices
– transmitting video, audio and images to the TV
– controlling the TV
– stealing the browser’s cookies, allowing access to the websites used by the viewer
– a lot more
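The cookie theft in that list deserves particular emphasis. A session cookie is often the only thing tying a browser to a logged-in account, so anyone who copies one off the TV can replay it and be treated as the viewer. A minimal, self-contained sketch of why that works (the cookie name, value, and server logic here are all made up for illustration, not taken from the affected TVs):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

# Hypothetical session cookie, as a website might set it after login.
SESSION = "sessionid=d41d8cd98f00b204e9800998ecf8427e"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server checks only the cookie -- nothing ties the session
        # to a particular device or person.
        if self.headers.get("Cookie") == SESSION:
            body = b"welcome back, viewer"
        else:
            body = b"please log in"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Stand-in for the remote website, run locally for the example.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An attacker who lifted the cookie off the TV can replay it from anywhere
# and be treated exactly like the logged-in viewer.
conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/account", headers={"Cookie": SESSION})
resp_body = conn.getresponse().read().decode()
print(resp_body)
server.shutdown()
```

The point of the sketch is simply that cookie possession equals identity on most websites, which is what makes exfiltrating them from a TV’s browser so valuable.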
As Ars Technica notes, the vulnerability was introduced in a firmware update released by Philips in December last year, and there is no way for users to change the hard-coded password required by nearby devices to access the Miracast network.
Auriemma believes that all 2013 models of Philips Smart TVs are at risk because they use the same flawed firmware.
This revelation of lax security on the part of Philips highlights one of my key concerns about the “internet of things”.
Producers of devices that hook up to the internet must recognise that security needs to be at the top of their design checklist. To produce such devices without paying proper attention to security could backfire when users realise their personal information is being leaked, or their online lives are being put at risk.
Of course, this isn’t the first time that we have seen so-called smart televisions introduce privacy and security concerns.
Last year it was revealed that LG Smart TVs were spying on owners’ viewing habits, and grabbing information about files stored on attached USB devices.
The Wi-Fi Alliance has released a statement regarding the vulnerability reported in certain Philips Smart TVs:
“Wi-Fi Alliance takes security very seriously. All of our specifications and certifications include requirements to support the latest generation of security protections. In the case of Miracast™, the underlying specification requires device-generated passphrases to consist of characters randomly selected from upper case letters, lower case letters, and numbers.
“The recent report of a non-compliant passphrase implementation appears to be limited to a single vendor’s implementation. We enforce the requirements of our certification programs and have been in contact with the company in question to ensure that any device bearing the Miracast mark meets our requirements.”
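For comparison, generating a passphrase that meets the requirement the Wi-Fi Alliance describes – characters randomly selected from upper-case letters, lower-case letters and numbers – is trivial, which makes shipping a hardcoded value all the harder to excuse. A minimal sketch in Python (the length of 8 is an assumption for illustration only; the Miracast specification governs the actual requirements):

```python
import secrets
import string

# Character classes the Wi-Fi Alliance statement says compliant
# device-generated passphrases must draw from.
ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits

def generate_passphrase(length: int = 8) -> str:
    """Return a fresh, device-generated passphrase.

    Uses the cryptographically secure `secrets` module rather than
    `random`, so the passphrase cannot be predicted from prior output.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

passphrase = generate_passphrase()
print(passphrase)
```

Generating a new passphrase per device (or per pairing session) in this way costs nothing at manufacture time, yet removes the single shared secret that made every 2013 Philips set attackable with the same string.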
One comment on “Philips Smart TVs riddled with security and privacy flaws, researcher reveals”
"Producers of devices that hook up to the internet must recognise that security needs to be at the top of their design checklist. To produce such devices without paying proper attention to security could backfire when users realise personal information is being leaked, or putting their online lives at risk."
Well written. Unfortunately they don't teach secure programming to programmers, let alone security in general (it really takes experience to learn, and even then there is the risk of being out of the loop – it takes only ONE mistake or ONE piece of information that is not known to cause a problem). So if they cannot teach even basic stuff (e.g., checking for buffer overflows… which, last I knew, they don't – and if they do, they don't stress it enough) to programmers of software, there is really little chance that firmware writers, or indeed (as you point out) devices in general (and whatever they enable/allow/disallow), will be any better. Scary, but it is a reality.

I would argue one thing to improve the quoted text, though: not just the internet. Security (which includes privacy!) should ALWAYS be considered (and in hindsight I realise you probably meant this, but I am a VERY literal thinker). Think about it: while I find it flawed to suggest turning off IPv6 "just" for "security" – because they should instead suggest the administrator learn IPv6 and its security TOO – the opposite also happens. That is to say, you currently only have IPv4 access, so you only have IPv4 security practices deployed. Suddenly you get IPv6 access and, say, you restart a service – one that might listen on the wildcard of the interface and is already IPv6 capable – or you reboot your server, and now your services are listening on IPv6 but there is NO security plan…. Well, so too can non-Internet -> Internet transitions occur.

On that note, there is also this to consider: theft of a device. In that case the Internet is irrelevant to the security of the device. So they should always keep security in mind, but I'd be very surprised if this ever becomes even a de facto standard, let alone a real standard.