
Proton Is Trying to Become Google—Without Your Data

By Gilad Edelman
The encrypted-email company, popular with security-conscious users, has a plan to go mainstream.

Smarter Homes & Gardens: Smart Speaker Privacy

By Natalie Maxfield

So is your smart speaker really listening in on your conversations? 

That’s the crux of a popular privacy topic. Namely, are we giving up some of our privacy in exchange for the convenience of a smart speaker that does our bidding at the sound of our voice? After all, you’re using it to do everything from searching for music and ordering online to controlling the lights and temperature in your home. 

What is your smart speaker really hearing—and recording? 

Let’s take a look at what’s going on inside of your smart speaker, how it processes your requests, and what companies do with the recordings and transcripts of your voice. 

So, are smart speakers listening in? 

More or less, smart speakers are listening all the time. Each smart speaker has its own “wake word” that it listens for, like “Alexa,” “Hey Siri,” or “Hey Google.” When the device hears that wake word, or thinks it hears it, it begins recording and awaits your verbal command. Unless you have the microphone or listening feature turned off, your device is indeed actively listening for that wake word all the time. 

Here’s where things get interesting, though. There’s a difference between “listening” and “recording.” The act of listening is passive. Your smart speaker is waiting to hear its name. That’s it. Once it does hear its name, it records for a few seconds to capture your command. From there, your spoken command goes to the company’s cloud for processing by way of an encrypted connection.  
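
To make the listening-versus-recording distinction concrete, here is a minimal sketch in Python of how a wake-word loop typically works. It is purely illustrative: the audio stream is simulated with text chunks, and detect_wake_word and send_encrypted are hypothetical stand-ins for the on-device detector and the encrypted upload to the vendor’s cloud.

```python
WAKE_WORD = "alexa"        # each device listens for its own wake word
COMMAND_CHUNKS = 3         # only a short window is captured after the wake word

def detect_wake_word(chunk: str) -> bool:
    """Hypothetical on-device detector: passive listening, nothing is stored or sent."""
    return WAKE_WORD in chunk.lower()

def send_encrypted(audio):
    """Hypothetical stand-in for uploading the captured command over an encrypted connection."""
    print("uploading encrypted command:", " ".join(audio))

def listen(stream):
    """Passively scan incoming audio; record and upload only after the wake word."""
    for chunk in stream:
        if detect_wake_word(chunk):                # the listening -> recording transition
            recording = [next(stream, "") for _ in range(COMMAND_CHUNKS)]
            send_encrypted(recording)

# Simulated audio stream: text chunks stand in for audio frames.
listen(iter(["music is playing", "alexa", "turn on", "the kitchen", "lights"]))
```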

There are exceptions where your command may not go to the company’s cloud for processing at all. Siri on iPhones is one: according to Apple, “You don’t sign in with your Apple ID to use Siri, and the audio of your requests is processed entirely on your iPhone.” Google Assistant may also handle some requests locally: “When a user triggers a smart home Action that has a local fulfillment path, Assistant sends the EXECUTE intent or QUERY intent to the Google Home or Google Nest device rather than the cloud fulfillment.” 

In the cases where information does go to the cloud, processing entails a few things. First, it makes sure that the wake word was heard. If it’s determined that the wake word was indeed spoken (or something close enough to it—more on that in a minute), the speaker follows through on the request or command. Depending on your settings, that activity may get stored in your account history, whether as a voice recording, transcript, or both. If the wake word was not detected, processing ends at that point. 
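
As a rough illustration of that cloud-side flow, here is a short Python sketch. The function and setting names (verify_wake_word, fulfill, save_recordings, and so on) are assumptions made up for the example, not any vendor’s actual API.

```python
history = []  # per-account activity history (recordings and/or transcripts)

def verify_wake_word(audio):
    """Hypothetical second-pass check that the wake word was really spoken."""
    return audio.lower().startswith("alexa")

def transcribe(audio):
    """Hypothetical speech-to-text step (here the 'audio' is already text)."""
    return audio

def fulfill(audio):
    """Hypothetical command interpreter that carries out the request."""
    return "done: " + audio

def process_request(audio, settings):
    """Illustrative cloud-side handling of an uploaded voice clip."""
    if not verify_wake_word(audio):        # step 1: no wake word -> processing ends here
        return None
    result = fulfill(audio)                # step 2: follow through on the request
    if settings.get("save_recordings"):    # step 3: store activity per the user's settings
        history.append({"audio": audio, "result": result})
    elif settings.get("save_transcripts"):
        history.append({"transcript": transcribe(audio), "result": result})
    return result

print(process_request("alexa turn on the lights", {"save_transcripts": True}))
```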

Enter the issue of mistaken wake words. While the language models and processing technologies used by smart speakers are constantly evolving, there are occasions where a smart speaker acts as if a wake word was heard when it simply wasn’t said. Several studies on the topic have been published in recent years. Research from Northeastern University, for instance, found that dialogue from popular television shows could be interpreted as wake words that trigger recording. Their findings cite: 

“We then looked at other shows with a similarly high dialogue density (such as Gilmore Girls and The Office) and found that they also have a high number of activations, which suggests that the number of activations is at least in part related to the density of dialogue. However, we have also noticed that if we consider just the amount of dialogue (in a number of words), Narcos is the one that triggers the most activations, even if it has the lowest dialogue density.” 

Of interest is not just the volume of dialogue, but the pronunciation of the dialogue: 

“We investigated the actual dialogue that produced Narcos’ activations and we have seen that it was mostly Spanish dialogue and poorly pronounced English dialogue. This suggests that, in general, words that are not pronounced clearly may lead to more unwanted activations.” 

Research such as this suggests that smart speakers at the time had room for improvement when it comes to properly detecting wake words, which can lead to parts of a conversation being recorded without the owner intending it. If you own a smart speaker, I wouldn’t be too surprised to hear that you’ve had an issue like that from time to time yourself. 

Is someone on the other end of my smart speaker listening to my recordings? 

As mentioned above, the makers of smart speakers make constant improvements to their devices and services, which may include reviewing commands from users to make sure they are interpreted correctly. There are typically two types of review—machine and human. As the names suggest, a machine review is an automated analysis, while a human review entails someone listening to and evaluating a recorded command, or reading and evaluating a transcript of it. 

However, several manufacturers let you exercise some control over that. In fact, you’ll find that they post a fair share of articles about this collection and review process, along with your choices for opting in or out as you wish. 

Setting up your smart speaker for better privacy 

The quickest way to ensure a more private experience with your smart speaker is to disable listening—or turn the device off entirely. Depending on the device, you may be able to do this with the push of a button, a voice command, or some combination of the two. This will keep the device from listening for its wake word. Likewise, it makes your smart speaker unresponsive to voice commands until you turn listening back on. This approach works well if you decide there are certain stretches of the day where your smart speaker doesn’t need to be on call. 

Yet let’s face it, the whole idea of a smart speaker is to have it on and ready to take your requests. For those stretches where you leave it on, there’s another step you can take to shore up your privacy.  

In addition to making sure you’re opted out of the review process mentioned above, you can also delete your recordings associated with your voice commands. 

Managing your voice history like this gives you yet another way to take control of your privacy. In many ways, it’s like deleting your search history from your browser. And when you consider just how much activity and how many queries your smart speaker may see over the course of days, weeks, and months, you can imagine just how much information that captures about you and your family. Some of it is undoubtedly personal. Deleting that history can help protect your privacy in the event that information ever gets breached or somehow ends up in the hands of a bad actor.  
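
Each vendor has its own app, dashboard, or voice command for clearing that history, so there is no universal API to show here. Purely to illustrate the idea of a retention policy, below is a small Python sketch that keeps only the last 30 days of a locally exported history; the history_entries structure and the 30-day window are assumptions for the example.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)   # keep only the most recent month of activity

# Hypothetical export of voice history: each entry has a timestamp and a transcript.
now = datetime.now()
history_entries = [
    {"time": now - timedelta(days=90), "transcript": "play the morning news"},
    {"time": now - timedelta(days=5), "transcript": "set the thermostat to 68"},
]

def purge_old_entries(entries, reference=None):
    """Drop anything older than the retention window; return what is kept."""
    reference = reference or datetime.now()
    return [e for e in entries if reference - e["time"] <= RETENTION]

history_entries = purge_old_entries(history_entries)
print(len(history_entries), "entries retained")   # -> 1 entries retained
```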

Lastly, above and beyond these privacy tips for your smart speakers, comprehensive online protection will help you look out for your privacy overall. In the case of ours, we provide a full range of privacy and device protection, along with identity theft protection that includes $1M identity theft coverage, identity monitoring, and identity restoration assistance from recovery pros—and antivirus too, of course. Together, they can make your time spent online far more secure. 

You’re the smart one in this relationship 

With privacy becoming an increasingly hot topic (rightfully so!), several companies have been taking steps to make the process of managing yours easier and a more prevalent part of their digital experience. As you can see, there are several ways you can take charge of how your smart speaker uses, and doesn’t use, your voice. 

It used to be that many of these settings were tucked away deep in menus, rather than something companies would tout on web pages dedicated to privacy. So as far as smart speakers go, the information is out there, and I hope this article helps make the experience with yours more private and secure.  

The post Smarter Homes & Gardens: Smart Speaker Privacy appeared first on McAfee Blog.

Connected Car Standards – Thank Goodness!

By William "Bill" Malik (CISA VP Infrastructure Strategies)

Intelligent transportation systems (ITS) require harmonization among manufacturers to have any chance of succeeding in the real world. No large-scale car manufacturer, multimodal shipper, or MaaS (Mobility as a Service) provider will risk investing in a single-vendor solution. Successful ITS require interoperable components, especially for managing cybersecurity issues. See https://www.trendmicro.com/vinfo/us/security/news/intelligent-transportation-systems for a set of reports on ITS cybersecurity.

The good news is we now have a standard for automotive cybersecurity, ISO/SAE 21434. This standard addresses all the major elements of connected car security including V2X, reaching from the internals of ECUs and communications buses including CAN to the broader issues of fleet management and public safety. See https://www.iso.org/standard/70918.html for the current draft version of this standard.

Intelligent transport systems rely on complex, contemporary infrastructure elements, including cloud (for data aggregation, traffic analysis, and system-wide recommendations) and 5G (for inter-component networking and real-time sensing). ITS also rely on aging industrial control systems and components, for vehicle detection, weather reporting, and traffic signaling, some dating back forty years or more. This profound heterogeneity makes the cybersecurity problem unwieldy. Automotive systems generally are the most complex public-facing applications of industrial IoT. Any information security problems with them will erode public trust in this important and ultimately critical infrastructure.

Robert Bosch GmbH began working on the first automotive bus architecture in 1986. Automobiles gained increasing electronic functions (smog controls, seat belt monitors, electric window controls, climate controls, and so on). With each new device, manufacturers had to install additional point-to-point wiring to monitor and control it. This led to increasing complexity, the possibility for error, extended manufacturing time, more costly diagnosis and repair post-sales, and added weight. See Figure 1 for details. By replacing point-to-point wiring with a simple bus, manufacturers could introduce new features connected with one pair of wires for control. This simplified design, manufacturing, and diagnosis, and improved quality and maintainability.

Figure 1: CAN Networks Significantly Reduce Wiring (from National Instruments https://www.ni.com/en-us/innovations/white-papers/06/controller-area-network–can–overview.html)

The bus was simple: all devices saw all traffic and responded to messages relevant to them. Each message has a standard format, with a header describing the message content and priority (the arbitration ID), a body which contains the relevant data, and a cyclic redundancy check (CRC), a code used to verify that the message contents arrived intact. The CRC uses a mathematical formula to detect whether any bits have flipped in transit; like a checksum, it reliably catches small numbers of errors. It is not as powerful as a digital signature and has no cryptographic power. Every device on the bus can use the CRC algorithm to create a code for messages it sends and to verify the data integrity of messages it receives. Other than this, there is no data confidentiality, authentication, authorization, data integrity, or non-repudiation in CAN bus messages – or any other automotive bus messages. The devices used in cars are generally quite simple, lightweight, and inexpensive: 8-bit processors with little memory on board. Any device connected to the network is trusted. Figure 2 shows the layout of a CAN bus message.

Figure 2: The Standard CAN Frame Format, from National Instruments
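
To ground the frame description above, here is a small Python sketch of a simplified standard CAN data frame and the 15-bit CRC it carries (CRC-15/CAN, polynomial 0x4599). It is a teaching aid, not a bus implementation: a real controller computes the CRC over the destuffed bit stream from the start-of-frame bit through the data field, whereas this sketch simply runs the same polynomial over a few representative bytes.

```python
from dataclasses import dataclass

def crc15_can(data: bytes) -> int:
    """Bitwise CRC-15/CAN: polynomial 0x4599, initial value 0, no reflection."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            top = (crc >> 14) & 1
            crc = (crc << 1) & 0x7FFF
            if top ^ bit:
                crc ^= 0x4599
    return crc

@dataclass
class CanFrame:
    """Simplified standard (11-bit identifier) CAN data frame."""
    arbitration_id: int   # header: identifies the content and sets the priority
    data: bytes           # 0-8 bytes of payload

    def crc(self) -> int:
        # Illustrative: CRC over ID + length + data bytes rather than the raw bit stream.
        header = bytes([self.arbitration_id >> 8, self.arbitration_id & 0xFF, len(self.data)])
        return crc15_can(header + self.data)

frame = CanFrame(arbitration_id=0x244, data=bytes([0x12, 0x34]))
print(f"CRC-15 = {frame.crc():#06x}")
# Note that there is no key anywhere: the CRC protects against transmission errors,
# not against a malicious or spoofed sender.
```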

Today’s automobiles have more sophisticated devices on board, and the types of messages and the services they offer are becoming more complex. In-vehicle infotainment (IVI) systems providing maps, music, and Bluetooth connectivity for smartphones and other devices, along with increasingly elaborate driving assistance and monitoring systems, all add more traffic to the bus. But the diversity of manufacturers and suppliers impedes adding security measures to the automotive network. No single vendor could today achieve what Robert Bosch did nearly forty years ago. Yet the need for stronger vehicle security is growing.
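
The practical consequence of “any device connected to the network is trusted” is that any node can transmit any arbitration ID. The sketch below uses the python-can library against a Linux virtual CAN interface (vcan0) to send a frame with an arbitrary ID; the vcan0 channel and the meaning of ID 0x244 are assumptions for the example, and nothing on the bus can verify who actually sent the frame.

```python
import can  # pip install python-can; assumes a Linux virtual CAN interface (vcan0) is configured

# Any node on the bus can claim any arbitration ID: there is no sender authentication.
bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

msg = can.Message(
    arbitration_id=0x244,            # hypothetical ID; its meaning is vehicle-specific
    data=[0x01, 0x00, 0x00, 0x00],   # up to 8 payload bytes
    is_extended_id=False,            # standard 11-bit identifier
)
bus.send(msg)   # every other node sees this frame and has no way to check its origin
bus.shutdown()
```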

The ISO/SAE 21434 standard describes a model for securing the automotive technology supply chain, validating the integrity of the development process, detecting vulnerabilities and cybersecurity attacks in automotive systems, and managing the deployment of fixes as needed. It is comprehensive. ISO/SAE 21434 builds on decades of work in information security. By applying that body of knowledge to the automotive case, the standard will move the industry toward a safer and more trustworthy connected car world.

But the standard’s value doesn’t stop with cars and intelligent transport systems. Domains far beyond connected cars will benefit from having a model for securing communications among elements from diverse manufacturers sharing a common bus. The CAN bus and related technologies are used onboard ships, in aircraft, in railroad management, in maritime port systems, and even in controlling prosthetic limbs. The vulnerabilities are common, the complexity of the supply chain is equivalent, and the need for a comprehensive architectural solution is as great. So this standard is a superb achievement and will go far to improve the quality, reliability, and trustworthiness of critical systems globally.

What do you think? Let me know in the comments below or @WilliamMalikTM.

The post Connected Car Standards – Thank Goodness! appeared first on .
