A recent study by British researchers has revealed that artificial intelligence models can accurately decipher what users type on their computers, including sensitive information such as passwords, simply by listening to the sounds of keystrokes and analyzing them.
A Significant Threat to User Security:
The study, presented at the IEEE European Symposium on Security and Privacy, warns that this technology poses a significant threat to user security: it can extract data through the built-in microphones of the electronic devices people use throughout the day.
How Does This Technology Work?
Researchers developed an AI model capable of recognizing the sounds of typing on an Apple MacBook Pro keyboard. After being trained on keypresses recorded by a nearby smartphone, the model identified the pressed key from its sound alone with up to 95% accuracy. Even when the sound classification algorithm was trained on recordings captured during Zoom conversations, accuracy dropped only slightly, to 93%, which remains a high and concerning figure.
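To make the approach concrete, the sketch below shows how such a keystroke classifier could be set up in principle: each keypress sound is converted into a spectrogram "image" and a small neural network predicts which of the 36 keys produced it. This is an illustrative example only, not the researchers' actual code; the network architecture, spectrogram size, and PyTorch framework are all assumptions.

```python
# Minimal sketch (not the study's exact model) of an acoustic keystroke classifier:
# spectrograms of individual keypresses go in, a predicted key index comes out.
import torch
import torch.nn as nn

NUM_KEYS = 36          # number of keys recorded in the study
SPEC_SHAPE = (64, 64)  # assumed mel-spectrogram size; a modelling choice

class KeystrokeCNN(nn.Module):
    def __init__(self, num_keys: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(
            32 * (SPEC_SHAPE[0] // 4) * (SPEC_SHAPE[1] // 4), num_keys
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of keypress spectrograms shaped (batch, 1, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

def train(model: nn.Module, loader, epochs: int = 10) -> None:
    # loader is a hypothetical DataLoader yielding (spectrogram, key_index) pairs.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for spectrograms, key_labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(spectrograms), key_labels)
            loss.backward()
            optimizer.step()
```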
Data Collection and Processing:
Training data was collected by pressing 36 keys on a MacBook Pro keyboard 25 times each, using different fingers and varying levels of pressure. The sound of each keypress was recorded either by a nearby smartphone or through Zoom calls conducted on the laptop itself. The researchers then generated spectrograms from the recordings, which revealed distinct differences between keys, and applied signal-processing techniques to enhance the features used to identify each key's sound.
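The snippet below illustrates this kind of preprocessing: it isolates individual keypress sounds from a recording and converts each one into a log-scaled mel-spectrogram. The library choice (librosa), the clip length, and the amplitude threshold are assumptions for illustration, not the study's exact pipeline.

```python
# Rough preprocessing sketch: find keypress events in a recording and
# turn each one into a mel-spectrogram suitable for a classifier.
import librosa
import numpy as np

def extract_keystroke_spectrograms(path: str, sr: int = 44_100,
                                   clip_ms: int = 300, threshold: float = 0.1):
    audio, sr = librosa.load(path, sr=sr)
    clip_len = int(sr * clip_ms / 1000)

    # Crude onset detection: flag samples whose amplitude exceeds a fraction
    # of the recording's peak, then cut a fixed-length clip per keypress.
    peaks = np.where(np.abs(audio) > threshold * np.abs(audio).max())[0]
    spectrograms, last = [], -clip_len
    for idx in peaks:
        if idx - last < clip_len:   # skip samples belonging to the same keypress
            continue
        last = idx
        clip = audio[idx: idx + clip_len]
        if len(clip) < clip_len:
            clip = np.pad(clip, (0, clip_len - len(clip)))
        mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
        # Log scaling makes the per-key differences easier for a model to pick up.
        spectrograms.append(librosa.power_to_db(mel))
    return spectrograms
```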
Risks of This Technology:
As video-conferencing tools like Zoom become more prevalent, devices with built-in microphones proliferate, and AI technology advances rapidly, attacks of this kind could easily harvest large amounts of user data, including passwords, discussions, and messages.
Unlike other side-channel attacks, which come with specific requirements and data-rate constraints, these sound-based attacks are far simpler to mount because microphones capable of recording high-quality audio are built into so many devices, a concern that only grows with the rapid progress of machine learning.
Mitigation Strategies:
The researchers advise concerned users to alter their password typing patterns, for example by mixing uppercase and lowercase letters, numbers, and symbols with the shift key. They also suggest using biometric authentication or password management apps so that sensitive information never has to be typed manually.
Other potential defense measures include software that plays back artificial keypress sounds or white noise near the keyboard to interfere with the sound of the real keystrokes.
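As a simple illustration of the white-noise idea, the sketch below continuously plays low-level noise so that individual keypress sounds are harder to isolate from a recording. The sounddevice library and the volume level are assumptions chosen for the example, not a tool recommended by the researchers.

```python
# Illustrative masking sketch: play low-level white noise while typing
# so keypress sounds are harder to separate from the background.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44_100

def play_masking_noise(duration_s: float = 60.0, volume: float = 0.05) -> None:
    # Uniform white noise scaled to a low volume; blocking=True plays it to completion.
    samples = int(SAMPLE_RATE * duration_s)
    noise = (volume * np.random.uniform(-1.0, 1.0, samples)).astype(np.float32)
    sd.play(noise, SAMPLE_RATE, blocking=True)

if __name__ == "__main__":
    play_masking_noise()
```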
In addition to the mechanisms suggested by the researchers, a Zoom spokesperson commented on the study to BleepingComputer. They advised users to manually adjust the background noise suppression feature in the Zoom application to reduce its intensity, to mute the microphone by default when joining a meeting, and to mute the microphone while typing during a meeting, all of which help keep users' information secure against such attacks.