With the Biden Administration announcing new guidelines for AI safety – including requiring innovators to share critical information with the federal government – it is clear that cybersecurity stakeholders must also defend against the serious threat AI poses to online security, privacy, and data protection.
Fortunately, Photolok IdP is available today and has been tested and found to protect against AI attacks. Photolok, a passwordless IdP, uses photos in place of passwords, with OAuth for authentication and OpenID Connect for integration. To see how Photolok protects against AI attacks, it helps to first understand how AI/ML tools and techniques have made it easier for hackers to get around current password security methods.
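For readers integrating an application with an OpenID Connect identity provider such as Photolok, the sketch below shows the standard authorization-code flow from the relying party's side. The issuer URL, client ID, and secret are hypothetical placeholders, not Photolok's actual endpoints; the real values come from the IdP's discovery document.

```python
# Minimal OpenID Connect authorization-code flow sketch for a relying party.
# All endpoint URLs and credentials below are hypothetical placeholders --
# consult the IdP's /.well-known/openid-configuration for the real values.
import secrets
import urllib.parse

import requests

ISSUER = "https://idp.example.com"            # hypothetical issuer
CLIENT_ID = "my-relying-party"                # hypothetical client id
CLIENT_SECRET = "change-me"                   # hypothetical secret
REDIRECT_URI = "https://app.example.com/callback"

def build_authorization_url() -> tuple[str, str]:
    """Return the URL to send the user to, plus the state to verify later."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",
        "state": state,
    }
    return f"{ISSUER}/authorize?{urllib.parse.urlencode(params)}", state

def exchange_code_for_tokens(code: str) -> dict:
    """Swap the authorization code returned to the callback for tokens."""
    resp = requests.post(
        f"{ISSUER}/token",
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # contains id_token, access_token, etc.
```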
AI/ML tools enable hackers to scrape the internet for personal data and uncover passwords. Combined with social engineering, AI techniques can decipher passwords far more quickly than earlier systems. The reality is that AI password crackers can breach most passwords in seconds and more difficult ones in minutes. For example, AI-driven brute-force attacks let hackers attempt millions of possible passwords each minute and exploit weaknesses in password complexity. Longer passwords and passphrases make the job harder, but as the computational capabilities of AI and ML continue to evolve, those defenses will see a significant reduction in efficacy.
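To make the scale concrete, a back-of-the-envelope estimate of worst-case exhaustive search is sketched below. The guess rate is an illustrative assumption, not a measured figure, and real AI-assisted attacks prune the search space with learned patterns, so they do better than this raw arithmetic suggests.

```python
# Back-of-the-envelope time-to-crack estimate for an exhaustive search.
# GUESSES_PER_SECOND is an illustrative assumption; AI-assisted attacks
# additionally prioritize likely passwords, shrinking the effective space.
import string

GUESSES_PER_SECOND = 1e10  # assumed rate for a well-resourced attacker

def worst_case_seconds(length: int, alphabet_size: int) -> float:
    """Seconds to try every possible password of the given length."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

alphabet = len(string.ascii_letters + string.digits + string.punctuation)  # 94
for length in (8, 10, 12, 16):
    secs = worst_case_seconds(length, alphabet)
    print(f"{length:2d} chars: {secs / 86400 / 365:.2e} years worst case")
```

The numbers show why length still matters, but also why the article's point holds: every jump in attacker throughput erodes whatever margin a longer password bought.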
AI technologies are also negating the cybersecurity value of two-factor authentication. For example, CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), a common safeguard, are becoming obsolete. AI bots have become so adept at mimicking human vision and reasoning that CAPTCHAs are no longer a barrier.
Making CAPTCHAs more complex is not the answer. Cengiz Acartürk, a cognition and computer scientist at Jagiellonian University in Kraków, Poland, notes that designing better CAPTCHAs runs into a built-in ceiling. "If it's too difficult, people give up," Acartürk says. Whether CAPTCHA puzzles are worth adding to a website may ultimately come down to whether the step they protect is important enough that a tough puzzle will not drive visitors away while still providing an appropriate level of security (see "AI bots are better than humans at solving CAPTCHA puzzles," qz.com).
Another way AI undermines passwords is through keylogging. AI can enable keyloggers to track your keystrokes and retrieve your passwords. According to a University of Surrey study, artificial intelligence can be trained to recognize which key is being pressed more than 90% of the time simply by listening to it. Using an Apple MacBook Pro, the researchers recorded each key on the laptop being pressed 25 times with varying fingers and pressure. The sounds were captured both over a smartphone during a call and during a Zoom meeting. A machine learning system was then trained to recognize the sound of each key using part of the recorded data. When evaluated on the remaining data, the algorithm accurately identified which keys were being pressed 95% of the time for the phone recording and 93% of the time for the Zoom recording (see "What secrets can AI pick up on by eavesdropping on your typing?", govtech.com).
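The sketch below illustrates the general shape of such an acoustic attack: per-keystroke audio clips are turned into spectral features and a classifier learns which key produced each sound. The synthetic data and the model choice (an SVM on FFT-magnitude features) are illustrative assumptions, not the study's actual code, which used recorded clips and a deep network.

```python
# Sketch of an acoustic keystroke-classification pipeline: spectral features
# from short per-key audio clips feed a classifier that labels the key.
# The random "clips" below stand in for real recordings; accuracy on noise
# stays near chance, whereas the published attack exceeded 90% on real audio.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SAMPLE_RATE = 16_000
CLIP_LEN = 1_600           # 100 ms per keystroke clip
KEYS = list("abcdefghij")  # stand-in for the full keyboard

def spectral_features(clip: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of the clip -- a crude stand-in for mel features."""
    return np.abs(np.fft.rfft(clip))

# Placeholder dataset: 25 "recordings" per key, as in the study's protocol.
rng = np.random.default_rng(0)
clips = rng.normal(size=(len(KEYS) * 25, CLIP_LEN))
labels = np.repeat(KEYS, 25)

X = np.array([spectral_features(c) for c in clips])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0
)

model = make_pipeline(StandardScaler(), SVC())
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```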
To combat these attack vectors, Photolok randomizes photo placement to mitigate AI/ML attacks: because AI/ML tools cannot identify or learn any patterns, AI/ML-driven breaches are prevented. Photolok uses steganographic photos (random codes hidden in each photo) to conceal the attack points from hackers, while randomly placing the user's photos within each photo panel to defeat keylogging and other attack methods. Photolok also blocks horizontal (lateral) penetration and defends against external threats such as ransomware, phishing, shoulder surfing, and man-in-the-middle attacks.
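The snippet below is a hypothetical illustration of why randomized placement frustrates pattern learning; it is not Photolok's actual implementation. Because the panel order changes on every login, the positions or keystrokes an attacker captures carry no stable pattern for an AI/ML model to learn.

```python
# Hypothetical illustration of randomized photo placement: the panel order
# changes on every login, so captured positions or key presses carry no
# stable pattern. This is not Photolok's actual implementation.
import secrets

def build_panel(user_photo_id: str, decoy_photo_ids: list[str]) -> list[str]:
    """Return a freshly shuffled panel of the user's photo plus decoys."""
    panel = [user_photo_id, *decoy_photo_ids]
    # secrets.SystemRandom provides a cryptographically strong shuffle.
    secrets.SystemRandom().shuffle(panel)
    return panel

# The position of "user-photo" differs from login to login.
decoys = [f"decoy-{i}" for i in range(8)]
for attempt in range(3):
    panel = build_panel("user-photo", decoys)
    print(f"login {attempt}: user photo at index {panel.index('user-photo')}")
```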