Passwords are terrible for security, and AI can help

The rise of mobile devices in the workplace and companies’ accelerated move to the cloud have opened up a myriad of ways for us to access our virtual workspaces. But as entryways into businesses proliferate, so do opportunities for digital break-ins. This is why security in the post-2020 world must be redesigned to no longer rely on implicit trust and traditional forms of authentication such as passwords.

Moving forward, no sign-in should be taken at face value. Instead, we must turn to artificial intelligence to scrutinize digital identities and behaviors in order to verify them.

Today, businesses are being called on to secure shapeless, collaborative environments in which hybrid and multicloud infrastructure proves more resilient and reliable than its brick-and-mortar counterparts. That requires technologies such as AI that can be just as fluid as the times and circumstances we’re living in.

Further complicating the challenge of keeping an organization’s resources and data secure, the “who is who” in a business is changing. The line between who is and who isn’t part of a company’s team is fading amid the proliferation of remote workforces, the gig economy, and the continuous integration of partners into a business’s environment. But in a trustless world where everything is questioned and user personas aren’t easily distinguishable, it’s not enough to be who you say you are; you have to act like it too.

Passwords have long been relied upon as a way to verify one’s identity, but the truth is that digital identities are easy to fake. Behaviors, on the other hand? Not so much.

YOU CAN’T TRICK AI WHEN IT COMES TO BAD BEHAVIOR.

I’ve seen time and time again what happens when security trusts identities and passwords. An employee accesses another department’s records through the company’s collaboration tools and downloads hundreds of files of sensitive HR or financial information. But it turns out that the “employee” was actually a malicious actor who gained access through a legitimate user’s email and password.

Now, let’s apply the same scenario to a business whose security model used AI to conduct behavioral analysis. The outcome would be vastly different. The AI would detect an anomaly in the impersonated employee’s pattern, flag it, and block further access. It would do this intuitively, based on inconsistencies across a mix of attributes, from keystrokes and mouse movements to broader work habits: that user’s typical work hours, the folders that person normally accesses, and the speed and volume with which files get downloaded.
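To make the idea concrete, here is a minimal sketch of that kind of behavioral anomaly detection, assuming per-session features such as login hour and download volume and using scikit-learn’s IsolationForest. The feature set, the sample data, and the response are illustrative assumptions, not a description of any particular product.

```python
# Illustrative sketch only: scoring a new session against a user's history
# with an off-the-shelf anomaly detector. Features and numbers are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one past session for this user:
# [login_hour, files_downloaded, download_mb_per_min, distinct_folders_accessed]
history = np.array([
    [9, 12, 0.8, 3],
    [10, 8, 0.5, 2],
    [14, 20, 1.1, 4],
    [9, 15, 0.9, 3],
    [11, 10, 0.7, 2],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A new session: 3 a.m. login, hundreds of files pulled at high speed.
suspect_session = np.array([[3, 450, 25.0, 38]])

if model.predict(suspect_session)[0] == -1:   # -1 means "anomalous"
    print("Anomalous behavior: flag the session and block further access")
else:
    print("Session is consistent with this user's history")
```

In practice, features like these would come from endpoint and collaboration-tool telemetry, and the verdict would feed an access-policy engine rather than a print statement.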

You can’t trick AI when it comes to bad behavior. If cybercriminals wanted to try to break through the AI bubble wrap around a user, observing that user’s digital habits wouldn’t be enough. They would need to observe the user physically as well, to capture their most nuanced idiosyncrasies.

DON’T TRUST—LET AI VERIFY
With business leaders maniacally focused on adapting to change, the traffic and activity generated by work are changing too, and security has to keep pace.

To secure it all, we must be able to make sense of it all. Remote work has led to employees accessing the network from unvetted locations. Times of day previously considered irregular work hours are seeing spikes in activity. Higher volumes of data are flowing through the network, and new devices are connecting to the organization by the hundreds. Without the speed and intuition of AI, we simply won’t be able to contextualize these dynamic behaviors and movements across hybrid cloud environments fast enough.
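As a rough illustration of what “contextualizing” a sign-in could look like, the toy scoring function below combines a few of those signals (location, hours, device, data volume) into a single risk score. The fields, weights, and threshold are assumptions made purely for the example.

```python
# Hypothetical sketch of scoring a sign-in from several context signals.
# Weights and threshold are illustrative, not any real product's model.
from dataclasses import dataclass

@dataclass
class SignInContext:
    known_location: bool      # has this user signed in from this location before?
    within_usual_hours: bool  # does the time fit the user's normal pattern?
    known_device: bool        # is the device already enrolled?
    data_volume_mb: float     # data moved so far in this session

def risk_score(ctx: SignInContext, typical_volume_mb: float = 200.0) -> float:
    """Combine context signals into a 0-1 risk score (toy weighting)."""
    score = 0.0
    score += 0.3 if not ctx.known_location else 0.0
    score += 0.2 if not ctx.within_usual_hours else 0.0
    score += 0.3 if not ctx.known_device else 0.0
    score += 0.2 if ctx.data_volume_mb > 3 * typical_volume_mb else 0.0
    return score

ctx = SignInContext(known_location=False, within_usual_hours=False,
                    known_device=True, data_volume_mb=900.0)

if risk_score(ctx) >= 0.5:
    print("High-risk sign-in: require step-up verification before granting access")
else:
    print("Low-risk sign-in: allow with continued monitoring")
```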

AI learns in real time and continually evolves based on the data it’s ingesting. It isn’t a static technology, so it can morph in parallel with a business in the midst of change. We don’t need to parse through the millions of potential threats occurring every single day, because AI is constantly analyzing them, verifying their legitimacy, or lack thereof, and automating a security response.
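Here is a minimal sketch of that continual-learning loop, assuming a single behavioral metric such as downloads per hour: the baseline updates with every new observation, and a large deviation triggers an automated response. The update rule and the threshold are illustrative assumptions.

```python
# Toy sketch of "learning in real time": an exponentially weighted baseline
# for one behavioral metric, updated on every event. Values are hypothetical.
class StreamingBaseline:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # how quickly the baseline adapts to change
        self.mean = None     # running estimate of normal activity
        self.var = 0.0       # running estimate of its variance

    def observe(self, value: float) -> bool:
        """Update the baseline with a new observation; return True if anomalous."""
        if self.mean is None:
            self.mean = value
            return False
        deviation = value - self.mean
        anomalous = self.var > 0 and abs(deviation) > 4 * self.var ** 0.5
        # Exponentially weighted updates keep the model evolving with the data.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return anomalous

baseline = StreamingBaseline()
for downloads_per_hour in [12, 9, 15, 11, 14, 480]:   # sudden spike at the end
    if baseline.observe(downloads_per_hour):
        print("Anomaly detected: automate a response (revoke session, alert the SOC)")
```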

Ultimately, AI shifts security into proactive gear, creating guardrails to guide users and strengthen a company’s security posture without requiring definitive knowledge of an organization’s digital “floor plan.”

The era in which trusted devices and 14-character passwords were sufficient has come and gone. Now, we must put in place technologies that remove friction and adapt fast, because just as work models will keep changing, so will the methods and tactics cybercriminals use to break into them. We must allow AI to take charge and thwart cybercriminals’ attempts at innovation in this new environment, providing employees the bubble wrap and guardrails needed to operate securely and avoid risk, no matter what kind of work environment they find themselves in.