Dusted Codes

Programming adventures

Effective defense against distributed brute force attacks

Protecting against brute force attacks can be a very tricky task.

Recently I was curious if there are any best practices to protect a website from distributed brute force attacks and I found a lot of interesting solutions:

Lock an account after X failed login attempts

The first method I found was very trivial. If a user reaches a certain limit of failed login attempts the website locks down the account and refuses any further access.

A genuine user can unlock his or her account by requesting a recovery link via email or by changing their password via the password reset function.
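The pattern above can be sketched as a simple per-account failure counter. This is a minimal illustration only; the class name, threshold and in-memory storage are assumptions (a real system would persist counters and lock state in a database).

```python
# Hypothetical sketch of the lockout pattern; names and the
# in-memory storage are illustrative assumptions, not a real API.
MAX_ATTEMPTS = 5

class FailedLoginTracker:
    """Counts failed logins per account and locks after a threshold."""

    def __init__(self, max_attempts=MAX_ATTEMPTS):
        self.max_attempts = max_attempts
        self.failures = {}   # username -> failed attempt count
        self.locked = set()  # usernames currently locked out

    def record_failure(self, username):
        self.failures[username] = self.failures.get(username, 0) + 1
        if self.failures[username] >= self.max_attempts:
            self.locked.add(username)

    def record_success(self, username):
        # A successful login resets the counter for unlocked accounts.
        if username not in self.locked:
            self.failures.pop(username, None)

    def is_locked(self, username):
        return username in self.locked

    def unlock(self, username):
        # Called from the email recovery / password reset flow.
        self.locked.discard(username)
        self.failures.pop(username, None)
```

The `unlock` method is where the recovery link or password reset flow would hook in.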

Problems with this pattern

  • Introduces a targeted DOS attack surface. An attacker could easily lock out an account by purposefully providing the wrong password several times, either to block an account from using the service entirely or to force the user into a recovery path, where the attacker might have found further vulnerabilities.
  • Doesn't protect against more sophisticated attacks (typically an attacker would pick the most common password and try it on all accounts, then pick the second most common password, etc.)
  • Introduces a potential enumeration attack. An attacker can purposely provide a wrong password and determine whether a certain email address/username exists by checking if the account gets locked or not.

Blocking IP Addresses with too many failed login attempts

This one is fairly simple and well known. If a certain IP address had too many failed login attempts, then further access from this IP address is denied.
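A sliding-window counter per IP address is one way to sketch this. The window size and failure limit below are illustrative assumptions, as is the in-memory storage.

```python
import time
from collections import defaultdict, deque

# Illustrative sketch only; the thresholds are assumptions.
WINDOW_SECONDS = 300   # look at failures within the last 5 minutes
MAX_FAILURES = 20      # block an IP after this many failures in the window

class IpThrottle:
    def __init__(self, window=WINDOW_SECONDS, limit=MAX_FAILURES):
        self.window = window
        self.limit = limit
        self.failures = defaultdict(deque)  # ip -> timestamps of failures

    def record_failure(self, ip, now=None):
        now = time.time() if now is None else now
        self.failures[ip].append(now)

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        q = self.failures[ip]
        # Drop failures that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.limit
```

The sliding window means blocks expire on their own once the failures age out, rather than requiring a manual unblock.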

Problems with this pattern

  • It doesn't help against a distributed brute force attack.
  • Opens the door for another DOS attack.
  • There is a good chance that users behind a shared network will lock themselves out if enough users type in a wrong password within a short period of time.

Whitelisting/Blacklisting IP Addresses

The idea is that a user can limit access to his/her account based on IP address rules. It can be as simple as allowing access to one single IP address, multiple addresses or more complex rules around IP address ranges or subnets.
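Checking a client IP against such rules is straightforward with Python's standard `ipaddress` module. The rule format below (a list of single addresses or CIDR subnets) is an assumption for illustration.

```python
import ipaddress

# Sketch of a per-account IP rule check; the rule format is an assumption.
def is_allowed(client_ip, allowed_rules):
    """Return True if client_ip matches any allowed address or subnet."""
    ip = ipaddress.ip_address(client_ip)
    for rule in allowed_rules:
        # A single address like "10.0.0.1" is treated as a /32 network.
        if ip in ipaddress.ip_network(rule, strict=False):
            return True
    return False
```

For example, `is_allowed("192.168.1.42", ["192.168.1.0/24"])` would permit any address in that subnet.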

Problems with this pattern

  • Impractical for most websites or web services.
  • This pattern requires a user to put effort into security configuration instead of being secure by default.
  • Can become a maintenance nightmare.

Artificially increase the login time after each failed attempt

I found this one very creative. Each failed login attempt causes the next failed login request to take longer by a factor of X. A successful login will proceed at normal speed at any point in time. This allows a website to throttle a distributed brute force attack while providing a good experience for a genuine user.
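A minimal sketch of the idea, assuming an exponential growth factor (the base delay and factor are illustrative assumptions):

```python
import time

# Sketch of the throttling idea; the growth factor is an assumption.
BASE_DELAY = 0.5   # seconds added after the first failure
FACTOR = 2         # each further failure doubles the delay

def delay_for(failed_attempts):
    """Delay applied to a failed login, growing by FACTOR each time."""
    if failed_attempts == 0:
        return 0.0
    return BASE_DELAY * (FACTOR ** (failed_attempts - 1))

def handle_login(check_credentials, failed_attempts):
    if check_credentials():
        return True  # successful logins are never delayed
    time.sleep(delay_for(failed_attempts + 1))
    return False
```

Note that the delay is applied only on the failure path, which is exactly what gives the genuine user a fast experience but also what ties up server threads during an attack.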

Problems with this pattern

  • If a genuine user makes a mistake shortly after an attack, they might end up with a long response time.
  • The website ends up unnecessarily tying up threads on delayed requests. This can open the door to yet another DOS attack!

Implement a challenge like a CAPTCHA

This approach tries to stop automated bots from brute forcing an account by implementing a challenge which supposedly can only be accomplished by a human. CAPTCHAs are a very popular solution, but there are many other creative approaches to filtering humans from machines which work on the same assumption.

Problems with this pattern

  • Bad user experience for the genuine user.
  • Machine learning and social engineering make it a tough challenge to come up with a good filter.

Additional verification step

Digital signatures, two factor authentication and many other patterns require an additional step of verification. They are highly effective against brute force attacks, but have their own downsides and might be impractical for many web services.

Combination of patterns?

You will quickly find that one pattern on its own might not do the trick. I tried to think of a good combination of patterns and the potential pros and cons attached to them, and my best idea was the following:

Monitor the average fail rate and use CAPTCHAs

The website determines a natural rate of login failure over a certain period of time. Once this metric has been established it starts monitoring and counting failed login attempts going forward. When the number of failed login attempts significantly deviates from the natural rate then a CAPTCHA will be displayed on all subsequent login requests.

If the rate recovers then the CAPTCHA will be hidden from the login screen again. A very transparent website could even show a notification to the user explaining why the CAPTCHA is being displayed and remind the user to set a strong password if not done yet.
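The core of this pattern can be sketched as a monitor that compares the current failure rate against the established baseline. The baseline value and tolerance multiplier below are illustrative assumptions; a real implementation would measure the rate over a rolling time window rather than over all attempts.

```python
# Sketch of the combined pattern; baseline and tolerance are assumptions.
class FailRateMonitor:
    def __init__(self, natural_fail_rate, tolerance=2.0):
        # natural_fail_rate: observed baseline, e.g. 0.05 = 5% of logins fail
        # tolerance: multiple of the baseline that triggers the CAPTCHA
        self.baseline = natural_fail_rate
        self.tolerance = tolerance
        self.attempts = 0
        self.failures = 0

    def record(self, success):
        self.attempts += 1
        if not success:
            self.failures += 1

    def captcha_required(self):
        # Show the CAPTCHA when the failure rate significantly
        # deviates from the natural rate; hide it once it recovers.
        if self.attempts == 0:
            return False
        current_rate = self.failures / self.attempts
        return current_rate > self.baseline * self.tolerance
```

The login page would simply call `captcha_required()` on each request to decide whether to render the challenge.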

Pros

  • Effective against any type of brute force attack?
  • Good user experience.

Cons

  • Might be difficult to establish the initial variables.

Strict password policy

Another very viable approach is to simply not fight brute force attacks at all. Make sure your users have strong passwords and make brute force attempts rather harmless.

A good password policy is probably a good idea in any case. As always, security comes in layers.
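As a rough illustration, a basic policy check might look like the sketch below. The specific rules (minimum length, character classes) are assumptions for the example, not a recommendation for any particular policy.

```python
import re

# Minimal sketch of a password policy check; the rules below
# are illustrative assumptions, not a recommended policy.
def is_strong_password(password):
    checks = [
        len(password) >= 12,                  # minimum length
        re.search(r"[a-z]", password),        # lowercase letter
        re.search(r"[A-Z]", password),        # uppercase letter
        re.search(r"[0-9]", password),        # digit
        re.search(r"[^a-zA-Z0-9]", password)  # symbol
    ]
    return all(checks)
```

In practice checking length plus a breached-password blacklist tends to be more effective than composition rules alone, but either way the goal is the same: make each guess in a brute force attack less likely to succeed.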

If you know any other effective defense systems against distributed brute force attacks I'd be interested in hearing them!