API Security: More than just a throttling policy

API Management promises a nirvana of exposing data using well-known and simple techniques. Vendors focus on how easy it is to create the APIs and nearly always mention security as part of the API Lifecycle.

Yet we've all seen the headlines screaming about the latest security breach, so what does security really mean when it comes to API Management?

In this post I try to differentiate the basic policies that all vendors discuss from the many other attack vectors that we need to be aware of.

API Manager security terminology


Most API Management vendors discuss security in terms of OAuth, key management and TLS; these are all relevant parts of the overall security problem. Most vendors will also have easy-to-use policies that throttle requests based on client usage. These throttling policies take the form of stopping an individual client from accessing a particular API more than a set number of times over a period of time, e.g. allowing them 20 calls per minute. There are a lot of API vendors out there who don't have any other security policies over and above that, so why do I feel uncomfortable saying that this is what it takes to have a good API gateway?
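
To make this concrete, here is a minimal sketch of the kind of fixed-window throttle most gateways implement. The 20-calls-per-minute figure and the in-memory counter are purely illustrative and not any particular vendor's policy engine:

```python
import time
from collections import defaultdict

# Minimal fixed-window throttle: at most `limit` calls per client per window.
# Illustrative sketch only; real gateways use shared, distributed counters.
class Throttle:
    def __init__(self, limit=20, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counters = defaultdict(lambda: [0, 0.0])  # client_key -> [count, window_start]

    def allow(self, client_key):
        count, start = self.counters[client_key]
        now = time.time()
        if now - start >= self.window:        # window expired: start a fresh one
            self.counters[client_key] = [1, now]
            return True
        if count < self.limit:                # still within the quota for this window
            self.counters[client_key][0] = count + 1
            return True
        return False                          # over the limit: reject (e.g. HTTP 429)
```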

Throttling is not enough

If we hark back to "the good ol' days" of SOA and XML attacks, there were attacks like SQL injection (SQL statements embedded in fields that were never meant to contain SQL but that get executed nevertheless) and XML "bombs" (payloads of huge or deeply nested arrays of XML tags designed to exhaust the parser).
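
To illustrate the injection problem, here is a small, self-contained example using Python's built-in sqlite3 module; the table, data and payload are invented purely to show why parameterised queries matter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # a classic injection payload arriving via an API field

# Vulnerable: the input is concatenated into the SQL, so the OR clause executes
# and every row comes back.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # [('alice', 'admin'), ('bob', 'user')]

# Safe: a parameterised query treats the payload as a literal value.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []
```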

API Management is little different in this regard - there are still going to be attacks that exploit simplistic APIs that have only been considered from an implementation viewpoint and not from a security viewpoint.

Consider the case where an API allows a range of values to be retrieved; let's say min=n and max=o. A request with an extraordinarily high number for "max" could overload the back-end system. Another simple attack is a payload containing a String - unless policies are put in place to keep that string within certain limits, the back-end could, again, get overwhelmed.
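
A sketch of the kind of input validation a gateway policy or the API itself should apply; the parameter names and limits below are hypothetical examples, not a prescription:

```python
MAX_PAGE_SIZE = 200       # arbitrary example ceiling on how many items one call may request
MAX_STRING_LENGTH = 1024  # arbitrary example ceiling on free-text field size

def validate_range_query(params):
    """Reject range requests that could overload the back-end."""
    lo = int(params.get("min", 0))
    hi = int(params.get("max", 0))
    if hi < lo:
        raise ValueError("max must not be less than min")
    if hi - lo > MAX_PAGE_SIZE:
        raise ValueError(f"at most {MAX_PAGE_SIZE} items may be requested at once")

def validate_string_field(value):
    """Keep free-text fields within sane bounds before they reach the back-end."""
    if len(value) > MAX_STRING_LENGTH:
        raise ValueError("field too long")
```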

Distributed Attacks

Distributed Denial of Service (DDoS) attacks are an excellent example of where a simple throttling policy simply won't work. In these attacks, multiple IP addresses, and potentially multiple different clients, attempt to flood an API. Throttling policies often work on either a single IP address or a single client key. It's a far more complicated problem to see that an attack is happening when neither of those assumptions holds.
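
One way to catch a flood spread across many sources is to watch the aggregate rate for the whole API alongside the per-client limits. The sketch below is illustrative only - the ceiling is arbitrary, and a real deployment would use a shared store and a dedicated DDoS mitigation layer in front of the gateway:

```python
import time
from collections import deque

# Per-client throttling misses a flood spread across many IPs or keys, so also
# watch the aggregate request rate for the API as a whole.
class AggregateRateMonitor:
    def __init__(self, max_requests=5000, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()

    def record_and_check(self):
        now = time.time()
        self.timestamps.append(now)
        # drop requests that have fallen out of the sliding window
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        # True means the API as a whole is over its expected ceiling,
        # regardless of which IPs or client keys the traffic came from
        return len(self.timestamps) > self.max_requests
```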

Microservice attacks

Microservice architectures also bring a new problem - if many front-end solutions use the same back-end services, an attacker can call entirely different front-end APIs that all funnel into one or two back-end systems and overwhelm them.
Although, from the attacker's perspective, it is difficult to ascertain which services are vulnerable, it is equally difficult, from the provider's perspective, to ascertain that an attack is under way, given the complexity of the relationships.
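
A common mitigation is to protect the shared back-end itself with a concurrency ceiling (a "bulkhead") rather than relying solely on per-API throttles. This is a rough sketch; the limit and timeout are arbitrary example values:

```python
import threading

# Because many front-end APIs funnel into the same back-end service, cap the
# number of in-flight calls to that back-end, independent of which API they
# came through. The limit of 50 is an arbitrary example.
backend_slots = threading.BoundedSemaphore(50)

def call_shared_backend(request_fn):
    acquired = backend_slots.acquire(timeout=0.1)  # don't queue forever
    if not acquired:
        raise RuntimeError("back-end at capacity, shed load at the gateway")
    try:
        return request_fn()
    finally:
        backend_slots.release()
```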

Network attacks

Many attacks are still based on the old problems: slow posts (clients holding connections open by sending data in very slowly) and login attacks (continuously attempting to log in). These are well-known problems and should be handled using the well-known solutions.
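
As an example of one such well-known solution, here is a simple login back-off sketch; the thresholds are illustrative only, and slow posts are better dealt with by read and idle timeouts at the web server or load balancer rather than in application code:

```python
import time
from collections import defaultdict

# After too many failed logins, reject further attempts for a cooling-off
# period. Thresholds are example values only.
MAX_FAILURES = 5
LOCKOUT_SECONDS = 300

failures = defaultdict(list)  # username -> timestamps of recent failures

def login_allowed(username):
    now = time.time()
    recent = [t for t in failures[username] if now - t < LOCKOUT_SECONDS]
    failures[username] = recent
    return len(recent) < MAX_FAILURES

def record_failure(username):
    failures[username].append(time.time())
```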

Attack Mitigation

I hope I've shown that the problem of API Management security needs to be considered in much greater depth than just whether the API has a throttling policy in place. Basic network attacks, "old-fashioned" HTTP attacks, content attacks and multi-level DDoS attacks are all in play.

Given this complexity, it's clear that one technology and strategy is not going to solve the problem. All the usual network and HTTP firewalls, alongside the newer throttling policies, have to be in place. However, API Management also needs good security testing - something I suspect we're all guilty of not considering a prime place to spend money! We need to check that our APIs don't have the potential for SQL injection and that things like range checking are in place - everywhere.
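
As a sketch of what that testing might look like, the negative test below fires hostile inputs at an endpoint and insists they are rejected before reaching the back-end; the URL, endpoint and parameter names are entirely hypothetical:

```python
import requests

# Hostile inputs worth automating as negative tests.
HOSTILE_INPUTS = [
    "' OR '1'='1",             # SQL injection probe
    "A" * 100_000,             # oversized string
    {"min": 0, "max": 10**9},  # absurd range request
]

def test_hostile_inputs_are_rejected():
    for payload in HOSTILE_INPUTS:
        if isinstance(payload, dict):
            resp = requests.get("https://api.example.com/items", params=payload)
        else:
            resp = requests.get("https://api.example.com/items", params={"q": payload})
        assert 400 <= resp.status_code < 500, f"payload not rejected: {payload!r}"
```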

Once those checks and balances have been put in place, the APIs then need to be actively managed. Traffic anomalies need to be verified so that any spike can be confirmed as a good use-case rather than a bad one. These checks need to run all the way through to the back-end services to ensure that distributed attacks are not taking advantage of the complexity of the architecture.
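
A very naive version of that anomaly check might compare the current traffic count against a rolling baseline, as sketched below; the three-standard-deviation threshold is an arbitrary choice for illustration, and flagged spikes would still need a human or richer tooling to decide good use-case versus attack:

```python
from collections import deque
from statistics import mean, stdev

# Flag a traffic count that sits far above the rolling baseline so an operator
# can verify whether the spike is a legitimate use-case or an attack.
class SpikeDetector:
    def __init__(self, history_size=60):
        self.history = deque(maxlen=history_size)  # e.g. requests per minute

    def is_anomalous(self, current_count):
        if len(self.history) >= 10:  # need some baseline first
            baseline = mean(self.history)
            spread = stdev(self.history) or 1.0
            anomalous = current_count > baseline + 3 * spread
        else:
            anomalous = False
        self.history.append(current_count)
        return anomalous
```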

Conclusion

Security in the API Management era is far more than a simple throttling policy attached to the external API. The interaction of the wealth of external-facing APIs with back-end systems needs to be understood in order to cater for distributed attacks. Active API management will help us see where anomalies are occurring and where fixes may be needed. These methods, combined with good HTTP and network firewalls, should stop most attacks, or at least contain them when they do arrive.

Never scrimp on good up-front testing of the API logic. API Management focuses on the ability to expose data quickly, but with that speed come just as many security problems as we have always had. Make sure that your APIs are designed and tested with all the rigour you would expect from any of the other systems you work with, and create good security testing and monitoring solutions around them.
