© James Edgeworth 2018
Development Published 02 Oct 2018

POST data is no more secure than GET data

I am writing this blog entry rather reluctantly, and hopefully hardly anyone learns anything by this, but from seeing a number of projects over the years out in the wild, it seems it needs to be addressed.

You should never, EVER assume that POST data is any more secure than GET data.

Let’s start with a simple blog as an example, where users must be registered in order to add comments (heck, it doesn’t matter either way, but requiring registration heads off the inevitable thought of “just turn off guest mode”). The difference is that I am going to approach this as someone with a grudge who wants to deface the site and give the IT department that owns it a real headache.

  1. I log in with my newly registered account and find a blog entry. It doesn’t really matter which one.
  2. I see the comments form. I inspect it to see which variables it sets and which URL it submits to.
  3. I modify the values in the element inspector: a hidden field storing the ID, or the form’s action if the ID is passed via the URL.
  4. The form submits as usual, but carries a different ID than intended. My comment now appears on a different blog post.

Big deal? Ok, let’s step it up a bit. We have an ‘edit’ form for comments, and we repeat the same process. As we edit, we look for a hidden comment ID, or we find it in the GET parameters of the form action or the page itself. Either way, we just change it to any other ID. We send the data, and our edit overwrites someone else’s comment.

Now can you see the problem?

Let’s step it up again. We now write a script which takes this form data and loops through tens of thousands of IDs. We have now overwritten tens of thousands of comments across the website.
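To make the point concrete, here is a minimal sketch of such a script. The endpoint, field names, and session cookie are all hypothetical stand-ins, not a real site’s; the point is that forging the submission takes a handful of lines of standard-library code.

```python
# Hypothetical attack sketch: endpoint, field names, and cookie are
# assumptions for illustration only.
from urllib.parse import urlencode
from urllib.request import Request

ENDPOINT = "https://example.com/comments/edit"  # assumed form action

def forge_edit(comment_id: int, text: str) -> Request:
    # Exactly the fields the legitimate form sends; only the ID differs.
    body = urlencode({"comment_id": comment_id, "comment_text": text})
    return Request(ENDPOINT, data=body.encode(), method="POST",
                   headers={"Cookie": "session=attacker-session"})

# Loop over tens of thousands of IDs. Each request is indistinguishable
# from a normal form submission as far as the server can tell.
forged = [forge_edit(i, "defaced") for i in range(1, 10_001)]
```

Each of these requests would be dispatched with `urllib.request.urlopen` (omitted here, for obvious reasons); nothing about them looks unusual to the receiving server.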

Now let’s go to the extreme. Replace “blog comment” with “product form”, and “comment text” with “price”, “description”, “seller id”, etc. Hopefully now you see how this oversight can become a serious, reputation-destroying nightmare very quickly.

We are not breaking into the system to do this. We are using the forms which are provided to us for the functioning of the website, and simply manipulating a few variables. Because the developer thinks POST data is somehow “locked” to what was intended, the server accepts the changes.

One of the first rules we are taught in this industry is to never trust data from the client side. People seem to read this as GET data only, because that’s immediately changeable in the URL, but POST data only requires a couple more (easy) steps with tools that come bundled with the web browser.

To be clear, this is not a problem introduced by Ajax, as some seem to assume, but one affecting any request. It is not something which a framework’s validation library will catch, as those are tailored to validating the fields themselves: making sure an email is an email, a postcode is a postcode, a required field is filled in, and so forth. Your firewalls aren’t going to go nuts either, as they merely see an HTTP request using your own forms.

Reading data

Accessing data for reading is a related point. It is common for URLs to expose IDs, and very easy to change one to access other entities. For example, suppose we have the URL:

example.com/profile/{userId}/friends

If the authenticated user does not have permission to view the profile owner’s friends, we hide the URL as an aesthetic choice. Unfortunately this is as far as some people go. As you can probably imagine, a user simply needs to take the URL from another profile and change the ID to that of the profile they wish to see. The server side MUST check whether the authenticated user is supposed to see this page (which goes beyond the framework’s security ‘firewalls’, which are general catch-all rules on URL patterns).

Another typical case is when an entity has a “hidden” flag of sorts. The front end will not show the URL (often, the server just won’t send that entry to the front end), but the user can still change another URL and see what’s there. These situations are easily resolved with a quick sanity check in the relevant server-side script instead of blindly assuming the front end is playing nice.
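That sanity check can be a plain server-side function. The data model below (profiles as dicts with a visibility setting and a friend list) is an assumption for illustration; the shape will differ in your application, but the principle does not.

```python
# Hypothetical read-side permission check. The profile shape and the
# visibility values ("everyone"/"friends"/"private") are assumptions.
def can_view_friends(viewer_id, profile):
    # The profile owner can always see their own friends list.
    if viewer_id == profile["owner_id"]:
        return True
    # Otherwise, respect the owner's visibility setting.
    if profile["friends_visible"] == "everyone":
        return True
    if profile["friends_visible"] == "friends":
        return viewer_id in profile["friend_ids"]
    # "private", or any unexpected value: deny by default.
    return False
```

The controller behind /profile/{userId}/friends runs this before rendering anything, regardless of whether a link to the page was ever shown to the user.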

The solution

You should think of the server side as its own, completely independent part of the website. Forget the front end, even if you wrote it. You are writing a controller action to process data sent via the request, so make sure the data can only be created, updated, or deleted by the right people.

The logic is simple. You have the authenticated user. You have the comment entry, which presumably knows which user wrote it via its relations. You know the roles of the person logged in, and you know who is meant to be able to modify comments. You also know which blog entry the comment belongs to, and by extension who owns that blog entry. Your code therefore needs to ask the following questions:

  1. Does the authenticated user own the comment? If yes, allow.
  2. Does the authenticated user have admin or moderator (or any other comment-editing) role? If yes, allow.
  3. Does the authenticated user own the blog entry the comment was posted to? If yes, allow (assuming a requirement where the blog author can moderate comments on their own posts, just as an extended example).
  4. In all other circumstances, reject.
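The four questions above translate directly into code. A minimal sketch, assuming simple dict-shaped entities and role names that are stand-ins for whatever your application uses:

```python
# The four questions as plain if statements. Entity shapes and role
# names ("admin", "moderator") are assumptions for the sketch.
def can_edit_comment(user, comment, blog_entry):
    # 1. Does the authenticated user own the comment?
    if user["id"] == comment["author_id"]:
        return True
    # 2. Does the user hold a comment-editing role?
    if user["role"] in ("admin", "moderator"):
        return True
    # 3. Does the user own the blog entry the comment was posted to?
    if user["id"] == blog_entry["author_id"]:
        return True
    # 4. All other circumstances: reject.
    return False
```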

Those are simple if statements which would avoid a catastrophe. They will vary depending on the entity in question, but you get the idea.

As a best practice, this authorization check should be done within the model layer of your application, rather than the controller. That way it doesn’t matter how many fancy ways your server side accepts form data: the check is handled (and modified, if need be) from a handy method (or intricate library, your choice) close to the entity itself.

You don’t have to get too fancy with this. If the user is denied, your framework likely has a quick method of returning a 403 (Forbidden) status. Use it. The user is manipulating client-side variables, so we don’t need to hold their hand through a support system here. Well done if you have built a logging system to catch and report 403s, 404s, etc.
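Putting the two ideas together, here is a framework-agnostic sketch: the check lives on the entity itself, and the controller simply maps a refusal to a 403. The class, field names, and (status, body) return shape are all assumptions for illustration, not any particular framework’s API.

```python
# Hypothetical model-layer check: the entity refuses unauthorized
# updates, and the controller turns that refusal into a 403.
class Comment:
    def __init__(self, comment_id, author_id, text):
        self.id = comment_id
        self.author_id = author_id
        self.text = text

    def update_text(self, user, new_text):
        # The check sits next to the entity, so every entry point
        # that accepts form data passes through it.
        if user["id"] != self.author_id and user["role"] not in ("admin", "moderator"):
            raise PermissionError("not allowed to edit this comment")
        self.text = new_text

def edit_comment_action(user, comment, new_text):
    try:
        comment.update_text(user, new_text)
    except PermissionError:
        return 403, "Forbidden"  # your framework's shortcut goes here
    return 200, "OK"
```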

This is not something to think about once the functionality is built: going through an entire project and retroactively fitting these checks in is no doubt tedious to implement, and monotonous to test. The functionality isn’t “done” when you can see it working; it is “done” when it stands up to foul play, validates accurately, and allows only what it’s meant to allow.

Forms are mishandled surprisingly often, and nowhere more so than in validation. This is a bit of a side point, but relevant whilst we are here. It doesn’t matter how sophisticated your front-end validation is: the fact of the matter is, the data MUST be considered dirty until the SERVER has proved otherwise.