I have some web services that I am writing and I am trying to be as RESTful as possible. I am hosting these web services using an HTTPHandler running inside of IIS/ASP.NET/S
If the query is too big to go in the URI, turn your query into a resource (like a saved search). I worked on a RESTful API for a hotel booking system; the search query had too many parameters (preferences, rooming list, etc.), so I turned it into a resource that I POST to the server. The server then replies with a URI that uniquely identifies the search, whose body is the posted query plus its results:
POST http://hotels.xyz/searches
Body: <search><query>...</query></search>

Response:
201 Created
Location: http://hotels.xyz/searches/someID
Body: <search><query>...</query><results>...</results></search>
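A minimal client-side sketch of that flow in Python, using the requests library. The endpoint comes from the example above, but the JSON body shape and field names are invented for illustration (the original example uses XML):

```python
import requests

BASE = "http://hotels.xyz"  # host from the example above; body fields are assumptions

# 1. Create the search as a resource: POST the (potentially huge) query once.
query = {
    "checkin": "2024-06-01",
    "nights": 3,
    "preferences": ["sea view", "non-smoking"],
    "rooming_list": [{"adults": 2, "children": 1}],
}
resp = requests.post(f"{BASE}/searches", json={"query": query}, timeout=10)
resp.raise_for_status()                    # expect 201 Created
search_url = resp.headers["Location"]      # e.g. http://hotels.xyz/searches/someID

# 2. The query and its results now live behind a short, stable URI that can be
#    fetched, re-fetched, bookmarked and cached with plain GETs.
results = requests.get(search_url, timeout=10).json()
print(results)
```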
Consider supporting:
- GET requests with a short query string
- POST requests with the long query in the body and an X-HTTP-Method-Override: GET header (https://en.wikipedia.org/wiki/List_of_HTTP_header_fields); see the sketch below
Be careful not to mix up "POST /orders" as a bulk creation of new orders with "POST /orders" carrying "X-HTTP-Method-Override: GET" as a search of orders.
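A rough server-side sketch of that dispatch, assuming Python/Flask and JSON bodies. Note that X-HTTP-Method-Override is a convention rather than part of HTTP itself, and the helper functions here are placeholders:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def orders_post():
    # Override header present: this POST is really a read-only search whose
    # "query string" was too long for the URL, so it travels in the body.
    if request.headers.get("X-HTTP-Method-Override", "").upper() == "GET":
        criteria = request.get_json(force=True)
        return jsonify(search_orders(criteria))     # nothing is created
    # No override header: this really is a bulk creation of new orders.
    created = create_orders(request.get_json(force=True))
    return jsonify(created), 201

def search_orders(criteria):   # placeholder for the real search
    return []

def create_orders(orders):     # placeholder for the real creation
    return orders
```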
Base64 should do it; otherwise use percent-encoding (the % sign), which is the standard.
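For illustration, both options in Python (the query value is made up):

```python
import base64
import urllib.parse

query = 'filter=<search><query>rooms with a sea view & "non-smoking"</query></search>'

# Percent-encoding: the standard way to make arbitrary characters URL-safe.
print(urllib.parse.quote(query, safe=""))

# base64url: also URL-safe, but grows the payload to roughly 4/3 of its size.
print(base64.urlsafe_b64encode(query.encode("utf-8")).decode("ascii"))
```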
There's no perfect way to do this.
The correct HTTP/REST way would be to use a GET and have all your parameters in the URL as query arguments. You've identified two practical problems with this approach: some servers mishandle certain characters in URLs, and very long query strings run into URL length limits.
Hopefully you can make the straightforward GET work in your environment. You may even want to consider refactoring your API to make the query data smaller.
But what if you can't make the GET work? You propose several alternatives. I would immediately dismiss two of them. Don't put content in the GET request body; too much software will break if you try that, and anyway it violates the very REST spirit you're trying to capture. And I wouldn't use base64 encoding. It may help you work around problem 1, your server not handling some characters in URLs right. But if applied wrong it will actually make your URLs longer, not shorter, compounding problem 2. Even if you do base64 right and include some compression it won't make URLs significantly shorter, and will make the client much more complicated.
Your most practical solution is probably option 3, an HTTP POST. This isn't RESTful; you should be using GETs for read-only queries, and you'll lose some advantages of the REST approach, such as caching of GETs. On the other hand, it will work correctly and simply with a large variety of Internet infrastructure and software libraries. You can then pass as much data as you want in the POST body, whether via multipart/form-data encoding, JSON, or XML. (I've built two major public web services using SOAP, which is just XML over POSTs. It's ugly and not RESTful, but it works reliably.)
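As a concrete sketch of option 3 with a JSON body, assuming Python's requests library and a made-up search endpoint:

```python
import requests

# Hypothetical search endpoint; the whole query travels in the POST body as JSON.
criteria = {
    "status": ["open", "backordered"],
    "created_after": "2024-01-01",
    "customer_ids": list(range(1, 500)),   # far too long for a query string
}
resp = requests.post("https://api.example.com/orders/search", json=criteria, timeout=10)
resp.raise_for_status()
orders = resp.json()
print(len(orders), "orders matched")
```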
REST is a great design paradigm, but it's a guideline. If it doesn't fit your app, don't feel you need to stick with it. HTTP is not good at passing large amounts of data to the server with a GET. If you have giant query parameters, do something else.
Roy Fielding would likely approve of using POST in this situation, but you'd have to ask him.
In general, most applications that involve user-supplied data being sent to the server are not safe. The only exception is when the information is in the form of generalized query parameters, for which there is a trade-off between GET and POST that usually involves the size of the parameter content. GET is only desirable for those cases where the parameters can be expressed as a meaningful URI.
I'd definitely have started where you started: URL shortening. I'd try to shorten the parameter names (?a=XXX;b=YYY;c=zzz); re-encode the entire query as Base64; gzip the Base64; Huffman-encode the gzip; whatever it takes. Once I got the inkling that shortening won't work for all cases (say, a dynamic filter-building system that can grow indefinitely), then you've got to admit that trying to do everything within a single request might not work...
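For what it's worth, a rough sketch of the compress-then-encode idea in Python (the query string is invented; whether the token actually comes out shorter depends on how long and how repetitive the real query is):

```python
import base64
import zlib

query = "a=XXX;b=YYY;c=ZZZ;preferences=sea+view,non-smoking;rooming=2ad1ch,2ad0ch"

# Compress first, then make it URL-safe; base64 on its own only makes things longer.
token = base64.urlsafe_b64encode(zlib.compress(query.encode("utf-8"), 9)).decode("ascii")
print(len(query), len(token))   # compare before deciding the complexity is worth it

# The server reverses the two steps.
original = zlib.decompress(base64.urlsafe_b64decode(token)).decode("utf-8")
assert original == query
```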
I'm NOT going to suggest you fire off multiple GETs with the parameters split between them and try to keep track of them across requests...
The only 'robust' method I CAN suggest is to store the requested query string with one request (a POST) and have it return a fixed-size ID (or GUID) that identifies where the request parameters live in your data store (a filterID), then make the actual GET request using the filterID token instead of the full filter query string. This allows all kinds of neat things, like caching responses based on filterID, so you could (in theory) reuse the same filters later (instead of re-entering them by hand, just save a "label" along with the filter body and pick from the last 5 filters by label), or at least keep them stored with your data so that refreshing the page doesn't re-send the entire filter request.
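A rough server-side sketch of the filterID idea, again assuming Python/Flask (the resource names and the in-memory store are made up; a real implementation would persist and expire stored filters):

```python
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
filters = {}   # in-memory stand-in for a real data store

@app.route("/filters", methods=["POST"])
def create_filter():
    # Store the big query body once and hand back a short, fixed-size token.
    filter_id = uuid.uuid4().hex
    filters[filter_id] = request.get_json(force=True)
    return jsonify({"filterID": filter_id}), 201

@app.route("/orders", methods=["GET"])
def list_orders():
    # The GET stays short and cacheable: the token stands in for the full filter.
    criteria = filters.get(request.args.get("filterID"), {})
    return jsonify(run_query(criteria))

def run_query(criteria):   # placeholder for the real lookup
    return []
```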