If one is dealing with hundreds of polylines, would there be a big performance benefit to encoding the polylines using https://developers.google.com/maps/documentation/utilities/polylinealgorithm?hl=sv-SE
It seems that this was mostly used with API v2, and that v3 already handles large numbers of polylines fairly well on its own?
I can't seem to find any benchmark comparisons.
There is a substantial gain when using google.maps.geometry.encoding.decodePath()
to decode encoded paths when adding a polyline/polygon to a Google map. I have a couple of paths with over a thousand points each; instead of looping through every point and creating a LatLng to add to a Polygon, a single decode renders visibly faster.
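For reference, here is a minimal sketch of the two approaches. It assumes the geometry library is loaded (&libraries=geometry), that map is an existing google.maps.Map, and that rawPoints and encodedPath are illustrative variables standing in for your own data:

// Manual approach: build each LatLng yourself from raw coordinate pairs.
// `rawPoints` is an illustrative array of [lat, lng] pairs.
var manualPath = [];
for (var i = 0; i < rawPoints.length; i++) {
  manualPath.push(new google.maps.LatLng(rawPoints[i][0], rawPoints[i][1]));
}

// Encoded approach: let the geometry library decode the whole path at once.
// `encodedPath` is an illustrative encoded polyline string.
var decodedPath = google.maps.geometry.encoding.decodePath(encodedPath);

// Either array of LatLngs can be handed to a Polygon (or Polyline).
var polygon = new google.maps.Polygon({
  paths: decodedPath,
  strokeWeight: 1
});
polygon.setMap(map);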
Additionally, as Salman pointed out, there can be a substantial gain from the reduced network traffic when passing paths via Ajax. Take Google's example:
Characters  0         1         2         3         4         5         6         7
            1234567890123456789012345678901234567890123456789012345678901234567890
            38.5,-120.2|40.7,-120.95|43.252,-126.453    // Polyline Decoded: 40 chars
            _p~iF~ps|U_ulLnnqC_mqNvxq`@                  // Polyline Encoded: 27 chars
With only 3 points, we've reduced the size of the data by roughly 33%.
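If you need to send paths the other way (client to server), the geometry library can also encode them before the Ajax call. A hedged sketch, assuming polyline is an existing google.maps.Polyline and the /save-path endpoint is purely illustrative:

// Encode the polyline's path into a compact string before sending it.
var encoded = google.maps.geometry.encoding.encodePath(polyline.getPath());

// Send the short encoded string instead of a long list of lat/lng pairs.
// (jQuery is assumed here only for brevity; any Ajax helper works.)
$.post('/save-path', { path: encoded });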
Not sure if there is any formal benchmark, but encoded polylines are a lot smaller than the equivalent raw lat/lng data. I've used them on occasion, especially when updating maps via Ajax.