Client-side user agent detection is known to be bad and discouraged in favor of feature detection. However, is it also bad to react differently based on the incoming user agent on the server side?
It depends. Using the user agent as the sole signal to branch the logic of your server-level code is dubious at best and insecure at worst, but it works well for identifying the known capabilities of particular classes of browser and serving content to match their needs when an unmodified agent string is supplied.
The scenario you've sketched out is a perfect illustration of this. Attempting to detect mobile browsers and downscale the content you send to them at the server level is entirely appropriate, because you're trying to adapt the user experience to fit their needs better (for example, by providing smaller images and better content flow to fit within the constraints of a smaller screen) while balancing those needs with the needs of your server (sending smaller images generates less load and consumes less bandwidth over the line). This strategy just needs refinement.
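To make this concrete, here's a minimal sketch of that kind of server-level branching, assuming a Flask app and a hypothetical pair of templates (index.html and index_mobile.html); the substring list is illustrative, not exhaustive:

```python
from flask import Flask, render_template, request

app = Flask(__name__)

# Crude class markers commonly found in mobile agent strings.
MOBILE_HINTS = ("iphone", "android", "blackberry", "windows phone", "mobile")

def is_mobile(user_agent: str) -> bool:
    """Rough classification: does the agent string advertise a mobile class?"""
    ua = user_agent.lower()
    return any(hint in ua for hint in MOBILE_HINTS)

@app.route("/")
def index():
    if is_mobile(request.headers.get("User-Agent", "")):
        # Smaller images and simpler flow: less load on the server,
        # less bandwidth over the line for the user.
        return render_template("index_mobile.html")
    return render_template("index.html")
```

Note that an empty or unrecognized agent string falls through to the full site, which is exactly the "default experience" behavior discussed below.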
There are a few design principles you should always follow here to ensure your practice of user agent detection isn't seen as dubious by your users:
Always provide the ability to view the full version of your site, and plan your load profile accordingly. Otherwise, you will have people attempt to circumvent this by changing their agent. (A sketch of such an override appears after this list.)
Always clearly describe the modifications you've made to your site content when you create a mobile view. This will clear up any FUD surrounding the changes you may or may not have made.
Always provide paths to the alternate versions of your site. For example, use something like http://mobile.example.org for migrating people to the mobile version, making the design-level assumption that when that address is requested, it's been explicitly asked for by your audience (the sketch after this list honors this, too).
Reward users for providing their correct agent string by offering them a better experience in terms of content and performance. Users will be happier when you've anticipated their needs and given them snappier performance on the version of the site they're browsing.
Avoid abusive and heavy-handed redirection patterns. For example, don't block users with a big honking flyout advertisement for your mobile app when you detect they're running iOS. (Admittedly, this is a pet peeve of mine.)
Never restrict access to areas of the site on a user agent basis; instead, sternly warn users about what won't work if they go off the rails, and draft your support policy around that. For example, many of us fondly remember changing our agents for sites that "work best in Internet Explorer" and disallowed all other browsers. You shouldn't become one more example of this bad practice if it can be avoided.
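Here's a sketch of the override and opt-in paths from the list above, again assuming Flask; the cookie name (prefer_full), its lifetime, and the mobile. host prefix are assumptions you'd wire into your own routing:

```python
from flask import Flask, redirect, render_template, request

app = Flask(__name__)

MOBILE_HINTS = ("iphone", "android", "blackberry", "windows phone", "mobile")

def is_mobile(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(hint in ua for hint in MOBILE_HINTS)

@app.route("/full")
def full_site():
    # An explicit "view full site" link: remember the choice in a cookie
    # and stop second-guessing the agent string from here on.
    response = redirect("/")
    response.set_cookie("prefer_full", "1", max_age=60 * 60 * 24 * 30)
    return response

@app.route("/")
def index():
    # 1. An explicit user preference always wins over detection.
    if request.cookies.get("prefer_full") == "1":
        return render_template("index.html")
    # 2. A request to mobile.example.org is an explicit ask for the
    #    mobile version; honor it regardless of the agent string.
    if request.host.startswith("mobile."):
        return render_template("index_mobile.html")
    # 3. Only then fall back to agent-based classification.
    if is_mobile(request.headers.get("User-Agent", "")):
        return render_template("index_mobile.html")
    return render_template("index.html")
```

The ordering is the design point: explicit user choices (the cookie, the mobile host) take precedence, and the agent string is consulted only as a last resort.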
In short: providing the correct user agent is a decision made by the user. You can use this to define a default experience for users who choose to run their clients plain vanilla, or whose clients don't know any better. The goal here is to reward your users for not providing a false user agent, by giving them the options they need and the experience they desire while balancing their needs with your own. Anything beyond that will cause them to balk, and as such should be considered extremely dubious.
You can certainly try to detect the browser by other means, but this is still an area of open research. Browser capabilities and fingerprints change as vendors compete on features, and attempting to play catch-up to optimize performance is, at present, often intractable.
I concur with this answer on the use of statistical analysis, so don't get me wrong here. But, as someone who actively works in this area, I can tell you there's no magic bullet that will give you perfect classification certainty. Heuristics, however, can and will help you balance load more effectively, and to that end, browser interrogation strategies can and do have use once you've clearly defined an acceptable rate for error.
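As a toy illustration of what "an acceptable rate for error" looks like in code (not a production fingerprinting scheme), here's a weighted heuristic where the signal weights and the acceptance threshold are assumptions you'd tune against your own traffic logs:

```python
from typing import Dict

# Hypothetical evidence weights per agent-string token.
SIGNALS: Dict[str, float] = {
    "iphone": 0.9,
    "android": 0.8,
    "mobile": 0.6,   # generic token: weaker evidence
    "ipad": 0.5,     # tablets are mobile-ish; weight them lower
}

def mobile_score(user_agent: str) -> float:
    """Score in [0, 1]; the strongest matching signal wins."""
    ua = user_agent.lower()
    return max((w for token, w in SIGNALS.items() if token in ua), default=0.0)

def classify(user_agent: str, threshold: float = 0.7) -> str:
    """Accept the mobile classification only above the threshold; below
    it, fall back to the default (full) experience rather than guess."""
    return "mobile" if mobile_score(user_agent) >= threshold else "full"
```

Lowering the threshold classifies more borderline agents as mobile (more false positives); raising it sends more real mobile users the full site (more false negatives). Where you set it is exactly the acceptable-error decision.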