UA gotta be kidding
Brian recounts the sordid, messy history of user-agent strings.
I remember somebody once describing a user-agent string as “a reverse-chronological history of web browsers.”
Excellent news! All the major browsers have agreed to freeze their user-agent strings, effectively making them a relic (which they kinda always were).
For many (most?) uses of UA sniffing today, a better tool for the job would be to use feature detection.
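To make that concrete, here's a minimal sketch of feature detection (the geolocation API is chosen purely as an illustrative example): test for the capability itself instead of inferring it from the user-agent string.

```typescript
// Feature detection: probe for the API before using it.
if ("geolocation" in navigator) {
  navigator.geolocation.getCurrentPosition((position) => {
    console.log(position.coords.latitude, position.coords.longitude);
  });
} else {
  // Degrade gracefully instead of guessing from the UA string.
  console.log("No geolocation API; falling back to manual entry.");
}
```

The same test keeps working when a brand-new browser ships with the feature, which is exactly where UA sniffing falls down.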
Even using a strict cookie policy won’t help when Facebook and Google are using TLS to fingerprint users. Time to get more paranoid:
HTTPS session identifiers can be disabled manually in Mozilla products by setting ‘security.ssl.disable_session_identifiers’ in about:config.
I love Lyza’s comment on the par-for-the-course user-agent string of Microsoft’s brand new Spartan browser:
There must be an entire field emerging: UA archaeologist and lore historian. It’s starting to read like the “begats” in the Bible. All browsers must connect their lineage to Konqueror or face a lack-of-legitimacy crisis!
This is fascinating—it looks like there might be an entirely practical reason for Microsoft to skip having a version 9 of Windows …and it’s down to crappy pattern-matching code that’s supposed to target Windows 95 and 98.
This is exactly like the crappy user-agent sniffing that forced browsers to lie in their user-agent strings.
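The check being blamed is reportedly of this shape; what follows is a hedged reconstruction, not verbatim code from any shipping product:

```typescript
// Hypothetical legacy check of the kind reportedly at fault.
// osName stands in for however the code obtained the OS string.
const osName = "Windows 98";
if (osName.startsWith("Windows 9")) {
  // Intended to match only "Windows 95" and "Windows 98"...
  // ...but a real "Windows 9" would match too, which is one
  // plausible reason to skip straight to Windows 10.
  console.log("Applying Windows 95/98 workaround");
}
```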
One more reason why you should never sniff user-agent strings: Internet Explorer is going to lie some more. Can’t really blame them though—if developers didn’t insist on making spurious conclusions based on information in the user-agent string, then browsers wouldn’t have to lie.
Oh, and Internet Explorer is going to parse -webkit prefixed styles. Again, if developers hadn’t abused vendor prefixes, we wouldn’t be in this mess.
I got excited when Tim Brown announced this at An Event Apart today: a small JavaScript tool for detecting what kind of rasterising and anti-aliasing a browser is using, and adding the appropriate classes to the root element (in much the same way that Web Font Loader does).
Alas, it turns out that it’s reliant on user-agent string sniffing. I guess that’s to be expected: this isn’t something that can be detected directly. Still, it feels a little fragile: whenever you use any user-agent sniffing tool you are entering an arms race that requires you to keep your code constantly updated.
A nice bit of sleuthing to trace the provenance of one piece of ill-advised user-agent sniffing JavaScript code.
Good luck getting that script updated for the thousands of sites and applications, you say to yourself, where it’s lying dormant just waiting to send devices the wrong content based on a UA substring.
And this is why user-agent sniffing is not a future-friendly technique. A new mobile browser comes along, and it has to spoof a fake UA string to all of these sites.
It’s a Red Queen arms race.
Here’s an approach to responsive images in the Expression Engine CMS …but I fundamentally disagree with the UA-sniffing required.
This script adds user-agent information in class names to the document’s root element so that those user agents can be targeted with CSS. It could be useful, but only in cases of direst need.
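In outline, such a script works like this sketch (the substring checks and class names here are illustrative assumptions, not the script’s actual output):

```typescript
// Sniff navigator.userAgent and expose the result as class names on
// the root element so stylesheets can target specific browsers.
const ua = navigator.userAgent.toLowerCase();
const classes: string[] = [];
if (ua.includes("firefox")) classes.push("ua-firefox");
if (ua.includes("trident") || ua.includes("msie")) classes.push("ua-ie");
document.documentElement.classList.add(...classes);
```

Note the fragility: any new, renamed, or spoofing browser silently falls through every check, which is why this belongs firmly in the “direst need” category.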
Social networking for dogs through RFID. Spimy animals FTW!
This is probably the worst possible way to deal with browser inconsistencies. Doesn’t anyone else remember forking their JavaScript in the 90s? Don’t try this at home.