From e753506431e9c6b649a16225d7309bab89e4349d Mon Sep 17 00:00:00 2001
From: Sunpoet Po-Chuan Hsieh
Date: Sat, 12 Nov 2011 16:45:51 +0000
Subject: - Add p5-Parse-HTTP-UserAgent 0.32

Parse::HTTP::UserAgent implements a rules-based parser and tries to identify
MSIE, FireFox, Opera, Safari & Chrome first. It then tries to identify Mozilla,
Netscape, Robots and the rest will be tried with a generic parser. There is
also a structure dumper, useful for debugging.

User agent strings are a complete mess since there is no standard format for
them. They can be in various formats and can include more or less information
depending on the vendor's (or the user's) choice. Also, it is not dependable
since it is some arbitrary identification string. Any user agent can fake
another. So, why deal with such a useless mess? You may want to see the choice
of your visitors and can get some reliable data (even if some are fake) and
generate some nice charts out of them or just want to send an HttpOnly cookie
if the user agent seems to support it (and send a normal one if this is not
the case). However, browser sniffing for client-side coding is considered a
bad habit.

WWW: http://search.cpan.org/dist/Parse-HTTP-UserAgent/

Feature safe: yes
---
 www/p5-Parse-HTTP-UserAgent/pkg-descr | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 www/p5-Parse-HTTP-UserAgent/pkg-descr

diff --git a/www/p5-Parse-HTTP-UserAgent/pkg-descr b/www/p5-Parse-HTTP-UserAgent/pkg-descr
new file mode 100644
index 000000000000..d9cd06be4326
--- /dev/null
+++ b/www/p5-Parse-HTTP-UserAgent/pkg-descr
@@ -0,0 +1,17 @@
+Parse::HTTP::UserAgent implements a rules-based parser and tries to identify
+MSIE, FireFox, Opera, Safari & Chrome first. It then tries to identify Mozilla,
+Netscape, Robots and the rest will be tried with a generic parser. There is also
+a structure dumper, useful for debugging.
+
+User agent strings are a complete mess since there is no standard format for
+them. They can be in various formats and can include more or less information
+depending on the vendor's (or the user's) choice. Also, it is not dependable
+since it is some arbitrary identification string. Any user agent can fake
+another. So, why deal with such a useless mess? You may want to see the choice
+of your visitors and can get some reliable data (even if some are fake) and
+generate some nice charts out of them or just want to send an HttpOnly cookie if
+the user agent seems to support it (and send a normal one if this is not the
+case). However, browser sniffing for client-side coding is considered a bad
+habit.
+
+WWW: http://search.cpan.org/dist/Parse-HTTP-UserAgent/
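For reference, a minimal sketch of how the module described above is typically used. This is not part of the commit; it assumes Parse::HTTP::UserAgent is installed from CPAN, and the accessor names (`name`, `version`, `dumper`) follow the module's documented API for the parser and the structure dumper mentioned in the description:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Requires the Parse::HTTP::UserAgent distribution from CPAN.
use Parse::HTTP::UserAgent;

# An arbitrary example user agent string, invented for illustration.
my $string = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';

my $ua = Parse::HTTP::UserAgent->new($string);

printf "Browser: %s\n", $ua->name;     # browser family identified by the rules
printf "Version: %s\n", $ua->version;  # parsed version, if one was detected
print  $ua->dumper;                    # the structure dumper, for debugging
```

Since any user agent can fake another, output from such a parse is best treated as an indication rather than a guarantee, as the description itself cautions.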