
Security vs. Accessibility

September 27, 2007

An interesting topic, once again inspired by the folks at TWiT.  The podcasts on that site are always right on top of whatever might be the latest issue in tech, and are invariably on target when it comes to their take on whatever’s going on.  In the case of the show Security Now, the obvious reason for this quality is the presence of Steve Gibson, who seems to be just about the most knowledgeable security guy on the planet (he runs grc.com, and you should know about that place if you have a connection to the Internet).

Anyway, on the latest show there was a very interesting discussion about the difficulty of taking care of both accessibility and security, something that folks in libraries have to deal with constantly.  In this case, it seems that any kind of interactive form on the web is one place where these two concepts collide.  The problem is that screen readers developed for accessibility purposes (essentially so that people who can’t see a screen can still use the Internet) use pretty much the same technology as robots developed to go out, read screens, and then send you spam (or phishing schemes, or whatever).  These robots scour the ’Net, and any time they encounter a form, they automatically fill it out with whatever info they’d like to get through to someone … so, if that form happens to be a blog comment submission form, the robot submits the content it wants posted to the blog comments.  If it gets through, that content goes up on the web and helps to boost the visibility of whatever it links to in search engines … do that a million times and suddenly the web seems to be full of info about your product (just one example).  This is precisely why captchas exist: robots, hopefully, can’t read the squiggly words in the box, so only humans who can read them can submit.
Unfortunately, since those squiggly-word captchas defeat not only bots but also the screen readers that some people rely on for accessibility, audio captchas become necessary, and the whole thing becomes more than most sites can handle.  So sites that require forms (and in reality, huge numbers of sites require forms) often do things like labeling fields with images instead of text, or adding invisible fields that bots see (and, of course, screen readers do as well) but sighted users don’t … any submission that fills in those fields gets nabbed and tossed out.  On top of all of this, captchas are being beaten about as quickly as they are created (there is now freeware that will quite reliably answer those “type the squiggly letters in the box above” challenges on Yahoo for you), and the letters are becoming so squiggly that no human, bot, or screen reader can read them.  Tricky stuff … the security people mean well, and their goal is to create secure sites that don’t get flooded with spam, but someone really has to come up with a foolproof way to tell humans from robots on the web … unfortunately, Alan Turing is dead.
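For the curious, the “invisible field” trick described above can be sketched in a few lines of server-side code.  This is just a minimal illustration, not a recipe from the show; the field name and function are made up for the example.  The idea is that the form includes a field hidden via CSS, so a human in a normal browser never fills it in, while a bot that auto-fills every field it finds gives itself away:

```python
# A minimal sketch of the honeypot ("invisible field") technique.
# The field name is hypothetical; any CSS-hidden field works the same way.

HONEYPOT_FIELD = "website_url"  # hidden from sighted users via CSS


def is_probably_bot(form_data):
    """Return True if the submission filled in the hidden honeypot field.

    Spam bots tend to fill out every field they encounter, so any value
    here flags the submission.  Note the accessibility catch from the
    post: screen readers still announce the field, so it needs a label
    like "leave this blank" to avoid trapping real users.
    """
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())


# Example submissions:
human = {"name": "Alice", "comment": "Nice post!", "website_url": ""}
bot = {"name": "x", "comment": "Buy my product", "website_url": "http://spam.example"}
```

As the post notes, this only shifts the problem: a screen reader “sees” the hidden field just like a bot does, which is exactly the security-versus-accessibility collision being discussed.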

2 comments

  1. what an unexpected ending 😐


  2. Yes, I didn’t expect it either … but nobody seems to be coming up with a good way to tell humans from machines. The last person that did was the late Dr. Turing, so I thought of him. I’m thinking of asking some philosophers for the answer.


