In a previous post I included a blurb mentioning that Verisign reported issuing 7,000 EV-SSL certificates in the past two years, which seems fairly low. Recently I was referred to the section of Verisign's website where they keep their SSL case studies. Many of the more recently posted studies claim a positive benefit/increase in conversions brought about by using EV-SSL certificates. In other words, switching from standard SSL to EV-SSL certs led to more sales, all because of customer confidence in seeing the 'green bar'.
For some reason, a mental warning alarm was going off in the back of my head. So I decided to dig deeper into the case studies to see how they actually tested things. After all, a flawed test methodology yields questionable results, and I'm a bit of a stickler when it comes to having a solid, repeatable methodology for testing stuff.
In an ideal world, these case studies would have been conducted by subjecting half of their web clients to a standard SSL certificate, and the other half to an EV-SSL certificate. Easier said than done, but I suppose it's possible if you had a web server farm where half of the servers had the EV-SSL cert loaded and the other half didn't, and you used a network load balancer in front of everything to persistently keep the same client/source IP address mapped to the same server in the farm. You couldn't use an application-level load balancer, because that would require terminating the SSL to see the application-layer data, and you wouldn't know whether to terminate a given client with the EV-SSL or the standard SSL cert. A completely wrong methodology would be to see how many sales you got using the standard SSL certificate, then see how many you got after you upgraded to EV-SSL, and attribute the change to just the EV-SSL certificate. There are too many other factors that could affect a change in sales, such as a slow-down in the economy, a wave of released stimulus checks, a newly-launched advertising campaign, etc.
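To make that "ideal" split concrete, here is a minimal sketch of the kind of deterministic routing a network load balancer would need: each source IP is hashed into one of two pools, so the same visitor always lands on the same certificate type for the duration of the test. The hostnames and pool sizes are purely illustrative, not anything from the case studies.

```python
import hashlib

# Hypothetical server pools: one group serving the standard SSL cert,
# the other serving the EV-SSL cert (names are illustrative only).
STANDARD_POOL = ["web1.example.com", "web2.example.com"]
EV_POOL = ["web3.example.com", "web4.example.com"]

def pick_backend(client_ip: str) -> str:
    """Deterministically map a client IP to a pool, so the same
    visitor always sees the same certificate type."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    pool = EV_POOL if digest[0] % 2 == 0 else STANDARD_POOL
    # Spread load within the chosen pool, still deterministically.
    return pool[digest[1] % len(pool)]

# The same source IP always maps to the same server:
assert pick_backend("203.0.113.7") == pick_backend("203.0.113.7")
```

The point of hashing the IP (rather than round-robin) is persistence: a visitor who sees the green bar on one page load shouldn't lose it on the next.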
So how did they test? In the few case studies I read, they simply separated traffic into EV-SSL capable browsers (Internet Explorer 7, Firefox 3) vs. non-EV-SSL capable browsers (Internet Explorer 6, Firefox 2). The idea is that the non-EV-SSL browsers were not showing the green bar, while the EV-SSL browsers were. Then they measured the conversion rate of these two groups, and of course, the EV-SSL browser group had higher conversions. So it must have been the green bar. Or was it?
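In log-analysis terms, the split the case studies describe amounts to bucketing visitors by user-agent string, roughly like the sketch below. The substring checks are a simplification I'm assuming for illustration; real user-agent parsing is messier.

```python
def is_ev_capable(user_agent: str) -> bool:
    """Rough bucketing of browsers that render the EV green bar
    (IE7, Firefox 3) vs. those that don't (IE6, Firefox 2).
    Simplified substring checks, for illustration only."""
    ua = user_agent.lower()
    if "msie 7" in ua or "msie 8" in ua:
        return True
    if "firefox/3" in ua:
        return True
    return False

assert is_ev_capable("Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)")
assert not is_ev_capable("Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)")
```

Notice that nothing in this bucketing is random: which group a visitor falls into is entirely determined by which browser they happen to run, which is exactly where the bias creeps in.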
The core problem with this approach is that the population representation is skewed. The methodology assumes that users of IE7 are no different than users of IE6, which is not the case. The web development community is already largely aware that a significant portion of the remaining IE6 hold-outs (i.e. those who have not upgraded to IE7) are actually corporate users running on systems that are not allowed to be upgraded. Think about it: a notable number of home users will leverage Windows' standard automatic update feature, and that would have updated them to IE7 long ago. Corporate environments, on the other hand, control the updates and look to have consistent version deployments across desktops. Further, they have a significant investment in web applications tailored to work with IE6 (which has been out for 7 years now...so that's 7 years of web apps that were made just for it); if the choice is to update all internal web apps to work with IE7, or just keep the corporate desktops at IE6, well, the fiscal choice is obvious.
Thus let's frame this scenario a bit differently: given visitors to a non-business-centric retail shopping web site, who is more likely to buy something...people shopping from their home computer, or people shopping from their work computer? Would you normally be tempted to buy a new pair of shoes online using your corporate workstation on your lunch break? Would you perhaps search a little to figure out what you wanted (which seems like an innocent-enough use of the corporate network), and then finish the transaction (i.e. officially make the purchase) later that evening when you got home?
Therefore my biggest complaint about the EV-SSL testing methodology these case studies use is that the non-EV-SSL browser group (particularly IE6 users) will statistically have more coming-from-their-corporate-workstation users in it, and I'm not convinced that such users behave identically to other users with regard to their immediate willingness to make personal retail or pharmaceutical purchases. Personally, I would expect those corporate users to be less likely to produce an immediate retail conversion. And that's what the EV-SSL case studies all seem to say/support...but they attribute that phenomenon to the presence or absence of the EV-SSL certificate.
So does that mean EV-SSL is responsible for the higher conversion rate? Not necessarily. Does it mean it's not responsible? Well, not necessarily. To know for sure, all other variables must be approximately equal...and in these situations, there are too many differing factors between the two groups to know in particular which factor is having the most impact. To help put things in perspective, it would be nice to see the same metrics (i.e. conversion rates for IE7 vs IE6 users) reported for sizable retail sites that are not using EV-SSL certificates. I have a hunch that, even without EV-SSL, those sites will still see slightly higher conversion rates for IE7 users compared to IE6 users.
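The comparison I'm asking for is just simple arithmetic over two visitor groups. Here's a sketch with entirely made-up numbers (not from any case study) to show what the control measurement would look like on a site that has no EV-SSL certificate at all:

```python
def conversion_rate(purchases: int, visits: int) -> float:
    """Fraction of visits that ended in a purchase."""
    return purchases / visits

# Entirely made-up numbers, purely to illustrate the comparison:
ie7_rate = conversion_rate(purchases=240, visits=10_000)  # 2.4%
ie6_rate = conversion_rate(purchases=180, visits=10_000)  # 1.8%

# If a non-EV-SSL site shows a similar gap, the green bar
# can't be the whole explanation for IE7's higher conversions.
print(f"IE7: {ie7_rate:.1%}, IE6: {ie6_rate:.1%}")
```

If sites without EV-SSL show roughly the same IE7-over-IE6 gap, that gap is a property of the populations, not of the certificate.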
Until next time,