Web-based companies have been using A/B and multivariate testing to hone their services since the early days of the internet. One of the most famous examples is Google testing 41 different shades of blue for links.
To me it’s as much a part of the web as HTML or IP addresses. Though I don’t always see it in action, I know it’s always there. I’m acutely aware that everything from the colour of a ‘buy’ button to the items on a search results page is probably the product of a multitude of tweaks and tests. In some cases, the button’s colour or those search results may even be based on details about me personally: that’s the colour other 25–34-year-old males responded best to, or other people who also searched for ‘Radiohead tour dates’ clicked on this link.
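The mechanics behind this kind of bucketing are surprisingly simple. Here’s a minimal sketch in Python of how a user might be deterministically assigned to one of those 41 shades (the function name and hashing scheme are my own illustration, not any company’s actual code):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Hash the user and experiment IDs together so each user always
    lands in the same bucket for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# e.g. something like Google's famous 41 shades of blue
shades = [f"blue_{i:02d}" for i in range(41)]
colour = assign_variant("user-123", "link-colour", shades)
```

Because the assignment is a pure function of the user and experiment IDs, the same visitor sees the same variant on every visit, which is what lets measured differences be attributed to the variant rather than to noise.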
It’s all part of the trade-off. You test and tweak and serve me different stuff, and I get a progressively better web.
Are we all just naive?
Yet I only recently realised that this awareness puts me in a privileged position. When the story about Facebook conducting ‘psychological experimentation’ hit, my Facebook news feed was flooded with comments from friends who were outraged. That surprised me, as other Facebook privacy stories, ones I have found far more concerning, have tended not to register. But this one had particularly struck a chord.
As the story continued to unfold into a major PR crisis for the social network, I started to realise that it wasn’t just my non-tech-savvy Facebook friends to whom this was a shock revelation. There were journalists, tech journalists even, to whom the concept of A/B testing appeared to be new.
What confused me, though, was why this particular story about Facebook testing had hit the headlines. To me it seemed like just another example of the kind of fiddling I’ve always understood Facebook to be doing to our newsfeeds.
The clue, I think, is in the use of the word ‘emotional’. Facebook was supposedly measuring emotional responses to being shown either more positive or more negative updates from friends in the newsfeeds of a sample of users.
(As a side note: presumably it was an algorithm determining what was ‘positive’ and what was ‘negative’, meaning sentiment analysis, which is notoriously difficult for a machine-driven algorithm to do with much accuracy.)
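To see why sentiment is so hard for machines, here’s a toy word-counting scorer in Python (purely illustrative; it has nothing to do with Facebook’s actual classifier) alongside one of the classic ways this approach fails:

```python
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def naive_sentiment(text: str) -> int:
    """Score text by counting wordlist hits: positive words add one,
    negative words subtract one. No grasp of negation, sarcasm or context."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

naive_sentiment("what a great day")  # scores +1, as hoped
naive_sentiment("not a great day")   # also scores +1: the negation is invisible
```

Real systems are far more sophisticated than this, but negation, irony and in-jokes still trip them up, which is worth bearing in mind whenever an experiment claims to have measured ‘emotion’ at scale.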
To me, Facebook’s measurement method for ‘emotional’ responses, whether users liked or commented more, indicates that it was pretty much along the same lines as any other newsfeed test. I am very dubious that that is a solid way of determining an emotional response. But by saying that was what it was looking for, Facebook had clearly crossed a line for people. It seems to represent a jump from technical testing to human testing.
Then we have last week’s OkCupid story, which has reignited the debate.
In a characteristically charismatic blog post entitled ‘We Experiment on Human Beings!’ the online dating company has explained a number of the tests it has undertaken on the site.
The post seems to be an effort to debunk some of the misunderstanding around how online businesses use testing by explaining a few of the tests OkCupid has undertaken. Instead, though, the last experiment it details has caused an upset much like the one Facebook provoked.
OkCupid uses a ‘match percentage’ to tell users how well they match each other, based on data they have supplied to the site. They say the following about the feature in the post:
It correlates with message success, conversation length, whether people actually exchange contact information, and so on. But in the back of our minds, there’s always been the possibility: maybe it works just because we tell people it does.
The site wanted to test the theory that its ‘match percentage’ could be successful due to a kind of placebo effect. They tested this theory by essentially lying to users, telling bad matches that they matched well, and vice versa.
(The finding was basically yes, there is an element of placebo, but matches work best when they genuinely do match).
Unlike Facebook’s behind-the-scenes tweaks, this one makes me stop and think twice.
The experiment is actually interesting, more from a human-behaviour perspective than from a site-performance one. But what was being toyed with here was not just algorithms and content that would have existed somewhere anyway; it was actual human connections, potential relationships.
Which raises the question…
Do we need regulation?
As anyone with the faintest understanding of either law or the web will know, regulating anything on the internet is not a simple matter. Practically any law that attempts to regulate how companies operate online runs into difficulty. Just look at net neutrality, or the EU cookie law.
Which means that even if the answer is yes, we may well need regulation, in this still relatively new age of the internet we don’t have anything like the global legal structure or resources necessary for it to be sensibly enforced.
For now, the only answer is to be more aware. Just remember: if you are using the web, you may well be playing the part of a rat in a cage.