How to design large complex online communities using social science

Sorry if I jump around a bit in this blog post, but by reading these points and listening to the video you’ll have a better idea of how social science can help you design a successful community, using a specific kind of moderation approach. Or at least you’ll be able to use the difference between a theory-driven and a design-driven approach to community building to respond better to new customer needs.

OK, I am paraphrasing here, so bear with me as I take notes from Robert Kraut’s Stanford presentation above. My aim is to show how social science can inform good online community design. The first point Kraut makes that I want to highlight is that real community design is “highly multidimensional”. This is at odds with the logic of social science, which seeks to understand the effect of one variable at a time, with all other variables held constant, in order to discover causality. OK, so that’s some of the fundamentals sorted. Skip to this section of the video to hear the explanation.

This social science approach is at odds with (online community) design, where you are trying to figure out the configuration of all possible variables that produces the effect you want. Kraut says that with design you don’t want one variable at a time; you want ‘kitchen sink’ experiments – theory-based experiments that you can try out in a relatively cheap way.

So instead they use agent-based modelling, which allows theories to be tested as models in a simulated community environment: changes in member behaviour change the environment, which in turn changes behaviour (see 1:12:56). In this model the ‘Identity Benefit’ is greater when an agent’s interests are similar to the group’s interests:

Here’s how to simply capture that ‘Identity Benefit’:
identity benefit = (number of viewed messages that match the agent’s interests) / (number of viewed messages)

In comparison, the other principal type of community benefit to members that Kraut identifies, the ‘Bond-based Benefit’, is greater when there is repeated interaction. Kind of obvious, I guess, but this is social science, so still worth stating!
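As a rough sketch of how these two benefits might be computed (the function names and the log-based bond proxy are my own illustration, not Kraut’s formulation):

```python
import math

def identity_benefit(viewed, interests):
    """Share of viewed messages that match the agent's interests,
    i.e. (# matching viewed messages) / (# viewed messages)."""
    if not viewed:
        return 0.0
    return sum(topic in interests for topic in viewed) / len(viewed)

def bond_benefit(interaction_counts):
    """Toy proxy for the bond-based benefit: grows with repeated
    interaction with the same members, with diminishing returns."""
    return sum(math.log1p(n) for n in interaction_counts.values())

# An agent interested in python and data views four messages:
print(identity_benefit(["python", "sports", "data", "python"], {"python", "data"}))  # 0.75
print(bond_benefit({"alice": 3, "bob": 1}))
```

The diminishing-returns shape is just one plausible assumption; the point is that the bond-based benefit depends on interaction history rather than on content match.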

Agent-based modelling and simulated community results

And from the simulated communities, what Kraut found is that the simulated agent models (taking the place of community members) produced results very similar to those observed in real Usenet groups.
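As a deliberately simplified sketch of the kind of feedback loop such a model captures (this is my own toy version, not Kraut’s actual simulation): agents whose identity benefit no longer covers the cost of participating drop out, which changes the message pool for everyone who remains.

```python
def identity_benefit(viewed, interests):
    """Share of viewed messages that match the agent's interests."""
    return sum(t in interests for t in viewed) / len(viewed) if viewed else 0.0

def simulate(agents, rounds=10, cost=0.3):
    """Toy loop: each round every active agent posts on its interests,
    reads the whole pool, and leaves if identity benefit < participation cost.
    Departures shrink the pool, which changes everyone else's benefit."""
    active = set(agents)
    for _ in range(rounds):
        pool = [topic for a in active for topic in agents[a]]
        active = {a for a in active if identity_benefit(pool, agents[a]) >= cost}
    return active

# A cohesive python/data group plus one outlier interested in gardening:
agents = {"a1": {"python"}, "a2": {"python", "data"}, "a3": {"data"}, "a4": {"gardening"}}
print(sorted(simulate(agents)))  # the outlier drifts away: ['a1', 'a2', 'a3']
```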

So the next step is that if we have a working agent model that shows how a community works, we can test out different types of moderation technique in this simulated community.

From this Kraut found that ‘Personalised moderation’ outperforms ‘Community-level moderation’, though the difference really only matters when dealing with a large volume of content, or diverse content. In other words, ‘Personalised moderation’ works well with large, complex communities.
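A minimal sketch of the distinction (my own illustration; real platforms are far more sophisticated): community-level moderation applies one filter for everybody, while personalised moderation filters the same stream against each member’s own interests, which is where it pays off as volume and diversity grow.

```python
def community_moderation(messages, allowed_topics):
    """One global filter: every member sees the same surviving messages."""
    return [m for m in messages if m["topic"] in allowed_topics]

def personalised_moderation(messages, member_interests):
    """Per-member filter: each member sees only messages matching their interests."""
    return {member: [m for m in messages if m["topic"] in interests]
            for member, interests in member_interests.items()}

messages = [{"topic": "python"}, {"topic": "sports"}, {"topic": "data"}]
print(community_moderation(messages, {"python", "data"}))
print(personalised_moderation(messages, {"ann": {"sports"}, "ben": {"python", "data"}}))
```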


And as an example, this personalised moderation functionality appears to be available in community platform Telligent’s latest version of their analytics, which sounds useful. It would be good to know which other major community platforms, like Lithium, offer such functionality, and how well it really works day-to-day:

Your community can now offer its participants dynamic and personalized recommendations of both people and content. Telligent Analytics looks at your community’s data, compares it with each member’s unique interests, and then delivers personalized recommendations to that member. Telligent Analytics doesn’t just tell you how your community’s doing; it applies the analytics to improve your community members’ experience.

So if you want to see this study applied in more practical detail, here’s Robert Kraut’s paper (pdf) with the graphs and stats:

A Simulation for Designing Online Community: Member Motivation, Contribution, and Discussion Moderation – (pdf: 10.1.1.141.6657)

Or maybe you’d like to read the chapters of Kraut’s 2012 book Building Successful Online Communities: Evidence-Based Social Design:

  • Resnick, P. & Kraut, R. E. Introduction [PDF]
  • Kraut, R. E. & Resnick, P. Encouraging contributions to online communities [PDF]
  • Ren, Y., Kraut, R. E. & Kiesler, S. Encouraging commitment in online communities [PDF]
  • Kraut, R. E., Burke, M. & Riedl, J. Dealing with newcomers [PDF]
  • Kiesler, S., Kittur, A., Kraut, R. E. & Resnick, P. Regulating behavior in online communities [PDF]
  • Resnick, P., Konstan, J. & Chen, Y. Starting a community [PDF]

Getting your brand’s tone of voice right, and making it pay

I like the guide to creating an authentic tone of voice from Rosie Siman on the 360i blog, as it makes the point that consumers find it pretty intuitive to shift how they interact online depending on the context, while brands find it not so easy, shall we say:

In today’s new media landscape, consumers manage a distributed digital identity – one that changes depending on platform, audience and even interest group.

Surprisingly, shifting among these nuanced states isn’t such a feat. It feels natural, even intuitive.

But when brands attempt to do the same, the results can feel schizophrenic and confused.

How to Develop Your Brand’s Social Tone of Voice

Of course, on another level this is part of a wider issue of how to relate and connect with your customers, which means listening to them and understanding what they enjoy. It’s back to that point that we are taught to think first and feel second, which is fine until you realise how, by and large, this splits the behaviour of customers from that of brands:

We live in a world where we are taught from the start that we are thinking creatures that feel. The truth is, we are feeling creatures that think.

In turn online consumers “tend to ignore most information available and instead ‘slice off’ a few relevant information or behavioral cues that are often social to make intuitive decisions,” as Brian Solis puts it in ‘The 6 Pillars of Social Commerce: Understanding the psychology of engagement’.

Nowhere does this distinction show up more clearly online than when a brand comes across as self-controlled and artificial. So loosen up and inject some real emotion – and then make sure you track the results in your metrics.

It may also help to research tone of voice using a social sentiment package like Radian6 to surface the keywords, and to get an idea of the split between positive and negative sentiment.
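The headline number is simple to compute once a tool has labelled the mentions (the labels below are hypothetical, standing in for whatever your sentiment package returns):

```python
def sentiment_split(labels):
    """Percentage of positive vs negative mentions, ignoring neutral ones."""
    pos = labels.count("positive")
    neg = labels.count("negative")
    total = pos + neg
    if total == 0:
        return 0.0, 0.0
    return 100 * pos / total, 100 * neg / total

print(sentiment_split(["positive", "negative", "positive", "neutral"]))
```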

Who knows, getting your tone of voice right might even shift the sentiment around your brand, which in turn impacts on conversion (measure, measure, measure). It may start off as ‘just an idea’ but if you can link the tone of voice change to the metrics which connect to the bottom line then you’re onto a winner.

Of course, it helps if you have a budget. When I was at Sony we used Netway to carry out MRI-based behavioural research to show the differing impact of email marketing methodologies on consumer responses. Here’s a little taster of their science-based approach. I also like their open-source-style policy of allowing you to disseminate results, subject to attribution: