Clay Shirky has a collection of (regularly updated) writing about the economics and culture of media, community and open source on his site (see recommended blogs/links).
One of the documents you’ll find there captures a talk he gave in June 2003 on designing social software – and particularly on dealing with the dangers inherent in large groups forming social networks.
It might sound like old news, but with the growing excitement about a 3G revenue revolution driven by social networking on the mobile internet, it's worth considering once again (and it's new to me, anyway!).
Read it in full if you can. If you can't, here are a few (very) edited highlights:
Three things you have to accept
1.) You cannot completely separate technical and social issues… the group is real. It will exhibit emergent effects. It can't be ignored, and it can't be programmed, which means you have an ongoing issue. And the best pattern, or at least the pattern that's worked the most often, is to put into the hands of the group itself the responsibility for defining what value is, and defending that value, rather than trying to ascribe those things in the software upfront.
2.) Members are different from users. A pattern will arise in which there is some group of users that cares more than average about the integrity and success of the group as a whole. And that becomes your core group, Art Kleiner's phrase for "the group within the group that matters most." In all successful online communities that I've looked at, a core group arises that cares about and gardens effectively. If the software doesn't allow the core group to express itself, it will invent new ways of doing so.
3.) The core group has rights that trump individual rights in some situations. This pulls against the libertarian view that's quite common on the network, and it absolutely pulls against the one person/one vote notion. The people who want to have the discussions are the people who matter. And absolute citizenship, with the idea that if you can log in, you are a citizen, is a harmful pattern, because it is the tyranny of the majority. The core group needs ways to defend itself so that it can stay on its sophisticated goals and away from its basic instincts.
All groups of any integrity have a constitution. The constitution is always partly formal and partly informal. At the very least, the formal part is what's substantiated in code -- "the software works this way."
The informal part is the sense of "how we do it around here." And no matter how much is substantiated in code or written in a charter, there will always be an informal part as well. You can't separate the two.
Four Things to Design For
1.) 'Handles' (IDs) the user can invest in.
Anonymity doesn't work well in group settings, because "who said what when" is the minimum requirement for having a conversation. Weak pseudonymity doesn't work well either, because I need to associate who's saying something to me now with previous conversations.
The world's best reputation management system is right here, in the brain. And actually, it's right here, in the back, in the emotional part of the brain. If you want a good reputation system, just let me remember who you are. That requires nothing more than simple and somewhat persistent handles.
Users have to be able to identify themselves and there has to be a penalty for switching handles.
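As a minimal sketch (mine, not Shirky's) of what "a penalty for switching handles" could look like in code, assume trust accrues only from a handle's age and posting history – so abandoning a handle resets both to zero:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Handle:
    """A persistent pseudonym; trust accrues only over time and activity."""
    name: str
    created: datetime
    posts: int = 0

    def trust(self, now: datetime) -> float:
        # Hypothetical trust score: handle age in days plus posting history.
        # Switching handles resets both, which is the penalty.
        age_days = (now - self.created).days
        return age_days + 0.5 * self.posts

now = datetime(2006, 1, 1)
old_timer = Handle("alice", created=now - timedelta(days=200), posts=120)
fresh = Handle("alice2", created=now)
assert old_timer.trust(now) > fresh.trust(now)
```

The exact formula is an arbitrary assumption; the point is only that trust is a function of persistent history, so it cannot be carried over to a fresh handle.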
2.) You have to design a way for there to be members in good standing. The minimal way is that posts appear with an identity. You can do more sophisticated things like formal karma or "member since."
3.) You need barriers to participation.
It has to be hard to do at least some things on the system for some users, or the core group will not have the tools that they need to defend themselves.
Now, this pulls against the cardinal virtue of ease of use. But ease of use is wrong. The user of social software is the group, not the individual.
The user of social software is the group, and ease of use should be for the group. If the ease of use is only calculated from the user's point of view, it will be difficult to defend the group from the "group is its own worst enemy" style attacks from within.
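One hedged way to picture how "members in good standing" and "barriers to participation" fit together is a privilege ladder, where actions unlock only as standing grows. The action names and karma thresholds below are illustrative assumptions, not from the talk:

```python
# Hypothetical privilege ladder: powerful actions sit behind higher barriers,
# so the core group's tools aren't available to every fresh login.
PRIVILEGES = {
    "post": 0,        # anyone identified can post
    "flag": 50,       # flagging bad content needs some standing
    "moderate": 500,  # core-group tools are the hardest to reach
}

def allowed(karma: int, action: str) -> bool:
    """Return True if a member's standing clears the bar for an action."""
    return karma >= PRIVILEGES[action]

assert allowed(10, "post")
assert not allowed(10, "moderate")
```

The design choice this encodes is Shirky's: ease of use is calculated for the group, so some things are deliberately hard for some users.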
4.) You have to find a way to spare the group from scale.
Scale alone kills conversations, because conversation requires dense two-way communication. You have to have some way to let users hang onto the less-is-more pattern, in order to stay associated with one another.
This doesn't mean the scale of the whole system can't grow. But you can't try to make the system large by taking individual conversations and blowing them up like a balloon; human interaction, many to many interaction, doesn't blow up like a balloon. It either dissipates, or turns into broadcast, or collapses. So plan for dealing with scale in advance, because it's going to happen anyway.
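One illustrative way to plan for scale without blowing up individual conversations is to cap the size of any single conversation and spawn a sibling once it fills. The cap and the participant model below are my assumptions for the sketch, not Shirky's:

```python
# Minimal sketch: the system grows by adding small groups,
# never by inflating one conversation past the point of dense interaction.
MAX_PARTICIPANTS = 150  # illustrative cap, chosen for the example

def place(conversations: list, user: str) -> list:
    """Add a user to the first conversation with room; else start a new one."""
    for convo in conversations:
        if len(convo) < MAX_PARTICIPANTS:
            convo.add(user)
            return conversations
    conversations.append({user})  # all full: spawn a new small group
    return conversations
```

So the whole system scales by multiplying conversations rather than enlarging them, which is one concrete reading of "spare the group from scale."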
The people using your software, even if you own it and pay for it, have rights and will behave as if they have rights. And if you abrogate those rights, you'll hear about it very quickly.