More thoughts on collaboration and knowledge management

I wrote a post Tuesday about a new collaborative called The Climate Desk that is attracting a lot of attention in journalism circles.

Ad Age hailed it as the "revolutionary" future of journalism. The CJR questioned whether it would work.

I believe that yes, it is, and yes, it will -- but there are still some rough edges that need to be worked out.

Based on my current impressions of The Climate Desk, collaboration primarily takes place at two points in the editorial process:

  1. Brainstorming
  2. Distribution

That makes sense. Those are the easiest two points at which collaboration is possible. But they aren't the most important points. What about everything in between? Sharing sources, sharing data, reporting together, editing together.

If the collaborative model is going to scale for newsorgs, we need better tools for storing and sharing data.

If I work at newspaper x and I want to work with newspapers y and z on climate change, how would I go about sharing the data I've already collected?

If I wanted to find all the data about climate change based on coverage my newsorg has already done, the process would look like this:

  • Do a Google site search of "climate change" at [mynewsorgsdomain].com
  • Find the dates those articles about climate change were published
  • Go back through some date-structured folder system on my newsorg's server to find the contact sheets, notes and drafts of those articles
  • Email those files to the other newsorgs collaborating to report on climate change
  • Everyone shares their contacts, and someone puts together a Google Spreadsheet to combine the data we've found into something functional -- an overall picture of sorts

This process isn't ideal for finding and sharing data because it doesn't structure that data in a way that would be more usable the next time the newsorg wants to collaborate around climate change. If we collaborate again in six months, I'd have to go back to that spreadsheet, copy the data that is useful for me, then start a new spreadsheet titled "Climate Change Resources Oct. 2010."  It'd be redundant and inefficient.
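
To make that concrete, here's a rough sketch in Python of the kind of shared, topic-tagged store I have in mind (the contacts, file names and fields are all made up, just to show the shape): tag each source or dataset once, and the next collaboration filters the same pile instead of starting a fresh spreadsheet.

    import json
    from datetime import date

    # A shared, topic-tagged pile of reporting resources. Everything below is
    # hypothetical -- the point is the structure, not the contents.
    resources = [
        {
            "type": "source",
            "name": "Dr. Jane Example",           # made-up contact
            "topics": ["climate change", "sea level rise"],
            "org": "newspaper x",
            "added": str(date(2010, 4, 1)),
        },
        {
            "type": "dataset",
            "name": "county_emissions_2009.csv",  # made-up file
            "topics": ["climate change", "emissions"],
            "org": "newspaper y",
            "added": str(date(2010, 3, 12)),
        },
    ]

    def by_topic(items, topic):
        """Pull everything any partner newsorg has tagged with a given topic."""
        return [item for item in items if topic in item["topics"]]

    # Six months from now, the next climate collaboration starts here --
    # not with a brand-new "Climate Change Resources Oct. 2010" spreadsheet.
    print(json.dumps(by_topic(resources, "climate change"), indent=2))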

This ties into Daniel Bachhuber's upcoming discussion for BCNI about knowledge management systems. He explains:

[...] what I mean by this is how news organizations manage all of the data they're privy to that is either stored in structured format or could be stored in a structured format if they had the tools to do so.

I see two of the topics that Andrew Spittle brainstormed as being directly related to collaboration:

  • Cross-platform tracking of information
  • Role of KMS in on-going coverage

If we can figure out how to store data in a way that's transferable across multiple platforms and throughout ongoing coverage, collaboration not only becomes easier but also becomes the next logical step in knowledge management.

A few thoughts:

The structure can't be owned by anyone. It has to be native to the web

There needs to be universal markup for certain kinds of data -- markup that's native to the web like HTML, but not owned by any one brand. I want to be able to tag something as <location lat="12.9982348" lng="14.23423423">home</location> and have that data be transferable to any mapping platform, whether it be Google Maps or MapQuest. The same goes for time. I want to be able to tag something as <time datetime="15:32 PST">time of the event</time> and then be able to filter all data on the web related to that exact minute.

If we have a standardized structure for all types of metadata, then we can begin to organize and reuse that information on a large scale and in new ways.
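
Here's a rough sketch of what reading that markup back out could look like. The <location> and <time> tags and their attribute names are hypothetical, not an existing standard; the point is that once they're parsed, the result is plain data any mapping or timeline tool could accept.

    from html.parser import HTMLParser

    class MetadataParser(HTMLParser):
        """Collects the hypothetical location and time tags from article markup."""
        def __init__(self):
            super().__init__()
            self.records = []
            self._open = None

        def handle_starttag(self, tag, attrs):
            if tag in ("location", "time"):
                self._open = {"kind": tag, **dict(attrs), "text": ""}

        def handle_data(self, data):
            if self._open is not None:
                self._open["text"] += data

        def handle_endtag(self, tag):
            if self._open is not None and tag == self._open["kind"]:
                self.records.append(self._open)
                self._open = None

    article = (
        'We met at <location lat="12.9982348" lng="14.23423423">home</location> '
        'at the <time datetime="15:32 PST">time of the event</time>.'
    )

    parser = MetadataParser()
    parser.feed(article)
    print(parser.records)
    # Each record is just coordinates or a timestamp plus the visible text,
    # so it could be handed to Google Maps, MapQuest, a timeline tool, etc.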

A CMS that builds layers of data on top of each other

Crowd Fusion has always stuck with me as a good baseline for a knowledge management system. Crowd Fusion is a CMS originally built for tech product review sites on top of wiki, blogging, RSS and social networking tools. The creators understood that databases are good for information and blogs are good for news, but that there was no good way of connecting all those pieces. My thoughts when I first discovered the CMS in Sept. 2009:

This CMS created by Brian Alvey reminds me a lot of the CoPress connection engine. The concept is dynamic, combining databases, blogs, RSS, social networks and wikis to give the user an all-in-one experience. I wish a newspaper had developed this software and I wish it was open source. I could see a new direction for newspaper websites. [Update: Apparently now there's an open source beta. Yay]

Built into the CMS are features for both data management and collaboration:

  • Workflow
  • Group feed reader
  • Assignments
  • Database
  • Wiki
  • Team-based permissions
  • Applications that work on top of the data
  • Topic-based user experience

More about it here (worth the watch, I promise). I'd be interested to see a newsorg adopt the software and start to build more interactive applications on top of data generated from back-story research and interviews -- and then combine that with user-generated content and collaborative reporting from multiple newsorgs.
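
To make the "layers of data" idea concrete, here's a toy sketch of a topic pulling together articles, wiki-style background notes, structured records and partner feed items into one view. This is not Crowd Fusion's actual data model -- just the general shape of the idea, with made-up entries.

    from collections import defaultdict

    # topic -> layer -> list of items
    layers = defaultdict(lambda: defaultdict(list))

    def add(topic, layer, item):
        """File an item under a topic, in whichever layer it belongs to."""
        layers[topic][layer].append(item)

    # All of these entries are invented, purely for illustration.
    add("climate change", "articles", "Sea level series, part 1")
    add("climate change", "wiki", "Background: how the state measures emissions")
    add("climate change", "database", {"dataset": "county_emissions_2009.csv", "from": "newspaper y"})
    add("climate change", "feeds", "Partner newsorg: new EPA dataset posted")

    def topic_page(topic):
        """Everything the newsroom knows about a topic, grouped by layer."""
        return dict(layers[topic])

    print(topic_page("climate change"))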

Anyway, that's all I have for now. Let's continue this conversation at BCNI Philly -- I'm hopping on a plane at 10 p.m. PST and arriving in good ol' Philadelphia at 6 a.m. for the 9 a.m. conference. Who needs that sleep thing, anyway? ;)