Rethinking the Human Subjects Process

Recently I’ve found myself absorbed in various issues surrounding Internet research ethics: the Tastes, Ties, and Time Facebook data release, Pete Warden’s plans to release a database of public Facebook information on 215 million users, etc. To help work through some of these issues — and assist others who are much more qualified than I to figure them out — I’ve been lucky to join Elizabeth Buchanan and Charles Ess on their NSF-funded project to launch the Internet Research Ethics Digital Library, Resource Center, and Commons.

Complementing this new research area of mine, I had the privilege of participating in the first of a pair of one-day workshops intended to discuss challenges related to human subjects approval processes for Internet-based research on children and learning. The workshop was organized by Alex Halavais and Jason Schultz as part of the larger MacArthur Foundation-funded project on Digital Media and Learning, and I was joined by these preeminent scholars and experts: Tom Boellstorff, Heather Horst, Montana Miller, Dan Perkel, Ivor Pritchard, and Laura Stark.

Details of that first meeting have been posted here and are summarized below:

We found that while there might be some fairly intractable issues, as there are for any established institution, some of the difficulties that IRBs and investigators encountered were a result of reinventing the wheel locally, and of a general lack of transparency in the process of approving human subjects research. The elements required to make good decisions on planned research tend to be obscure and unevenly distributed across IRBs. From shared vocabularies between IRBs and investigators, to knowledge of social computing contexts, to a clear understanding of the regulations and empirical evidence of risk, many of the elements that delay the approval of protocols and frustrate researchers and IRBs could be addressed if the necessary information were more widely accessible and easily discoverable.

Rather than encouraging the creation of national or other centralized IRBs, more awareness and transparency would allow local solutions to be shared widely. Essentially, this is a problem of networked learning: how is it that investigators, IRB members, and administrators can come quickly to terms with the best practices in DML research?

We are meeting again in August to continue this conversation. If you have suggestions, or your own stories to tell, feel free to drop me a line or comment at the DML Central blog.