May 12, 2009

Blog-Based Peer Review: Four Surprises

by Noah Wardrip-Fruin · 3:20 pm

Last year we undertook an experiment here: simultaneously sending the manuscript for Expressive Processing out for traditional, press-solicited peer review and posting the same manuscript, in sections, as part of the daily flow of posts on Grand Text Auto. As far as I know, it became the first experiment in what I call “blog-based peer review.”

Over the last year I’ve been finishing up Expressive Processing: using comments from the blog-based and press-solicited reviews to revise the manuscript, completing a few additional chapters, participating in the layout and proof processes, and so on. I’m happy to say the book has now entered the final stages of production and will be out this summer (let me know if you’d be interested in writing an online or paper-based review).

One of my last pieces of writing for the book was an afterword, bringing together my conclusions about the blog-based peer review process. I’m publishing it here, on GTxA, both to acknowledge the community here and as a final opportunity to close the loop. I expect this to be the last GTxA post to use CommentPress — so take the opportunity to comment paragraph-by-paragraph if it strikes your fancy.

An Experiment in Peer Review

When I completed the first draft of this book (in early 2008) things took an unusual turn. I had reached the time in traditional academic publishing when the press sends the manuscript out for peer review: anonymous commentary by a few scholars that guides the final revisions (and decisions). But we decided to do something different with Expressive Processing: we asked the community around an academic blog — Grand Text Auto — to participate in an open, blog-based peer review at the same time as the anonymous review.

Blogging had already changed how I worked as a scholar and creator of digital media. Reading blogs started out as a way to keep up with the field between conferences, and I soon realized that blogs also contain raw research, early results, and other useful information that never gets presented at conferences. Of course, that is just the beginning. I cofounded Grand Text Auto, in 2003, for an even more important reason: blogs can foster community. And the communities around blogs can be more open and welcoming than those at conferences and festivals, drawing in people from industry, universities, the arts, and the general public. Interdisciplinary conversations happen on blogs that are more diverse and sustained than any I’ve seen in person.

Given that digital media is a field in which major expertise is located outside the academy (like many other fields, from noir cinema to Civil War history), the Grand Text Auto community has been invaluable for my work. In fact, while writing the manuscript for Expressive Processing I found myself regularly citing blog posts and comments, both from Grand Text Auto and elsewhere.

The blog-based review project started when Doug Sery, my editor at the MIT Press, brought up the question of who would peer-review the Expressive Processing manuscript. I immediately realized that the peer review I most wanted was from the community around Grand Text Auto. I said this to Doug, who was already one of the blog’s readers, and he was enthusiastic. Next I contacted Ben Vershbow at the Institute for the Future of the Book to see if we could adapt their CommentPress tool for use in an ongoing blog conversation. Ben not only agreed but also became a partner in conceptualizing, planning, and producing the project. With the ball rolling, I asked the Committee on Research of the University of California at San Diego’s Academic Senate for some support (which it generously provided) and approached Jeremy Douglass (of that same university’s newly formed Software Studies initiative), who also became a core collaborator — especially (and appropriately) for the software-related aspects.

Our project started with lessons learned from examining two earlier, highly influential projects involving the Institute for the Future of the Book, both of which explore innovative models for online peer review and community participation in scholarly work. The first, Gamer Theory, was a collaboration with author McKenzie Wark. It placed the entire initial manuscript of Wark’s book online, attracting a community that undertook distributed peer review and section-specific discussion with the author. The second project, The Googlization of Everything, is (as of this writing) an ongoing collaboration with author Siva Vaidhyanathan. While Wark’s project was successful, it lacked the high-level flow of conversation through time that has been found to be a particularly powerful way to engage the public in ideas. In response, Vaidhyanathan is going to the other extreme: researching and writing the book “in public” via a blog sponsored by the institute, so that the entire project is grounded in conversation through time. As with Wark’s project, attention from the public, media, and scholarly communities arrived swiftly and has been largely positive.

Nevertheless, I felt there was a feature of these projects that made them both problematic as models for the future of peer review and scholarly publishing. Both of them sought to build new communities from scratch, via widespread publicity, for their projects. But this cannot be done for every scholarly publication — and a number of fields already have existing online communities that function well, connecting thinkers from universities, industry, nonprofits, and the general public. I thought a more successful and sustainable model would bring together: (1) the paragraph-level commenting and discussion approach of Gamer Theory, (2) the emphasis on conversation through time of The Googlization of Everything, and (3) the context of an already-existing, publicly accessible community discussing ideas in that field.

This is precisely what we attempted with the blog-based peer review for this book. Further, because we also undertook this project in collaboration with a scholarly press, carrying out a traditional peer review in parallel with the blog-based peer review, we were also able to compare the two forms. I found the results enlightening.

Four Surprises

The review was structured as part of the ongoing flow of conversation on the Grand Text Auto blog. Because the book’s draft manuscript was already structured around chapters and sections, we decided to post a new section each weekday morning. Comments were left open on all the sections — not just the most recent — so while commenting was often most active on Expressive Processing posts from that day and one or two prior, there was also frequently comment activity taking place in several other areas of the draft manuscript. These posts and comments were interspersed, both in time and spatial arrangement on the page, with Grand Text Auto posts and comments on other topics.

There were, of course, a number of outcomes that I expected from the project — the ones that motivated me to undertake it in the first place. I anticipated a good quality of comments, along the lines of those regularly left by Grand Text Auto readers responding to posts that weren’t part of the peer review. I expected those comments would not only be interesting but also in many cases flow directly into possible manuscript revisions. And at a different level, I thought the process was likely to provide another useful data point as people inquire about and experiment with new models of peer review.

All of this took place as planned, but the project held a number of surprises as well.

Review as Conversation

My first surprise came during the review’s initial week, during discussion of a section that was, in that draft, located in the introduction (it has since moved to the fourth chapter). Perhaps it shouldn’t have been unexpected, given my previous experience with blog comments — and my stated goals for the review project — but my assumptions about the structure of a “review” may have gotten in the way.

I wrote about this in one of my “meta” posts during the course of the review:

In most cases, when I get back the traditional, blind peer review comments on my papers and book proposals and conference submissions, I don’t know who to believe. Most issues are only raised by one reviewer. I find myself wondering, “Is this a general issue that I need to fix, or just something that rubbed one particular person the wrong way?” I try to look back at the piece with fresh eyes, using myself as a check on the review, or sometimes seek the advice of someone else involved in the process (e.g., the papers chair of the conference).

But with this blog-based review it’s been a quite different experience. This is most clear to me around the discussion of “process intensity” in section 1.2. If I recall correctly, this began with Nick [Montfort]’s comment on paragraph 14. Nick would be a perfect candidate for traditional peer review of my manuscript — well-versed in the subject, articulate, and active in many of the same communities I hope will enjoy the book. But faced with just his comment, in anonymous form, I might have made only a small change. The same is true of Barry [Atkins]’s comment on the same paragraph, left later the same day. However, once they started the conversation rolling, others agreed with their points and expanded beyond a focus on The Sims — and people also engaged me as I started thinking aloud about how to fix things — and the results made it clear that the larger discussion of process intensity was problematic, not just my treatment of one example. In other words, the blog-based review form not only brings in more voices (which may identify more potential issues), and not only provides some “review of the reviews” (with reviewers weighing in on the issues raised by others), but is also, crucially, a conversation (my proposals for a quick fix to the discussion of one example helped unearth the breadth and seriousness of the larger issues with the section).

On some level, all this might be seen as implied with the initial proposal of bringing together manuscript review and blog commenting (or already clear in the discussions, by Kathleen Fitzpatrick and others, of “peer to peer review”). But, personally, I didn’t foresee it. I expected to compare the recommendation[s] of commenters on the blog and the anonymous, press-solicited reviewers — treating the two basically the same way. But it turns out that the blog commentaries will have been through a social process that, in some ways, will probably make me trust them more. (Wardrip-Fruin 2008)

In the end, as I will discuss below, I did not have to decide whether to trust the blog-based or anonymous comments more. But this does not lessen the importance of the conversational nature of blog-based peer review. I’m convinced that the ability to engage with one’s reviewers conversationally, and have them engage with each other in this way, is one of the key strengths of this approach. It should be noted, however, that the conversational nature of the blog-based review process also had a less-positive side, which produced my second surprise.

Time Inflexibility

Like many people, I find that the demands on my time fluctuate over the course of weeks and months. I hit periods during the Expressive Processing review when I barely had time to read the incoming comments — and certainly couldn’t respond to them in a timely manner. For anonymous reviews this generally isn’t an issue. When one is required to respond to such reviews (and one isn’t always), it is generally possible to ask for some additional time. It isn’t a problem if a busy period takes one’s attention away from the review process entirely.

But the flow of blog conversation is mercilessly driven by time. While it is possible to try to pick up threads of conversation after they have been quiet for a few days, the results are generally much less successful than when one responds within a day or, better yet, an hour. I hadn’t anticipated or planned for this.

I remember speaking to Wark about his project when we were both at a gathering for the University of Southern California’s Vectors journal — itself a fascinating set of online publishing experiments. He mentioned a number of aspects of his project that I chose to emulate, but at the time I didn’t consider one that I now wish I had pursued: carrying out the participatory review during a period of reduced teaching (a sabbatical, in his case, if I recall correctly).

While I learned much from the Expressive Processing review on Grand Text Auto, I am certain I could have learned more if I had been able to fully engage in discussion throughout the review period, rather than waxing and waning in my available time. Of course, routinely pursuing blog-based review with time for full conversational engagement would require a shift in thinking at universities. It isn’t uncommon for authors to request release time for writing and revising books, yet release time is almost never requested in order to participate more fully in community peer review. I hope that will change in the future.

Comparison with Press-Solicited Reviews

One concern expressed repeatedly about the blog-based review form — by blog commenters, outside observers, and myself — is that its organization around individual sections might contribute to a “forest for the trees” phenomenon. While individual sections and their topics are important to a book, it is really by the wider argument and project that most books are judged. I worried the blog-based review form might be worse than useless if its impact was to turn authors (myself included) away from major, systemic issues with manuscripts and toward the section-specific comments of blog visitors with little sense of the book’s project.

My concerns in this area became particularly acute as the review went on. A growing body of comments seemed to be written without an understanding of how particular elements fit into the book’s wider frame. As I read these comments I found myself thinking, “Should I remind this person of the way this connection was drawn, at length, in the introduction?” Yet I largely restrained myself, in no small part because I wanted to encourage engagement from those who had expertise to offer particular sections, but not time to participate in the entire, extended review process. At the same time, I worried that even those who had been loyal participants all along were becoming less able to offer big-picture feedback (especially after a month had passed since the review of the book’s introduction, where I laid out the overall argument at length). The commenters feared they were losing the thread as well. Ian Bogost, for example, wrote a post on his own blog that reads, in part:

The peer review part of the project seems to be going splendidly. But here’s a problem, at least for me: I’m having considerable trouble reading the book online. A book, unlike a blog, is a lengthy, sustained argument with examples and supporting materials. A book is textual, of course, and it can thus be serialized easily into a set of blog posts. But that doesn’t make the blog posts legible as a book. For my part, I am finding it hard to follow the argument all the way through. (2008)

When the press-solicited anonymous reviews came in, however, they turned this concern on its head. This is because the blog-based and anonymous reviews both pointed to the same primary revision for the manuscript: distributing the main argument more broadly through the different chapters and sections, rather than concentrating it largely in a dense opening chapter. What had seemed like a confirmation of one of our early fears about this form of review — the possibility of losing the argument’s thread — was actually a successful identification, by the blog-based reviewers, of a problem with the manuscript also seen by the anonymous reviewers.

That said, the anonymous reviewers solicited by the press also did seem to gain certain insights by reading the draft manuscript all at once, rather than spread over months. All of them, for instance, commented that the tone of the introduction was out of character with the rest of the book. The blog-based reviewers offered almost no remarks comparing chapters to one another — perhaps because they experienced the manuscript more as sections than chapters. Still, as I will discuss below, they also offered much more detailed section-specific commentary, much of it quite useful, than it would be possible to expect from press-solicited anonymous reviews.

And even though it is an approach to books that the academy views less seriously (you get tenure for a book with an important overall argument), the fact is that many book readers approach texts the same way occasional visitors to Grand Text Auto approached my draft manuscript: in relatively isolated pieces. Especially among academic readers, amusingly enough, there is a tendency to seek all the relevant writing about a particular topic — and then strategically read the sections that relate to one’s own project. I believe it is critical for books to hold up to both kinds of reading (I hope this book manages it) and I see the dual-peer-review process as a good way of getting responses from both kinds of readers.

Generosity with Expertise

It may sound strange to say, but my final surprise is that I find myself with so many people to thank — and so deeply indebted to a number of them. Undertaking the anonymous, press-solicited review of a book manuscript is already a generous act. (My press-solicited reviewers were particularly generous, offering thoughtful and helpful comments that must have been time-consuming to produce.) But at least the system is designed to provide some acknowledgment for such reviewers: perhaps free books or an honorarium, a curriculum vitae line indicating that one reviews for the press (a recognition of expertise), and the completion of some widely understood service to the field (which is an expectation of most academic jobs).

Participants in the blog-based review of Expressive Processing, on the other hand, received no such acknowledgment. And yet their comments contributed a huge amount to improving the manuscript and my understanding of the field. Further, they contributed things that it would have been nearly impossible to get from press-solicited reviews.

This isn’t because presses choose poor reviewers or because the reviewers don’t work hard. Rather, it is because the number of manuscripts that require review dictates that only a few reviewers should consider each manuscript. Otherwise, the burden of manuscript review would impede the completion of other work.

When only a few reviewers look at each manuscript, each will have some areas of relevant expertise. But for many manuscripts, especially interdisciplinary ones, there will be many topics discussed for which none of the reviewers possess particular expertise. There’s no real way around this.

Blog-based review, for me, created a different experience. Early in the review, when I posted a section about Prince of Persia: The Sands of Time, a topic on which none of my press-solicited reviewers commented, commentary and conversation quickly developed involving three scholars who had written about it (Barry Atkins, Drew Davidson, and Jason Rhody) as well as other blog readers who had played and thought about the game. This happened repeatedly, with humanities and social science scholars, digital media designers, computer science researchers, game players, and others offering helpful thoughts and information about everything from dialogue system interface specifics to statistical AI models.

As the review progressed, deep expertise also showed itself on another level, as the manuscript began to get comments and host conversation involving people who rarely review academic manuscripts: the creators of the projects being discussed. This led to some of the best exchanges in the review process, involving people like Scott Turner (Minstrel), Jeff Orkin (F.E.A.R. and The Restaurant Game), and Andrew Stern (Façade). The last person mentioned in the previous sentence, as some readers of this afterword may realize, is also a Grand Text Auto blogger. The mix of blog reader and blog coauthor comments was also present for the creators of examples I touched on more briefly, as with reader Richard Evans (Black & White) and coauthor Nick Montfort (nn).

Some have asked me whether the involvement of project authors in the review is likely idiosyncratic, possible only for someone writing on a topic like mine. Certainly on this front, I feel that digital media is a lucky area in which to work because so many of the authors of field-defining projects are still alive (and online). Yet I think the same sort of blog-based review involving project creators could happen for authors in many other areas. For example, during the 2007 Writers Guild of America strike a light was shone on the involvement of many movie and television writers in blog communities. I would not be at all surprised if such writers, already engaged in reading and writing on blogs, took an interest in academic writing that discusses their work — especially as part of a blog-based peer review that might generate revisions before the text is put into print. But only further experimentation will reveal if I am correct in this.

Another question, posed in response to the same “meta” post mentioned earlier, is whether this form of review only works because I have already developed some reputation in my field (e.g., from editing other books). My belief is that my personal reputation is not the primary issue. Rather, it is Grand Text Auto’s reputation that matters. It makes sense to do a blog-based review because we have, in blogs, already-existing online communities that attract university-based experts, industry-based experts, and interested members of the public. The way we use blogs also already encourages discussion and questioning. Of course, widely read blogs won’t want to be completely taken over by manuscript review, but I can imagine them hosting two or three a year, selected for their level of interest or because they are written by one of the blog’s authors.

It is possible that the interest of blog readers would flag under such circumstances. But nothing in my experience points to that. I think there is a hunger, on both sides, to connect the kinds of inquiry and expertise that exist inside universities and outside of them. Blogs are one of our most promising connection points — and blog-based peer review offers one simple way for the two groups to contribute to common work. If my experience is any guide, this can elicit remarkably generous responses. Especially given that I am a public employee (I work for the University of California at Santa Cruz), I look forward to pursuing this type of public connection further in the future.

24 Responses to “Blog-Based Peer Review: Four Surprises”


  1. Planned Obsolescence » Blog Archive » Blog-Based Peer Review Says:

    […] Wardrip-Fruin has posted a thoughtful reconsideration of the experience of putting the manuscript of his forthcoming book, Expressive Processing, through […]

  2. Mark J. Nelson Says:

    I think this particular concern—that books must hold up not only as unitary, sustained works, but also when they’re diced up—is going to get very pressing with easily-searchable books like what Google Books provides. It’s now very easy for me, when writing a paper on, say, The Sims, to read almost every academic mention of The Sims—even ones that last a single paragraph, never mind a chapter.

    I think it then becomes more important for those discussions to all be accurate, even avoiding errors that don’t impact the book’s own argument, because the authority of the author of the book will be cited for these individual propositions within the book. Previously any errors might be buried in a book that wasn’t even on the subject: maybe it’s not so important that a Sims example be exactly right in a book on architecture that just uses it as a passing illustration, so long as it’s right in the ways that are relevant for its use as an architecture example. But now it has to be right enough for the game-studies and AI audiences, too, since any errors will be more likely to be read by the audience you didn’t expect, either damaging the overall academic literature and debate as misinformation gets repeated, damaging the author’s reputation if it’s exposed as misinformation, or both. In fact, I recall your book itself exposed several such wrong-in-the-details examples from other authors’ writing, of various levels of severity.

    And I think, as you discuss, it’s hard to get that sort of cross-cutting feedback from traditional peer review, whether in journals, books, or conferences, because you would need too many reviewers (in your case, an expert on every system discussed, game-studies and digital-humanities experts, AI experts, game designers, etc.). So this is one model for how to get the details right— expose it to everyone, and hope people who know about the details point out problems before the book goes to press. I think there’s a good chance they will in many cases, at least if the authors themselves are alive and reachable: many people have a strong interest in learning how their work is being discussed by others, and would appreciate the opportunity to make corrections or clarifications before something that misrepresents their work gets published.

  3. Noah Wardrip-Fruin Says:

    Mark, I think that’s absolutely right. I had considered that it’s increasingly easy to read books selectively now — but I hadn’t thought of the potentially-much-wider group that would be doing so for any individual title.

    To me this implies that the GTxA peer review experiment was successful not only for the reasons I identified (e.g., a community and a reputation for the blog) but also because it was friendly to some of the near-real-time ways that a wider group of people select things to read on topics that interest them (e.g., Google alerts).

    So, as we think about the future of peer review, it may not only be necessary to design approaches that perform review of individual sections/examples, but also ones that draw in as many of the potentially-interested communities as possible during the time of the review. Even though it wasn’t conceived with the second of these in mind, the blog-based review form responds to both of these, which makes me hopeful that others will experiment with it (or related forms) in the future.

  4. bowerbird Says:

    i think you solve
    this problem with
    a different approach.

    start by giving people
    the book’s _outline_,
    so they understand
    the whole picture…

    then flesh it out…

    -bowerbird

  5. Noah Wardrip-Fruin Says:

    Yes, I think some wider view, to go along with the section-by-section posting, would be helpful. I guess the question is how detailed such an outline should be. A high-level outline can tell people how far along we are, but it can’t answer questions like, “Will this example return later?” (Which people sometimes answer, on paper, by flipping ahead or using the index.) There’s lots of room to experiment.

  6. bowerbird Says:

    i was talking about
    inviting feedback at
    the very beginning,
    when you have just
    composed the outline,
    and nothing more…

    feedback that will
    continue throughout
    the writing process.

    but i now recall that
    you were specifically
    interested in reviews
    that occurred along
    a track parallel to
    peer review, which
    is always done when
    the writing is “done”.

    -bowerbird

  7. Mark J. Nelson Says:

    Getting towards the edge of topics specifically about peer review, but I wonder to what extent these sorts of concerns are part of a partial reversal of the trend towards academic specialization? Many of the forces pushing for it still hold, like the publishing treadmill and academic promotion. But one major force, the need to have some way to manage the flood of publications, is significantly different: it was previously done almost exclusively by hierarchically subdividing fields of knowledge into ever-slimmer niches, but now frequently done by just applying more sophisticated retrieval mechanisms to flat fields of knowledge. I mean, tools like Google Scholar and Google Books don’t even support respecting accepted discipline boundaries if you wanted to, seeing the academic landscape as just one flat bag of PDFs.

    Maybe a change in who peer reviews a book is just one particular kind of change in audience positioning, insofar as the peer reviewers are supposed to be knowledgeable representatives of the intended audience? I suppose that might make an author’s job harder: everyone now has to be right about all fields they discuss, because once someone finds your stuff, the defense that it wasn’t intended for them doesn’t work that well. But it might be a good thing, nonetheless.

  8. Michael Nielsen » Biweekly links for 05/15/2009 Says:

    […] Grand Text Auto » Blog-Based Peer Review: Four Surprises […]

  9. Blog vs. Peer Review Final Report: Lessons Learned « iThinkEducation.net! Says:

    […] of a popular blog to which he contributes peer review the book in public. This week he shared his final conclusions about the strengths and weaknesses of his unusual […]

  10. Permission Publishing with Students « Says:

    […] a recent research paper about ‘blogs’ verses ‘peer review’, Wardrip-Fruin (2009) […]

  11. High Ed Cafe» Blog Archive » Blog vs. Peer Review Final Report: Lessons Learned Says:

    […] of a popular blog to which he contributes peer review the book in public. This week he shared his final conclusions about the strengths and weaknesses of his unusual […]

  12. P2P Foundation » Blog Archive » An experiment in open blog-based peer review Says:

    […] Read a the story here […]

  13. teaching carnival « Bethany Nowviskie Says:

    […] similar lines, Noah Wardrip-Fruin shares four surprises at the end of his year-long experiment in blog-based peer review. Meanwhile, a “three-member […]

  14. Noah Wardrip-Fruin Says:

    Yes, this project was about peer review, but I’m also interested in doing academic work “in public” in a variety of ways. For example, I currently have a number of conference abstracts in submission. If any are accepted, I plan to post them publicly, to get feedback and spark conversation, before writing the full papers. That’s a little closer to posting an outline before a book is written.

  15. Noah Wardrip-Fruin Says:

    While I haven’t been very involved in “future of the academy” debates, I suspect you’re correct that there’s a strong connection here. I understand there has been quite a bit of discussion about thinking of the PhD as something that prepares people for a variety of careers, not just for being a professor, but at least in some disciplines (e.g., in the humanities) students sometimes feel frustrated that the form of the dissertation remains a hyper-specialized one (in many departments) that prepares students for only one kind of post-graduation work.

    I know many PhD students blog, and attempt to engage a wider audience, so maybe a certain pre-dissertation move away from hyper-specialization is already happening. On the other hand, it’s quite risky for grad students (and the untenured) to put significant effort into activities considered marginal by their field’s mainstream. I’ll be interested to see how this evolves (and what I can do to support the experimentally-minded).

  16. RodeWorks » Blog Archive » Academic Peer Review via a Blog Says:

    […] text commenting plug-in (the new 2.x version is out).  Well the experiment is over and he posts his reactions on his Grandtextauto site.  In short he finds benefits both from the traditional process and the […]

  17. Week in Review at The Emerging Scholars Blog Says:

[…] Blog-Based Peer Review — Noah Wardrip-Fruin allowed his book to be part of an experiment comparing traditional peer review with chapter-by-chapter review on his blog, Grand Text Auto. Here, he shares his experience and findings. For example, traditional peer review was better at following the overall argument of the book and comparing one section with another, but the blog comments were much more detailed and collaborative (e.g. commenters would affirm, correct, and nuance criticisms from others). […]

  18. Legal Scholarship & the New Media « Legal Informatics Blog Says:

    […] In response to an interesting discussion on the LIBLICENSE listserv of Prof. Wardrip-Fruin’s Blog-Based Peer Review: Four Surprises and discussion of that article on the Chronicle of Higher Education’s Wired Campus blog, I […]

  19. Examples of Collaborative Digital Humanities Projects « Digital Scholarship in the Humanities Says:

    […] get a sense of the book’s overall arguments when they were reading only fragments, Wardrip-Fruin found many benefits to this open approach to peer review: he could engage in conversation with his […]

  20. Hevel.org: A Chasing after Wind » Blog Archive » An experiment in “open peer review” Says:

    […] the if:book blog, you should definitely check out this post by Noah Wardrip-Fruin about his experiment with open peer review of an academic manuscript via blog […]

  21. The Floppy Hat » Blog Archive » Open Peer Review Says:

    […] we need someone to step out on a limb, put a manuscript up using the proper backend (such as the example Bryan linked to), and see what happens. Perhaps later this year when Google Wave releases someone […]

  22. Introducing Marginalia | JISCPress Says:

    […] CommentPress is already popular in Higher Education for the critique of texts by students, the open peer-review of manuscripts, the peer-review of published books and to solicit comment on Institutions’ policy documents. […]

  23. Expressive Processing Arrives Says:

    […] book includes an extensive set of notes and revisions arising from community comments during the blog-based peer review the manuscript had last year on Grand Text Auto. My sincere thanks, again, to those who shared […]

  24. Grand Text Auto » Expressive Processing Arrives Says:

    […] book includes an extensive set of notes and revisions arising from community comments during the blog-based peer review the manuscript had last year on Grand Text Auto. My sincere thanks, again, to those who shared […]
