Digital Digest: Putting the “Public” Back Into Library

by Adrian Versteegh
From the July/August 2011 issue of Poets & Writers Magazine

On March 22, after nearly two and a half years of wrangling, revision, and federal scrutiny, the Google Books settlement was shot down in the U.S. District Court for the Southern District of New York. The agreement—which emerged from a 2005 lawsuit aimed at the company’s book-scanning project—would have let Google expand its digitization of out-of-print works and begin selling institutional subscriptions to its fifteen million–title database. Citing copyright and competition concerns, Judge Denny Chin said that the amended deal still went “too far.” His ruling acknowledged the approximately five hundred outside comments filed with the court, again underscoring, for anyone still out of the loop, the significance of the settlement in current debates over public control of cultural patrimony, the role of corporations in library development, and, more generally, the future of intellectual property.

As Judge Chin noted, the “vast majority” of court submissions came from opponents of the settlement, an array of critics comprising authors and representatives from nonprofits, industry groups (including Google competitors Amazon and Microsoft), public watchdog organizations (such as the American Civil Liberties Union and the Electronic Frontier Foundation), and even foreign governments (with Germany calling the search giant “a serial scanning infringer”). Among the principal issues raised—and the issue that became the ultimate basis for the settlement’s defeat—was its proposed treatment of so-called “copyright orphans,” still-protected works for which rights holders have not been identified. Because the deal applied blanket coverage to U.S. books, dissenting authors would need to explicitly opt out of the arrangement, as some sixty-eight hundred members of the original class-action suit eventually did. While rights holders who came forward later would be entitled to compensation from a $125 million Book Rights Registry, they would lose the right to sue Google for individual acts of infringement. This feature of the settlement prompted opponents to charge not only that it amounted to a “de facto monopoly” over orphaned and out-of-print books, but also that it effectively altered federal copyright law through a simple business arrangement.

Judge Chin appeared to agree, saying that unclaimed works were best addressed through legislation. “The questions of who should be entrusted with guardianship over orphan books, under what terms, and with what safeguards,” he wrote, “are matters more appropriately decided by Congress than through an agreement among private, self-interested parties.” But toward the end of his forty-eight-page decision, the judge suggested that an “opt-in” provision (a work-around initially floated by the Department of Justice when it examined the settlement in 2009) would be more likely to pass muster, hinting at the sequel that will undoubtedly follow.

And the parties are already planning to return to the drawing board. In a statement issued on behalf of the Association of American Publishers—one of the plaintiffs, along with the Authors Guild, in the suit that begat the settlement—Macmillan CEO John Sargent said that the court’s ruling “provides clear guidance to all parties as to what modifications are necessary.” Scott Turow, president of the Authors Guild (which saw some of its more prominent members, including Ursula K. Le Guin, breaking ranks over the deal), expressed a similar determination, saying that a digital “Alexandria of out-of-print books” was “an idea whose time has come.”

“Readers want access to these unavailable works,” Turow continued, “and authors need every market they can get. There has to be a way to make this happen.” Discussions as to how it might happen resumed earlier this summer.

In the meantime, Google continues to digitize works licensed from some of the world’s largest libraries through its Partner Program—and continues to draw fire for doing so. In April the Center for Democracy & Technology, echoing concerns voiced by other First Amendment advocacy groups, warned of threats to reader privacy posed by corporate e-book purveyors, and called for stronger legislative protections for consumer data. While Google maintains that its digitization program serves the commonweal—and struggles to fathom, according to Steven Levy’s recent In the Plex (Simon & Schuster, 2011), why its “moon shot” enterprise inspires such widespread mistrust—critics of the privatization of knowledge have long been urging the expansion of not-for-profit alternatives. The most successful of these so far have been funded either by grants, donations, and revenue from associated projects (as is the Internet Archive, for example), or else by coalitions of well-heeled universities (as are the California Digital Library and the HathiTrust Digital Library, the latter of which is now hosting research originally intended to be sourced from the Google archive). Perhaps the nearest thing to a publicly supported online archive is the UNESCO World Digital Library, founded by the Library of Congress as a public-private partnership, whose first private partner was—surprise—Google.

But although there’s no shortage of digital libraries in the United States, these collections tend to be dispersed, infrequently or arcanely connected, and generally outside the awareness of most readers. Without access to a major research institution or one of the larger public library systems, it’s difficult for the average citizen to log on to an electronic catalogue and check out, say, a popular novel—as one can now in Norway, for instance. Indeed, it’s overseas that the most robust public library projects are under way: France and Germany have committed to government-funded scanning; Austria and the Netherlands (which is aiming for near-total digitization) have signed deals with Google to host out-of-print works the company has already digitized; and Japan, incredibly, has been at work on an electronic national library since the 1970s. But the most impressive public initiative to date is Europeana, a continent-spanning digital library, archive, and museum that aims to weave local, regional, and national collections into a universally accessible web. In a preliminary report released in January, a “reflection group” convened by the European Union described the plan as “probably the most ambitious cultural project ever undertaken at a European scale, bringing together cultural institutions from different sectors and from all the Member States.”

Now Europeana is helping to inspire an equally ambitious cultural project here in the United States. Spearheaded by celebrated book historian Robert Darnton, plans for a Digital Public Library of America (DPLA) are taking shape at Harvard University’s Berkman Center. The initiative took off last December with a $125,000 grant from the Alfred P. Sloan Foundation, and gained clout in May with the addition of three prominent public librarians to its steering committee. Assuming the momentum—and the funding—continues, the group hopes to hold a large-scale event in the fall and to unveil a working demonstration model eighteen months after that. Writing in the New York Review of Books, Darnton acknowledged that the endeavor would need to overcome many of the same hurdles that have stymied the Google Books settlement, including the complexities of author compensation, unclaimed works, and—given that the system would eventually need to cooperate with its foreign counterparts—incommensurate copyright regimes. “But,” he concluded, “the noncommercial character of the DPLA and its commitment to the public good would make all such difficulties look less formidable than they seemed to be when they were confronted by a company intent on maximizing profit at the expense of the public and of its competitors.”

Adrian Versteegh is a Henry MacCracken fellow at New York University and the editor-at-large of Anamesa.