In 2018, the LSA outlined four possibilities for the peer review of documentary materials:
- Corpus review: external reviewer(s) are invited to review the materials (similar to a book review)
- Corpus overview article: the curator writes a description of the materials
- Corpus journal: as proposed by Martin Haspelmath and Susanne Maria Michaelis, a corpus-as-article model
- Outside letters of evaluation solicited from senior scholars
One issue that remains to be addressed is how the assessment of materials would actually be carried out. What makes a good documentary record? The early language documentation (LD) literature focused largely on establishing ideals for documentation and less on how documentary outcomes might be assessed. Some potential criteria could include the following:
- The number of speakers in the record
- The percentage of the total number of speakers in the community
- The proportion of speakers by sex and age
- The number of data collectors and the proportion of data collectors by sex/age/community membership
- The number of hours of audio-visual materials
- The quality of the audio-visual materials
- The distribution of speech genres
- The distribution of interactivity and planning types
- Metadata for each speaker and recording
- The percentage of materials transcribed
- The percentage of materials translated
- The percentage of materials interlinearized
Community accessibility and involvement
- Did the community contribute to the creation or evaluation of the materials?
- Is the archive available through an interface in a language that the community can read?
- If the community cannot access the archive directly, were materials distributed to community members?
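Many of the quantitative criteria above could be computed automatically from archive metadata and bundled into a standard summary for reviewers. The sketch below is a minimal illustration of that idea; the `Recording` record type, its field names, and the metrics chosen are my own hypothetical assumptions, not any existing archive schema.

```python
from dataclasses import dataclass

@dataclass
class Recording:
    """One archived recording, described by hypothetical metadata fields."""
    speaker_id: str
    hours: float
    transcribed: bool
    translated: bool
    interlinearized: bool

def summarize(recordings: list[Recording], community_size: int) -> dict:
    """Compute a reviewer-facing summary from per-recording metadata."""
    speakers = {r.speaker_id for r in recordings}
    total_hours = sum(r.hours for r in recordings)

    def pct_of_hours(flag: str) -> float:
        # Share of recorded hours for which the given annotation step is done.
        done = sum(r.hours for r in recordings if getattr(r, flag))
        return round(100 * done / total_hours, 1) if total_hours else 0.0

    return {
        "speakers_recorded": len(speakers),
        "speaker_coverage_pct": round(100 * len(speakers) / community_size, 1),
        "total_hours": round(total_hours, 1),
        "transcribed_pct": pct_of_hours("transcribed"),
        "translated_pct": pct_of_hours("translated"),
        "interlinearized_pct": pct_of_hours("interlinearized"),
    }
```

For example, two 2-hour recordings from two speakers in a 10-speaker community, with only one recording transcribed and translated, would yield 20% speaker coverage and 50% of hours transcribed. A shared report format like this would let reviewers compare submissions on the same footing, while the prose portion of a submission explains the numbers.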
Peer review process
Of the options presented by the LSA, my favorite is probably #3, except that in Haspelmath and Michaelis' format it removes the archive itself from the peer-review process, focusing instead on the assessment of secondary corpora developed from archive materials. So here is my alternative proposal:
- Language archive journal: Curators of archive materials submit to the journal to argue for the value of their materials. Submissions must include required information about the materials, such as the criteria above, as well as prose descriptions of the process through which the materials were created, their potential for research/community use, and any reasons why the materials may not meet some of the standard LD criteria.
As it stands, archive materials are typically subject to peer review only through internal assessments conducted by archives and funding agencies. External peer review would (mostly) remove conflicts of interest and strengthen the incentives to produce high-quality materials.
The only other external peer-review process for archive materials is the DELAMAN Award, which is great, but it is a bit like having a journal that publishes only one article per year, with everyone competing for that single slot.
There is also value in allowing peer review of archive materials at different stages of development. For example, there could be separate submission categories for early-stage and late-stage materials.