44 Matching Annotations
  1. Jul 2018
    1. The distinction between openness in practice and openness in content is significant in cost as well. Creating content requires time, effort, and resources and opens up numerous discussions around intellectual property rights. However, openness in practice requires little additional investment, since it essentially concerns transparency of already planned course activities on the part of the educator.

      I appreciate the distinction -- between openness in content and openness in practice. But I may disagree on the assessment of their associated costs. I bet the authors' thinking on this has also evolved after the MOOC movement.

      In open science, both kinds of openness will incur burden and cost.

    2. Openness as Transparent Practice. The word open is in constant negotiation. When learners step through our open door, they are invited to enter our place of work, to join the research, to join the discussion, and to contribute in the growth of knowledge within a certain field. The openness of the academy refers to openness as a sense of practice. Openness of this sort is best seen as transparency of activity.

      "Openness as a sense of practice"

    1. During the Ideation phase, researchers and their collaborators develop and revise their research plans. During this phase they may collect preliminary data from publicly available data repositories and conduct a pilot study to test their new methods on the existing data. When applying for research funding, they develop the required data management plans, stating where data, workflow, and software code will be archived for use by other researchers. In addition, in some cases, they may decide to preregister their research plans and protocols in an open repository, as has, for example, become common practice in clinical research.

      Annotation remains in 'the dark' in the description of Provocation and Ideation here.

    2. A related principle is that integrating open practices at all points in the research process eases the task for the researcher who is committed to open science. Making research results openly available is not an afterthought when the project is over, but, rather, it is an effective way of doing the research itself. That is, in this way of doing science, making research results open is a by-product of the research process, and not a task that needs to be done when the researcher has already turned to the next project. Researchers can take advantage of robust infrastructure and tools to conduct their experiments, and they can use open data techniques to analyze, interpret, validate, and disseminate their findings. Indeed, many researchers have come to believe that open science practices help them succeed.

      Principle 2 of Open Science by Design. I would fully abide by it. It applies to the argument for open scholarly annotation. It may sound crazy, but the point is to make scholarly work easier by creating a linked system for researchers themselves. The infrastructure is not there yet, not to mention the culture. But it was the same with open data.

    3. The overarching principle of open science by design is that research conducted openly and transparently leads to better science.

      Principle 1 of Open Science by Design

    4. What is needed to address complex problems is the ability to find and integrate results not only within communities, but also across communities—without paywalls or subscription barriers. Utilizing advanced machine learning tools in analyzing datasets or literature, for example, will facilitate new insights and discoveries.

      Where machine learning kicks in. This could lead to machine-generated annotations of scholarly articles to aid human annotation. Something a CMU group is already doing.
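      A minimal, hypothetical sketch of what such machine-generated annotations could look like (not any particular group's actual pipeline): scan article text against a small lexicon and emit candidate anchors for a human annotator to confirm. The lexicon entries and the sample identifier below are made up for illustration.

```python
import re

# Hypothetical mini-lexicon; a real system would use trained entity recognizers,
# not a hard-coded dictionary.
LEXICON = {
    "BRCA1": "gene",
    "RRID:AB_0000000": "research resource identifier",  # placeholder RRID
}

def candidate_annotations(text: str):
    """Yield (start, end, term, label) tuples as machine-suggested annotation anchors."""
    for term, label in LEXICON.items():
        for match in re.finditer(re.escape(term), text):
            yield (match.start(), match.end(), term, label)

if __name__ == "__main__":
    sample = "Expression of BRCA1 was measured using antibody RRID:AB_0000000."
    for anchor in candidate_annotations(sample):
        print(anchor)
```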

    5. Greater transparency is a major focus of those working to increase reproducibility and replicability in science (e.g., Munafò et al., 2017).

      Yes, transparency would be an overarching term over reliability and reproducibility.

    6. Ensuring the reliability of knowledge and reported results constitutes the heart of science and the scientific method.

      The key term here -- reliability -- is also ripe for rethinking. So far (and in this report) it's mostly about how we get from data to results. But given known problems with 'the grant cycle', reliability should be construed more broadly. Another argument to cover scholarly annotations, which are also data but generated by researchers themselves (sort of like metascience).

    7. The specific ways in which cultural barriers to open science operate vary significantly by field or discipline. Overuse and misuse of bibliographic metrics such as the Journal Impact Factor in the evaluation of research and researchers is one important “bug” in the operation of the research enterprise that has a detrimental effect across disciplines. The perception and/or reality that researchers need to publish in certain venues in order to secure funding and career advancement may lock researchers into traditional, closed mechanisms for reporting results and sharing research products. These pressures are particularly strong for early career researchers.

      Applause: "Building a supportive culture" is the first item suggested by the committee to accelerate progress in open science by design.

    8. •Provocation: explore or mine open research resources and use open tools to network with colleagues. Researchers have immediate access to the most recent publications and have the freedom to search archives of papers, including preprints, research software code, and other open publications, as well as databases of research results, all without charge or other barriers. Researchers use the latest database and text mining tools to explore these resources, to identify new concepts embedded in the research, and to identify where novel contributions can be made. Robust collaborative tools are available to network with colleagues.

       •Ideation: develop and revise research plans and prepare to share research results and tools under FAIR principles. Researchers and their collaborators develop and revise their research plans, collect preliminary data from publicly available data repositories, and conduct a pilot study to test their new methods on the existing data. When applying for research funding, they develop the required data management plans, stating where data, workflow, and software code will be available for use by other researchers under FAIR (Findable-Accessible-Interoperable-Reusable) principles. In addition, in some cases, they may decide to pre-register their research plans and protocols in an open repository.

      These two components -- provocation and ideation -- are probably most relevant to the public scholarly annotation that I am interested in. But they barely touch upon it, because of this document's emphasis on data sharing. Again, this reflects a neglect of the value represented in annotations.

    9. In order to frame the issues and possible actions, the committee developed the concept of open science by design, defined as a set of principles and practices that fosters openness throughout the entire research life cycle (Figure S-1).

      This is a useful framework, accompanied by a useful visual that does not convey a linear lifecycle.

    10. To evaluate more fully the benefits and challenges of broadening access to the results of scientific research, described as “open science,” the National Academies of Sciences, Engineering, and Medicine appointed an expert committee in March 2017. Brief biographies of the individual committee members are provided in Appendix A. The committee was charged with focusing on how to move toward open science as the default for scientific research results, and to indicate both the benefits of moving toward open science and the barriers to doing so. This report presents the findings and recommendations of the committee, with the majority of the focus on solutions that move the research enterprise toward open science.

      Background of this report compiled by the National Academies.

    1. Before reading this report, I happened to read a much newer article titled Administrative social science data: The challenge of reproducible research, published in Big Data & Society — a journal I have reviewed for. There are some clear advancements the social science communities (and scholarly communities in general) have made since 1985. For instance, we have various research tools and platforms available these days to facilitate data management, sharing, and publishing. Git — a version control system highly recommended by this article — was nonexistent when the National Academies report came out; neither were platforms and initiatives such as the Open Science Framework, the Harvard Dataverse Network, and Figshare. However, when juxtaposing challenges discussed in both pieces, what struck me — again — was how slow it has been to shift academic cultures to promote data sharing. Indeed, developing tools is easier, whereas changing cultures at many levels — e.g., in research labs, departments and colleges, institutions, associations, funding agencies — is much, much more difficult.

      A blog post I wrote while attending the Data Sharing workshop organized by AERA and NSF. Another reminder that it's hard work to change culture.

    1. Recommendation 16. Institutions and organizations through which scientists are rewarded should recognize the contributions of appropriate data-sharing practices.

      Oh man - kinda depressing to see these recommendations put forward in 1985 -- before I was even born. It must have been so hard to bring about cultural changes in the academy.

    2. But there are potential costs for an investigator who provides data to others: costs of time, money, and inconvenience; fears of possible criticism, whether justified or not; possible violations of trust by a breach of confidentiality; and forgoing recognition or profit from possible further discoveries.

      These potential costs of data sharing also apply to the sharing of annotations -- another type of data generated in scholarly processes.

    1. National Research Council. 1985. Sharing Research Data. Washington, DC: The National Academies Press. https://doi.org/10.17226/2033.

      This report was published by the National Research Council in 1985.

    1. 4. Use Cases. In order to evaluate and demonstrate the feasibility of the OAC Data Model, an initial set of use cases has been developed that are representative of a range of common scholarly practices involving annotation. This preliminary set is available from the OAC Wiki as OAC User Narratives/Use Cases and includes:
       - Citation of Non Printed Media
       - Commentary on Remote Resources
       - Shared Annotations Across Interfaces
       - Harvesting, Aggregating, Ranking and Presenting Annotations from Multiple Sites
       - Annotating Relationships Between Multiple Mixed-Media Resources
       - Annotations which Capture Netchaining Practices
       - Annotations with Compound Targets

      Use cases that are quite brief. But useful.

    2. In the OAC model, an Annotation is an Event initiated at a date/time by an author (human or software agent). Other entities involved in the event are the Content of the Annotation (aka Source) and the Target of the Annotation. The model assumes that the core entities (Annotation, Content and Target) are independent Web resources that are URI-addressable. This approach simplifies and decouples implementation from the repository. An essential aspect of an annotation is the (implicit or explicit) expression of the “annotates” relationship between the Content and the Target.

      The OAC data model of annotation. Graph-based, interestingly.
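      To make the graph shape concrete, here is a minimal sketch of such an annotation serialized as a JSON-LD-style dict. Property names follow the later W3C Web Annotation vocabulary rather than the exact OAC terms, and all URIs are invented placeholders; the point is simply that the annotation, its body (Content) and its target are three separate, URI-addressable resources linked by an "annotates" relationship.

```python
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/anno/1",                    # the Annotation is itself a Web resource
    "type": "Annotation",
    "created": "2018-07-15T10:00:00Z",                    # an event initiated at a date/time
    "creator": "http://example.org/people/researcher-1",  # human or software agent
    "body": "http://example.org/notes/42",                # the Content (aka Source)
    "target": "http://example.org/articles/d123",         # the Target being annotated
}

print(json.dumps(annotation, indent=2))
```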

    3. The OAC approach is based on the assumption that clients publish annotations on the Web and that the target, content and the annotation itself are all URI-addressable Web resources. By basing the OAC model on Semantic Web and Linked Data practices, we hope to provide the optimum approach for the publishing, sharing and interoperability of annotations and annotation applications. In this paper, we describe the principles and components of the OAC data model, together with a number of scholarly use cases that demonstrate and evaluate the capabilities of the model in different scenarios.

      This paper introduces the Open Annotation Collaboration (OAC), which preceded the W3C Open Annotation working group that led the development of web annotation standards.

    1. My main point, then, is simple: extensive annotation can work in print, primarily because the organizational principles of the medium are firmly established and implicitly understood by most readers. Extensive annotation in the electronic medium, however, is more problematic. On the one hand, it is extremely tempting to create superannotated editions, bringing a given text together with all its sources, all its commentary, all its reviews, all its illustrations, even all its parodies and film adaptations. But until the conventions of the electronic edition are securely established, too many potential users will find these editions too difficult to navigate.

      A fair point about the difficulty with over-annotated electronic documents. This is from a publishing point of view, however. A publisher does not want to have cluttered texts. But has tech evolved far enough to mitigate this difficulty? How would scholars (moving from the poetry reading scenario) respond to the clutter problem? Time to revisit.

    2. But, even more, we realize the necessity of convincing skeptical, technophobic colleagues of the usefulness of the electronic medium. These are people, on the whole, for whom "nonlinear" modes of thought have little appeal; they sneer at all the hype about hypertext and return to their studies or their library carrels to hold in their hands the objects they revere. Such scholars are not simply going to retire or disappear, and we need them, if a market for electronic editions is to develop. They can, with only slight difficulty, navigate a complex scholarly book like my Cornell volume, because they understand the organizational principles of such objects, principles that have gradually developed over a half millennium of print-based scholarly editing. But turn them loose in an electronic environment, and they tend to get lost: the conventions of organizing electronic books have yet to be established.

      This interesting commentary touches upon how an established culture shapes how we interact with text.

    3. This edition will of course have many hypertextual features: the ability to move directly from the text to an image of the printed page or from the text to a critical apparatus, the ability to set different versions of poems side by side for purposes of comparison, and even (we are told) simultaneous scrolling of open text windows. But it would be a mistake, I believe, to regard it simply as a "hypertext," at least in the sense in which promoters and theorists of hypertext have intended the term. We are not interested in "nonlinear" modes of thought; rather, we are intent on providing scholars with evidence that will allow them to draw very "linear" conclusions about this collection of poems. We are not interested in creating a vast, complex web of documents, at the center of which is a Lyrical Ballads poem, but which is so rich in annotation that the poem is buried beneath the weight of its associated texts.
    4. The enthusiasm has not subsided—much—but the giddiness has, as we have confronted the practical realities of delivering an actual product. The limitations of software, the awkwardness of SGML markup (not to mention learning how to do it), the difficulties and costs of digital reproductions of manuscripts, and the simple fact that Lyrical Ballads is a very well edited text forced us over and over again to rethink our project and change its scope.

      Obstacles introduced by tech as well.

    5. various layers of annotation, enforces a special discipline on those who attempt to read the volume: readers are constantly led away from the text and back to it again; they are forced to keep track of different kinds of annotation, sometimes in different parts of the volume. In short, they cannot lightly skim. Annotation, in this respect, is a rhetorical means of impressing readers with the significance of the poetry, and it is also a means of rehabilitating the poetry, by forcing readers to scrutinize Wordsworth's efforts as a translator in unprecedented ways.

      This is quite remarkable. Multiple layers of annotations "forced" on top of poems make the reader engage with poetry in a fresh way. In this case, the value of generating annotations is apparent, at least to the editor.

    6. My challenge, then, was to find reasons why these poems are interesting and to make those reasons apparent. My means for doing so was annotation. A typical page of my edition of Wordsworth's Aeneid has four bands of text: one containing the reading text of the translation and three in smaller type underneath. The top band in smaller type gives Coleridge's unpublished notes to the translation, a wonderful find that, to my knowledge, only Robert Woof and Stephen Parrish had examined before I did. The middle band provides the critical apparatus of verbal variants, such as one would find in any variorum edition, and the bottom band contains extensive annotations about Wordsworth's methods of translation—comparisons between the translation and the Latin, suggestions about ways in which his translation may have been influenced by prose paraphrases and scholarly commentaries, passages in his original poems that allude to the Aeneid, and, of course, obligatory attempts to explain Coleridge's comments. In addition, after the reading text, a lengthy set of editorial notes records Wordsworth's borrowings from four earlier translations of the Aeneid: the translations of John Ogilby (the 1650 edition that Wordsworth owned, which is now in the Wordsworth Library, Grasmere), John Dryden, Joseph Trapp, and Christopher Pitt.

      Four bands of text to help people see (translated) poems interesting. A very different goal than annotating genes.

    7. its editors have maintained that annotations not concerned with textual matters add an undesirable layer of clutter to volumes that are already very large and very full.

      Undesired clutter introduced by annotations

    8. It is by doing this public annotation that I realize (again) how privileged I am as an academic affiliated with a big university. There are certain articles that I can access through my university libraries. But when I annotate them with Hypothesis, my annotations become 'orphans' because the articles are not accessible to the general public. This raises questions about what this space is and whom it exists for.

    1. 3.1.3. Workflow Components. One of Taverna’s key values for example is the availability of services to the core system, current figures estimate this to be around 3500, mainly concentrated in the bioinformatics problem domain. Taverna has also began to share workflows through the myExperiment project (21) in order to make such workflows available to the community as a whole. Taverna has a GUI-based desktop application that uses semantic annotations associated with services. It employs the use of semantic-enabled helper functions which will be made available in the next public release of the software. Developers can incorporate new services through simple means and can load a pre-existing workflow as a service definition within the service palette, which can then be used as a service instance within the current workflow (i.e. to support grouping). Services within the pre-existing workflow can also be instantiated individually within the current workflow and a developer can create user-defined perspectives that allow a panel of pre-existing components to be specified.

      This is an important paper (based on citation numbers).

      It provides a systematic intro to workflows in e-science. Similar to another paper I just annotated, it's coming from an engineering perspective. Annotation here plays a lesser role (conceptually) than the annotation I am making right now. Specifically, annotations discussed in such e-science workflows serve more of a mechanical role (e.g., for preservation), instead of a more epistemic role.

    1. Scientific workflows are used by scientists not only as computational units that encode scientific methods that can be shared among scientists, but also to specify their experiments. In this paper we presented a research object model to capture all the needed information and data including the methods (workflows) and other elements: namely annotations, datasets, provenance of the workflow results, etc.

      This is interesting and valuable work, focusing on the design and engineering aspects of a (computation-centric) workflow in the sciences. There is a light discussion about the value generated by maintaining such a workflow and workflow-centric research objects. I would also appreciate more explanation of annotation activities in the workflow.

    2. A research object normally starts its life as an empty Live Research Object, with a first design of the experiments to be performed (which determines what workflows and resources will be added, by either retrieving them from an existing platform or creating them from scratch). Then the research object is filled incrementally by aggregating such workflows that are being created, reused or re-purposed, datasets, documents, etc. Any of these components can be changed at any point in time, removed, etc.

      Lifecycle of a workflow-centric research object.

    3. Figure 2 provides a more detailed view of the resources that compose workflow templates and workflow runs. A workflow template is a graph in which the nodes are processes and the edges represent data links that connect the output of a given process to the input of another process, specifying that the artifacts produced by the former are used to feed the latter. A process is used to describe a class of actions that when enacted give rise to process runs. The process specifies the software component (e.g., web service) responsible for undertaking the action. Note that some workflow systems may specify in addition to the data flow, the control flow, which specifies temporal dependencies and conditional flows between processes. We chose to confine the workflow research object model to data-driven workflows, as in Taverna [16], Triana [2], the process run Network Director supplied by Kepler [4], Galaxy, Wings [7], etc.

      This is getting clearer: A workflow template is a graph whose nodes are processes and edges are data links/moves.

      The example from bioinformatics shows that understanding/constructing such a model requires much domain knowledge (e.g., gene stuff). So annotations made in such pathways -- like annotating a gene in a publication -- have domain-specific value not shared by other disciplines.

      This domain specificity is linked to an annotation I made on 'dark data' about the credit system. In bioinformatics, annotating a gene has already been recognized as an important scientific act with value to the field, while in educational research the value of annotation is still to be discovered, debated, and agreed upon.

    4. Figure 1 illustrates a coarse-grained view of a workflow-centric research object, which aggregates a number of resources

      A sort of UML diagram illustrating relations among different objects in a workflow

    5. our model is built on earlier work on myExperiment packs [15], which aggregate elements such as workflows, documents and datasets together, following Web 2.0 and Linked Data principles [18, 17]. The myExperiment ontology [14], which forms the basis for our research object model, has been designed such that it can be easily aligned with existing ontologies. For instance, their elements can be assigned annotations comparable to those defined by Open Annotation Collaboration (OAC).

      Important information about the myExperiment ontology framework.

    6. To overcome these issues, additional information may be needed. This includes annotations to describe the operations performed by the workflow; annotations to provide details like authors, versions, citations, etc.; links to other resources, such as the provenance of the results obtained by executing the workflow, datasets used as input, etc. Such additional annotations enable a comprehensive view of the experiment, and encourage inspection of the different elements of that experiment, providing the scientist with a picture of the strengths and weaknesses of the digital experiment in relation to decay, adaptability, stability, etc.

      Annotation--of various types of objects--plays an important role in scientific workflows, to support reproducibility for instance.
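      As a rough illustration (my own sketch, not the paper's ontology), a workflow-centric research object can be thought of as a container that bundles the workflow with its datasets, provenance links, and free-form annotations such as authors, versions, and descriptions of what each step does. The names, URIs, and fields below are invented.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ResearchObject:
    workflow_uri: str
    datasets: List[str] = field(default_factory=list)          # inputs used by the workflow
    provenance: List[str] = field(default_factory=list)        # links to records of past runs/results
    annotations: Dict[str, str] = field(default_factory=dict)  # author, version, step descriptions, ...

ro = ResearchObject(
    workflow_uri="http://example.org/workflows/alignment-pipeline",
    datasets=["http://example.org/data/sequences.fasta"],
    provenance=["http://example.org/runs/2012-03-01"],
    annotations={
        "author": "Example Lab",
        "version": "1.2",
        "operation": "aligns input sequences against a reference database",
    },
)
print(ro)
```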

    7. These richly annotated objects are what we call workflow-centric research objects. The notion of Research Object has been introduced in previous work [20, 19, 1]—here we focus on Research Objects that encapsulate scientific workflows (hence workflow-centric).
    8. Scientific workflows are used to describe series of structured activities and computations that arise in scientific problem-solving, providing scientists from virtually any discipline with a means to specify and enact their experiments [3]. From a computational perspective, such experiments (workflows) can be defined as directed acyclic graphs where the nodes correspond to analysis operations, which can be supplied locally or by third party web services, and where the edges specify the flow of data between those operations.

      A definition of scientific workflow, and an operationalization from a computational perspective. It reminds me of work on orchestration graphs in CSCL. Wondering how much standardization there is and whether standardization of workflows is meaningful at all.
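      Under that computational definition, a workflow reduces to a small data structure: processes as nodes, data links as directed edges, with an execution order recoverable by a topological sort. A toy sketch with invented step names:

```python
from collections import defaultdict

edges = defaultdict(list)  # process -> downstream processes fed by its output

def add_data_link(producer: str, consumer: str) -> None:
    edges[producer].append(consumer)

# Toy three-step pipeline.
add_data_link("fetch_sequences", "align_sequences")
add_data_link("align_sequences", "summarize_alignment")

def topological_order(graph) -> list:
    """Return processes in an order that respects every data link (graph assumed acyclic)."""
    indegree = defaultdict(int)
    for src, dsts in graph.items():
        indegree.setdefault(src, 0)
        for dst in dsts:
            indegree[dst] += 1
    ready = [node for node, deg in indegree.items() if deg == 0]
    order = []
    while ready:
        node = ready.pop()
        order.append(node)
        for dst in graph.get(node, []):
            indegree[dst] -= 1
            if indegree[dst] == 0:
                ready.append(dst)
    return order

print(topological_order(edges))
```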

    1. Brooks Hanson, Director of Publications for the American Geophysical Union, summed up the day with a list of goals for a scholarly annotation layer: It must be built on an open but standard framework that enables global discovery and community enrichment. It must support granular annotation of elements in all key formats, and across different representations of the same content (e.g. PDF vs HTML). There must be a diversity of interoperable annotation systems. These systems must be fully accessible to humans, who may need assistance to use them, and machines that will use APIs to create and mine annotations. It must be possible to identify people, groups, and resources in global ways, so that sharing, discovery, and interconnection can span repositories and annotation services. These are lofty goals.

      Quite insightful ideas about a "scholarly annotation layer." Can we claim the existence of such a layer yet? Right now it seems such a layer still operates at the individual level. When there are public ones, they don't talk with other public annotation layers. The goals are quite lofty indeed. That's why it's hard and fascinating.
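      One small, concrete instance of the "machines will use APIs to create and mine annotations" goal: fetching the public annotations anchored on a given document. The sketch below assumes the public Hypothesis search endpoint (https://api.hypothes.is/api/search) and its "rows"/"user"/"text" response fields as I understand them; the target URL is a placeholder.

```python
import json
import urllib.parse
import urllib.request

def public_annotations(target_uri: str, limit: int = 20):
    """Return (user, comment) pairs for public annotations anchored on target_uri."""
    query = urllib.parse.urlencode({"uri": target_uri, "limit": limit})
    with urllib.request.urlopen(f"https://api.hypothes.is/api/search?{query}") as resp:
        payload = json.load(resp)
    return [(row.get("user"), row.get("text")) for row in payload.get("rows", [])]

if __name__ == "__main__":
    for user, text in public_annotations("https://example.org/some-article"):
        print(user, ":", (text or "")[:80])
```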

    2. The goals of the workshop were to review existing uses of annotation, discuss anticipated uses, consider opportunities and challenges from the perspective of both publishers and implementers, converge on a definition of interoperability, and identify next steps. The survey of existing uses began with UCSD’s Anita Bandrowski who presented an overview of SciBot, a tool that’s being used today to validate Research Resource Identifiers in scientific papers. Sebastian Karcher, who works with the Qualitative Data Repository at Syracuse, discussed an annotation-enhanced workflow for sharing, reusing, and citing qualitative data. GigaScience’s Nicole Nigoy presented the results of the Giga-Curation Challenge at Biocuration 2016. Saman Ehsan, from the Center for Open Science, highlighted the role annotation can play when researchers work together to reproduce studies in psychology. Mendeley’s William Gunn described annotation of research data as not merely a supplement to scholarly work, but potentially a primary activity. John Inglis, executive director of Cold Spring Harbor Laboratory Press, envisioned an annotation layer for bioRxiv. And Europe PMC’s Jo McEntyre showed an experimental system that mines text for entities (e.g. biomolecules) and automatically creates annotation layers that explain and interconnect them.

      Diverse usages of annotations by key stakeholders.

    3. As an annotator, I want to be able to assign DOIs to individual contributions, or to sets of them. As an author, I want annotation to integrate with my preferred writing tool so that annotations can flow back into the text.

      As an author, the 2nd user story is natural to me. But it's refreshing to see the 1st user story -- an annotator claiming DOIs for individual annotations. I was like: Why? Why not?

    1. Scientists currently get credit for the citation of their published papers. Similar credit for data use will require a change in the sociology of science where data citation is given scholarly value. The publishing industry including, for example, Nature and Science is already beginning to provide a solution by allowing data to be connected with publications. However, space limits, format control, and indexing of data remain a major problem. Institutional and disciplinary repositories need to provide facilities so that citations can return the same data set that was used in the citation without adding or deleting records. Standards bodies for the sciences can set up methods to cite data in databases and not just data in publications (Altman & King, 2007).

      Reward and valuation systems are needed to give shared data more credit.

    2. Data becomes dark because no one is paying attention. There is little professional reward structure for scientists to preserve and disseminate raw data. Scientists are rewarded for creating high-density versions of their data in statistics, tables, and graphs in scholarly journals and at conferences. These publications in some ways are the sole end product of scientific inquiry. These products, while valuable, may not be as useful as some authors hope.

      The reward system in place does not reward the preservation of dark data.

    3. The data itself is often too voluminous or varied for humans to understand by looking at the data in its raw unprocessed form, so scientists use graphs, charts, mathematical equations, and statistics to “explain,” “describe,” or “summarize” the data. These representational tools help us to understand the world around us. The use of data simplification and data reduction methods in science is repeated at all scales of natural phenomena from the subatomic to the physics of our human scale world, to the function of a cell, a mating behavior of birds, or the functioning of ecosystems. But these summary representations of data rely on the underlying data, and the published papers do not capture the richness of the original data and are in fact an interpretation of the data. If the dark data in the tail is not selectively encoded and preserved, then the underpinning of the majority of science research is lost.

      Here the article is actually getting into the scholarly workflow, i.e., data representations generated for publications are more visible and accessible than the raw data used to generate them.

    4. We can organize science projects along an axis from large to small. The very large projects supporting dozens or more scientists would be on the left side of the axis and generate large amounts of data, with smaller projects sorted by decreasing size trailing off to the right. The major area under the right side of the curve is the long tail of science data. This data is more difficult to find and less frequently reused or preserved. In this paper we will use the term dark data to refer to any data that is not easily found by potential users. Dark data may be positive or negative research findings or from either “large” or “small” science. Like dark matter, this dark data on the basis of volume may be more important than that which can be easily seen. The challenge for science policy is to develop institutions and practices such as institutional repositories, which make this data useful for society.

      Dark data--an interesting take on the "long tail" of scientific research, which includes those studies conducted by a single or a few scientists without funding.

      If data here is defined more generally--not only as data generated from empirical studies but also the actual scholarly process--the idea of dark data would take on new meanings. It is not only about the size of a project, but about the different parts of a project that get more or less recognition. For example, an opaque practice will only reveal the final publication, whereas a more transparent practice would share data, algorithms, etc. But rarely do scientists share how their ideas developed from a mere hunch to a grant proposal and then to a substantial study. Here the idea of dark data could contain data related to processes of scholarly production that do not get talked about, like how I am now annotating this article to develop an idea that's still fuzzy to myself but may (if I'm lucky) grow into something I cannot imagine. To me, this is the darker data in scholarly production, beyond empirical data generated by smaller projects.

    1. In this chapter, the authors reflect on the reasons for such hybrids, specifi-cally through an exploration of eLaborate. As a virtual research environment, eLaborate targets both professional scholars and volunteers working with textual resources. The environment offers tools to transcribe textual sources, to annotate these transcriptions, and to publish them as digital scholarly editions. The majority of content currently comprises texts from the cultural heritage of Dutch history and literary history, although eLaborate does not put limits on the kind of text or language. Nor does the system impose limits on the openness of contribution to any edition project. Levels of openness and access are solely determined by the groups of users working on specific texts or editions. This Web 2.0 technology-based software is now used by several groups of teachers and students, and by scholarly, educated, and interested volunteers.

      This chapter describes a tool named eLaborate, "in which scholars can upload scans, transcribe and annotate text, and publish the results as an online text edition which is freely available to all users." On p. 123, there is an interesting critique of how the scholarly workflow has remained static for almost 2,000 years despite tech advancements.