Archival Theory's Double-Gap: How Decades of Unrealized Professional Vision Clarify the Role of Machine Learning in Archives

I. The Ghost Returns

In the summer of 1989, Randall Jimerson opened an essay in the American Archivist with a literary conceit borrowed from Dickens. The Ghost of Archives Yet to Come led him to a gleaming, air-conditioned room filled with banks of computers and display screens. A brightly colored sign read DATA ARCHIVE. Efficient information processors answered inquiries about university policies, student records, alumni profiles, and faculty publications with a claimed 99 percent satisfaction rate. Then the Spirit led him downstairs to a library basement, where a narrow corridor ended at a steel door with faded letters spelling UNIVERSITY ARCHIVES. Inside, stacks of Hollinger boxes crowded the room. A lone voice asked, hopefully, whether someone had finally come to visit. (Jimerson, 1989/2000, pp. 607–608)

Jimerson's ghost story dramatized a specific kind of threat: not the destruction of archives by an external enemy, but the internal displacement of archival services by an institutional competitor that redefined the same organization's information needs without archival principles. Jimerson argued that the archival profession had committed a fatal error: defining its business as preserving records rather than serving information needs. The profession operated with a product orientation, building finding aids and hoping users would come, rather than a marketing orientation that would begin with user needs and work backward to determine what services to offer. (Jimerson, 1989/2000, pp. 610–611)

The Levy and Robles study, which Jimerson cited as the most revealing of the profession's self-analysis efforts, held up a particularly unflattering mirror. Resource allocators perceived archivists as "well liked for our passivity... respected for our service, but service is by implication reward enough; we are admired for our curatorial ability, meaning we are quiet, pleasant, and powerless." (Jimerson, 1989/2000, p. 609) This was a self-conception that had internalized the gap between what the profession's theorists prescribed and what practitioners actually did; a gap already visible in 1989, since Jimerson was building explicitly on the work of Elsie Freeman Finch (1984), who had prescribed a user-centered reorientation years earlier. (Jimerson, 1989/2000, pp. 614–615)

The ghost has returned. In 2026, the gleaming Data Archive has a name (ChatGPT, Claude, Gemini) and it answers back. Millions of people now route information needs through large language model systems and treat the results as authoritative. These tools present a host of real problems at present, ranging from intellectual property concerns to enormous energy and water use. Those issues should be addressed, but even once they are, a core epistemological issue remains. The threat Jimerson dramatized has escalated: it is no longer merely possible that an institutional computer center will displace the archives, but that institutions themselves will route their information needs through AI systems that operate without provenance frameworks, without appraisal theory, and without the kind of relational description that makes primary sources intelligible rather than merely findable.

This article concedes the premise. There is no getting the genie back in the bottle, and the archival profession will need to adjust. But the adjustment needed is operational, not theoretical. The profession already possesses the intellectual framework for responsible engagement with computational tools, and that framework is stronger than what corporate AI currently offers. The thesis of this article is that archival theory has been significantly ahead of both available technology and actual professional practice for decades; a "double gap" that the current discourse around artificial intelligence obscures rather than clarifies. Correctly distinguishing large language models from specialized machine learning tools reveals that the profession already possesses the theoretical framework for responsible ML adoption. The bottleneck is recognition.

II. The First Gap: Theory Ahead of Technology

The first dimension of the double gap is temporal: archival theorists articulated visions that the technology of their era could not fulfill. These were not failures of implementation but acts of intellectual foresight; frameworks whose realization required computational capabilities that did not yet exist.

Provenance as Retrieval

In 1985, David Bearman and Richard Lytle contended that the principle of provenance, the profession's most distinctive intellectual tool, had been severely underutilized. Rather than merely organizing records by origin, provenance should be exploited as a retrieval mechanism and, ultimately, as the foundation for a universal information access system serving living organizations. (Bearman & Lytle, 1985/2000, p. 345)

The obstacle was not provenance itself but the profession's implementation of it. North American archival theory had inherited a nineteenth-century, mono-hierarchical view of organizations that could not capture the reality of modern institutions with their task forces, dotted-line reporting, and perpetual restructuring. The record group concept had become so identified with provenance that critics were perceived as rejecting provenance itself. In reality, the record group was a shelf-order system that imposed mono-hierarchical constraints on intellectual structure: since a record can go only one place on a shelf, complex organizational realities were flattened. (Bearman & Lytle, 1985/2000, pp. 348–352)

Bearman and Lytle proposed an alternative: a poly-hierarchical, networked model supporting an inference engine that would execute the reference archivist's inferential process, translating subject queries into organizational activity terms, and from there into specific record series likely to answer the user's question. They acknowledged that "the ultimate system is not in our immediate future," but laid out its architecture with remarkable specificity, including Bearman's proposal that all organizational functions could be captured in fewer than 500 transitive verbs. (Bearman & Lytle, 1985/2000, pp. 356–359) The technology to build poly-hierarchical, inference-capable retrieval systems did not exist in 1985. The archival theory justifying them did. And their suggested 500-verb vocabulary is, at bottom, a structured classification problem, which is precisely what a category of machine learning tools excels at.
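The shape of that classification problem can be sketched in a few lines. The sketch below maps free-text series descriptions onto a small controlled vocabulary of function verbs in the spirit of Bearman's proposal; the verbs and cue words here are invented for illustration, and a production system would learn such associations from labeled examples rather than hand-coded keywords.

```python
# Toy sketch: classify a series description against a controlled
# vocabulary of transitive function verbs (Bearman-style).
# The vocabulary and cue words below are hypothetical.
FUNCTION_VERBS = {
    "licenses": {"license", "permit", "certify"},
    "audits": {"audit", "inspect", "examine", "ledger"},
    "admits": {"admission", "enroll", "applicant", "student"},
}

def classify_function(description: str) -> str:
    """Return the function verb whose cue words best match the text."""
    tokens = set(description.lower().split())
    scores = {verb: len(tokens & cues) for verb, cues in FUNCTION_VERBS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify_function("Ledger books and audit reports, 1921-1940"))  # audits
```

However crude, the structure is the point: a fixed, inspectable vocabulary, a scoring rule a professional can audit, and an explicit "unclassified" outcome rather than a confident guess.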

Find, Not Make: Metadata as Discovery

Eight years later, Margaret Hedstrom extended the logic of provenance-based retrieval into the electronic records environment. Her 1993 article proposed a paradigm shift: from creating new descriptive information to capturing and managing the rich metadata that organizations already generate about their records. Adopting David Bearman's formulation, she argued that archivists should "find, not make" the information in their descriptive systems. (Hedstrom, 1993/2000, p. 390)

Hedstrom identified a fundamental inversion. In the paper world, archivists had too little descriptive information and needed to create more. Electronic records environments generate potentially vast metadata such as directories, data dictionaries, audit trails, and transaction logs. The descriptive paradigm would need to shift from augmenting scarcity to selecting from abundance. (Hedstrom, 1993/2000, p. 391) The challenge was how to choose among that abundance. And the timing problem made the work urgent: decisions made during system procurement and design determined whether adequate metadata would exist at all, leaving few opportunities for archival intervention after the fact. (Hedstrom, 1993/2000, p. 392)

Hedstrom was clear-eyed about the distance between her vision and the profession's capacity to realize it, acknowledging "a large chasm between existing practice and the potential of the electronic era." (Hedstrom, 1993/2000, p. 393) The chasm she named is the double gap in miniature: theory ahead of both technology and practice, visible to the theorist herself. The computational tools her vision required (named entity recognition, classification, and embedding-based indexing) were not available in 1993. The archival rationale for them was.

Appraisal Grounded in Evidence

If Bearman and Lytle reimagined retrieval and Hedstrom reimagined description, Mark Greene in 1998 turned the same critical lens on the foundational archival act of deciding what to keep and what to discard. Greene surveyed the dominant appraisal frameworks of his era and found them wanting: evidential approaches defined records as archival based on their status as evidence of transactions, with no reference to whether anyone actually used the material; functional analysis implied that all institutional functions must be documented; potential-use approaches predicted future research needs without grounding those predictions in evidence of past use. (Greene, 1998/2000, pp. 302–304)

Greene's intervention rested on a philosophical premise he borrowed from Terry Eastwood: if archives are "social creations for social purposes," then use is the only empirical measurement of the value society places on them. (Greene, 1998/2000, pp. 324, 332) He proposed use (actual, measurable patterns of researcher engagement) as the "presumptive determinant" in series-level appraisal, shifting the burden of proof onto arguments for retention rather than deaccessioning (Greene, 1998/2000, p. 334). This was not a mechanistic proposal. Greene's Minnesota Method began with extensive qualitative analysis, studying the state's economic landscape, consulting scholars across disciplines, and assessing institutional resources and user demographics, before layering in use data as an empirical check. (Greene, 1998/2000, pp. 307–311) The qualitative architecture preceded and framed the quantitative data.

The results were sobering. At the Minnesota Historical Society, accounting records for two railroad companies consumed a full 10 percent of the repository's entire manuscript collection but generated zero percent of use. Greene's approach was, by his own admission, "neither wholly scientific nor completely objective" (Greene, 1998/2000, p. 334), but it had something its competitors lacked: a diagnostic element that watched outcomes and could detect its own errors. The tools to gather and analyze use patterns computationally did not exist at the scale his vision implied. The evaluative framework did. And the startling results of his limited studies revealed exactly what computational analysis could detect across entire institutional holdings: for example, that a large holding was generating no use at all.

III. The Second Gap: Theory Ahead of Practice

The first gap, theory ahead of technology, should be expected in any field where intellectual ambition outpaces available tools. The second gap is more troubling: even where technology could support theoretical aspirations, the profession has not fully implemented what its own literature prescribed.

The Consistency Crisis and the Standards That Followed

In 1987, Avra Michelson published the results of a controlled experiment that should have been a wake-up call. She asked 40 repositories contributing to the Research Libraries Group's (RLG) Research Libraries Information Network (RLIN) Archival and Manuscript Control (AMC) database to assign topical index terms to identical collection descriptions. For one collection, 21 indexing repositories assigned 162 different access points. Not a single term was chosen by all indexers, resulting in a consistency rate of zero. Even when terms were collapsed to their most generic roots, consistency remained zero. (Michelson, 1987/2000, pp. 363–364) The major finding was blunt: standard conventions had not produced standard practice.

The underlying problem was structural. Archives had automated through bibliographic utilities because no archival descriptive standards existed for the automated environment. Library cataloging systems, designed for discrete bibliographic items, had supplanted customary archival description, producing what amounted to square pegs forced into round holes. (Michelson, 1987/2000, pp. 362–363)

A decade later, Daniel Pitti documented the profession's most ambitious response. Encoded Archival Description (EAD) was developed as a community-owned, standards-based encoding structure built on Standard Generalized Markup Language (SGML) and its simplified subset, Extensible Markup Language (XML). Pitti explicitly rejected two alternatives: Machine-Readable Cataloging (MARC), for its size limitations, inability to accommodate deep hierarchical structure, and small market base for software development; and HyperText Markup Language (HTML), for its procedural, display-oriented markup that lacked the semantic depth necessary for sophisticated searching and navigation. (Pitti, 1997/2000, pp. 403–404) EAD was designed to complement MARC, completing a three-tiered access architecture: collection-level catalog records at the top, detailed finding aids in the middle, primary source materials at the bottom. Pitti warned against abandoning this standards-based approach in favor of "ephemeral digital fashions." (Pitti, 1997/2000, p. 401)

The standard was built. Community governance was established. But the practice gap was visible even during adoption. Repositories were initially reluctant to share their finding aids for fear of professional judgment. Training workshops had to be developed from scratch. Participating repositories needed what the American Heritage Virtual Archive Project called "an acceptable range of uniform practice" negotiated before they could apply the standard consistently. (Pitti, 1997/2000, p. 411) Decades later, many repositories still lack EAD finding aids not because the standard failed, but because the resources and institutional will to implement it remained scarce. The theory-to-practice gap persists independent of the theory-to-technology gap.

Users Known but Not Served

The consistency crisis was a problem of how archives described their holdings. A parallel problem existed: for whom are holdings described? Between 1984 and 1994, a series of articles diagnosed with increasing precision the profession's failure to understand and serve its actual users.

Elsie Freeman Finch opened this line of critique in 1984 by identifying four misassumptions on which archival administration rested: that the profession was oriented toward users, that it knew who those users were, that it understood the nature of research, and that it provided adequate help in doing it. All four, she argued, were false. Drawing on evidence from multiple studies, Freeman Finch demonstrated that archival users were overwhelmingly non-academic. They were genealogists, bureaucrats, filmmakers, lawyers, city planners, and private individuals pursuing personal interests. Even explicit efforts to attract scholars, such as the Illinois State Archives' Descriptive Inventory, failed to increase scholarly use. The finding aids the profession labored to produce were, Freeman Finch observed, "at best intramural communications written by one archivist to be read by another, not by a user." (Freeman Finch, 1984/2000, p. 425) The profession had built its entire apparatus around a minority user who often did not use what was built for them, while the actual majority of users were treated with indifference or hostility. (Freeman Finch, 1984/2000, pp. 419–423)

Two years later, Paul Conway provided the methodological framework the profession lacked. He argued that the continuing reluctance to study users was "not so much... a problem of will as a problem of method." (Conway, 1986/2000, p. 435) His framework organized user studies around three objectives (Quality, Integrity, and Value), each assessed across five methodological stages of increasing sophistication. Conway drew a critical distinction between physical use of materials and usefulness: the downstream impact of archival information on individuals, groups, and society. (Conway, 1986/2000, pp. 436–437) His closing line captured the reorientation at stake: "Making the reference room rather than the loading dock the hub of archival activity requires facts about users — recorded facts, shared facts, but most of all facts organized for clear objectives." (Conway, 1986/2000, p. 448)

A decade after Finch's original diagnosis, Elizabeth Yakel and Laura Bost Hensey published a study of administrative users in university archives that confirmed the persistence of the patterns Finch had identified. Their findings were striking. Administrators, who comprised 30 to 41 percent of total reference requests and were often the primary user constituency, used archivists as intermediaries in a way that this article terms the "Oracle" model. They extracted answers without engaging with the evidentiary record. Not a single interviewee had ever used archival inventories, card catalogs, or finding aids. None expressed significant concern about the reliability of the information they received; confidence in the professional archivist was treated as equivalent to confidence in the information itself. The archivist both defined the research question and conducted the search; a dual mediation that captured very little unfiltered information about actual user needs. (Yakel & Bost Hensey, 1994/2000, pp. 467–469)

The practice gap is visible across a ten-year span within the literature itself. Freeman Finch prescribed the user-centered reorientation in 1984. Yakel and Bost Hensey documented the same patterns in 1994. The same diagnosis, the same prescription, the same partial implementation. And the Oracle model persists today, except that corporate large language models now replicate it at global scale, without the archivist's custodial training or contextual knowledge. Where the archivist-as-Oracle at least possessed provenance knowledge and professional judgment, the LLM-as-Oracle operates without either.

Competing Traditions as Structural Explanation

If the practice gap were solely a resource problem (too few staff, too little funding, too many boxes) it might be addressed by better advocacy or more efficient workflows. Luke Gilliland-Swetland's 1991 article suggests a deeper structural explanation.

Gilliland-Swetland challenged Richard Berner's influential claim that the American archival profession achieved consensus around provenance by the 1950s. This consensus, he argued, was illusory. Both the historical manuscripts tradition and the public archives tradition had adopted provenance but for fundamentally different reasons. Margaret Cross Norton adopted provenance because it established the legal authenticity of records. Historical manuscripts repositories adopted it because it illuminated historical context for scholarship. (Gilliland-Swetland, 1991/2000, pp. 132–133) Same practice, different values.

Beneath the surface agreement lay two competing conceptions of professional identity: the archivist as historian-interpreter, committed to the humanistic tradition and the interpretive relationship with documents, versus the archivist as administrator-custodian, committed to scientific management of records for administrative and public accountability needs. (Gilliland-Swetland, 1991/2000, pp. 126–127) These were not merely disagreements about method; they were disagreements about mission. The result was three decades of recurring debates — over certification, education requirements, the relative importance of history training, the role of interpretation — that the profession misdiagnosed as personal obstinacy rather than structural tension. When people believe consensus exists, disagreement feels like personal failure, and productive dialogue becomes impossible. (Gilliland-Swetland, 1991/2000, pp. 139–141)

This structural explanation matters for the present argument because the adoption of machine learning tools will encounter the same issues. The historian-interpreters will tend to see ML tools as threats to the interpretive relationship with documents. The information managers will tend to see them as efficiency tools for custodial workflows. Neither frame, on its own, is adequate. This article argues for a third position that draws on commitments both traditions share: ML tools governed by archival principles, preserving professional judgment while operationalizing the theoretical aspirations that the profession's competing camps have articulated from different starting points for more than a century.

IV. The Bridge: Disciplined ML as Theoretical Fulfillment

The preceding sections have established two gaps. Archival theorists envisioned capabilities that the technology of their era could not deliver: poly-hierarchical retrieval, automated metadata capture from abundance, and empirically grounded appraisal. And the profession did not fully implement what its own theorists prescribed, even where existing tools would allow. The bridge between these two gaps is a category of computational tools that the current AI discourse has obscured by collapsing it into the undifferentiated term "artificial intelligence."

The Critical Distinction

The tools generating the most public anxiety (ChatGPT, Claude, Gemini, and their successors) are at their root large language models. They generate fluent text by predicting probable sequences of words based on statistical patterns in massive training corpora. They can be useful as semantic exploration aids, helping a researcher formulate questions or survey unfamiliar terrain. But they are not factual authorities. They provide access without contextual scaffolding, and access without contextual scaffolding produces the illusion of knowledge. This is precisely the problem Conway identified in 1986 when he distinguished physical use from usefulness, and it is the problem Hardman diagnosed in 2026 when she described the gap between findability and intelligibility; the difference between a document that is searchable and a document that is understandable. (Hardman, 2026)

Even the companies building these systems recognize the problem. Recent iterations of commercial LLMs increasingly incorporate retrieval systems, citation mechanisms, and grounding tools; attempts to back-fill traceability into architectures that were not designed to provide it. But retrofitting provenance onto a system built for fluency is fundamentally different from building tools that operate within provenance frameworks from the start. The former is a patch; the latter is a design principle.

A different category of machine learning tools, what this article terms "disciplined ML," operates on entirely different principles. Named entity recognition, embedding-based retrieval, clustering, classification, and transcription tools are purpose-built for specific tasks, trainable on domain-specific data, and deployable by institutions with modest technical capacity. They do not generate text. They identify patterns, extract structure, and organize information according to rules that can be inspected, adjusted, and governed by professionals. The fear the profession rightly feels is about LLMs. The opportunity the profession is missing is about a different category of tool entirely.
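The defining property of these tools, rules that can be inspected and adjusted, can be shown in miniature. The sketch below is a rule-based entity extractor whose patterns are explicit and editable by an archivist; real deployments would typically refine a trained named entity recognition model on repository-specific data, and the patterns here are illustrative stand-ins, not a recommended rule set.

```python
import re

# Disciplined ML in miniature: every extraction rule is visible,
# auditable, and adjustable by a professional. Patterns below are
# illustrative stand-ins for a trained, domain-tuned NER model.
PATTERNS = {
    "date": re.compile(r"\b(1[89]\d{2}|20\d{2})\b"),  # four-digit years, 1800-2099
    "box": re.compile(r"\bBox\s+(\d+)\b"),            # container numbers
}

def extract_entities(text):
    """Return {label: [matches]} for every governed pattern."""
    return {label: pat.findall(text) for label, pat in PATTERNS.items()}

print(extract_entities("Correspondence, 1923-1947, Box 12 and Box 13"))
```

The contrast with an LLM is the governance surface: an archivist who disputes an output can point to the exact rule (or training example) that produced it and change it.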

Hardman (2026) captures this misdirection precisely. The profession's hostility toward AI is professional memory, not ignorance; evidence that the field has learned what happens when technology arrives as substitution rather than service. But the profession is rejecting the messenger while missing the opportunity the message contains.

Tool-to-Theory Correspondence

The most significant claim of this article is that specific disciplined ML tools correspond directly to specific theoretical problems the archival literature has already diagnosed. This is the critical move; the one Hardman (2026) noted she deliberately did not make, and the one that transforms the thesis from aspiration to demonstration.

Named entity recognition addresses the access point problem Bearman and Lytle (1985/2000) identified: the need to extract organizational names, personal names, functions, and activities from records as retrieval access points rather than relying on the archivist to assign them manually against an inconsistent authority structure. Embedding-based retrieval addresses the inferential process Bearman and Lytle described (the translation of subject queries into organizational activity terms across poly-hierarchical structures) as well as the recall retrieval problem Michelson (1987/2000) identified, where archival users need exhaustive results rather than precision results. Clustering addresses the challenge of imposing intellectual order on large, heterogeneous collections. And classification with mission-based priority overlays addresses Greene's (1998/2000) call for use-informed appraisal, enabling repositories to detect the patterns his limited studies revealed (such as 10 percent of holdings generating zero percent of use) across entire institutional holdings.
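The recall-retrieval correspondence above can be made concrete. The stdlib-only sketch below ranks finding-aid text against a query and returns every series above a low similarity threshold (recall) rather than only the single best hit (precision); the bag-of-words vectors and the sample finding aids are simplifications standing in for learned dense embeddings.

```python
import math
from collections import Counter

# Sketch of embedding-based recall retrieval: rank all finding aids
# by cosine similarity to the query and keep everything above a low
# threshold. Bag-of-words vectors stand in for learned embeddings;
# the sample finding aids are hypothetical.
def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recall_search(query, finding_aids, threshold=0.1):
    """Return every series scoring above threshold, best-first."""
    qv = vectorize(query)
    scored = [(title, cosine(qv, vectorize(text))) for title, text in finding_aids.items()]
    return [t for t, s in sorted(scored, key=lambda p: -p[1]) if s >= threshold]

aids = {
    "Series A": "student enrollment records and admission files",
    "Series B": "railroad accounting ledgers and audit books",
    "Series C": "student newspaper clippings and photographs",
}
print(recall_search("student records", aids))  # ['Series A', 'Series C']
```

The low threshold is the archival design choice Michelson's findings imply: an exhaustive result set the researcher can winnow, rather than a single confident answer.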

These correspondences are not coincidental. The archival literature diagnosed problems whose solutions require exactly the kind of structured, domain-specific, governable computational processing that disciplined ML tools provide. The profession did not need to wait for Silicon Valley to invent the right intellectual framework. The framework was already there.

A vulnerability must be acknowledged. These tools are trained on corpora, encoding relationships and optimizing for definitions of similarity that reflect the assumptions of their training data. The sovereignty argument that follows requires addressing governance of the tools themselves, not merely their outputs.

The Sovereignty Argument

Linda Henry's 1998 article on electronic records provides the governing principle. Henry's argument was never against technology. The first generation of electronic records archivists she defended were themselves technologists who embraced new media while maintaining archival principles. What she opposed was the removal of independent professional judgment from the custodial chain: the concentration of control in entities whose primary mission is neither preservation nor public accountability. That concern must be taken seriously, but the work of preserving, linking, and providing information through archives will endure because of its core functions, not in spite of them. As Henry concludes, endorsing Peterson, "The traditional archival principles — evidential and informational values, provenance, levels of arrangement and description — continue to undergird archival practice. That practice will grow and change, but the principles will endure." (Henry, 1998, pp. 587–588)

Hardman (2026) confirms this principle from the contemporary vantage point. If archivists do not shape how AI intersects with archival records, other systems will, privileging what is already well-described and producing a quiet drift toward invisibility for primary sources in the scholarly ecosystem. Her findability-versus-intelligibility gap is the diagnostic frame for what corporate LLMs systematically fail to provide: the relational context that makes a document not merely searchable but understandable.

The sovereignty position, then, is not to reject ML tools but to insist that computational workflows be governed by archival principles: transparency about training corpora, reversibility of outputs, and professional review of machine-generated relationships. This means the profession should acknowledge current tool limitations, use available tools now to begin processing, preserve raw data alongside computational outputs, and reprocess iteratively as better tools are developed with professional input. This mirrors how archivists already think about description as revisitation rather than single-pass completion. Preserving raw data and computational outputs alongside refined results is original order and provenance principles applied to computational workflows.
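What "preserving raw data alongside computational outputs" might look like in a workflow can be sketched simply: every machine-generated description is stored with the verbatim input it was derived from, the tool and version that produced it, and an explicit review field. The field names below are illustrative, not a published metadata standard.

```python
import datetime
import hashlib
import json

# Sketch of a provenance-governed computational output: the raw
# input is preserved verbatim (reversibility), the generating tool
# is identified (transparency), and professional review is an
# explicit, initially unfilled step. Field names are hypothetical.
def provenance_record(raw_text, output, tool_name, tool_version):
    return {
        "raw_sha256": hashlib.sha256(raw_text.encode()).hexdigest(),
        "raw_text": raw_text,         # original preserved alongside output
        "machine_output": output,     # reversible: raw input is retained
        "tool": tool_name,
        "tool_version": tool_version,
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reviewed_by": None,          # awaits professional review
    }

rec = provenance_record("Ltr., J. Smith to M. Jones, 1921",
                        {"persons": ["J. Smith", "M. Jones"]},
                        "toy-ner", "0.1")
print(json.dumps(rec, indent=2))
```

Reprocessing with a better tool then means appending a new record, not overwriting the old one, which is original order applied to computational description.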

V. The Payoff: From Access to Advocacy

If the argument of this article is correct, that disciplined ML tools can operationalize what archival theorists envisioned but could not achieve with the technology of their era, then the practical consequence is not merely better processing or faster description. It is a transformation of the profession's relationship with the public it serves.

John Grabowski made this argument concretely in 1992, drawing on Theodore Karamanski's observation that the historical profession is too small and too fragmented for any single subdiscipline to influence resource allocators on its own. Traditional outreach had not moved the public to general awareness. What the profession needed was not education but users, because someone who uses the product is more likely to value the industry that produces it. (Grabowski, 1992/2000, pp. 620–621)

Grabowski demonstrated the chain empirically: users become advocates, advocates become funders, funders ensure survival of the heritage. At the Western Reserve Historical Society, genealogist partnerships generated volunteers, microfilm purchases, and an estate gift that funded the entire U.S. Census on microfilm. The Ohio Genealogical Society successfully pressured the state legislature to protect access to vital records. National History Day introduced students to primary source research, with a WRHS-based project winning a national first-place award. Community collecting partnerships with African-American and Jewish communities produced not only richer collections but endowed curatorial positions and major foundation grants. Traveling exhibitions reached audiences far beyond repository walls. A mall exhibit attracted over 10,000 weekly viewers, compared to 1,300 in the museum gallery. (Grabowski, 1992/2000, pp. 622–627)

Wherever user-centered strategies were actually implemented, they worked. The principle endures even as the engagement mechanism shifts from physical exhibition to digital discovery, which is arguably more powerful, since digital access scales in ways mall exhibitions never could. Grabowski's 10,000 weekly viewers could become commonplace when the exhibit is a searchable digital collection accessible worldwide.

If disciplined ML tools operationalize the access and engagement the profession has called for since Finch in 1984, they generate the constituency pipeline Grabowski demonstrated. The profession does not need to play a different game. It already has the theoretical substance, the principled frameworks, and the empirical evidence that engagement produces advocacy. The tools are finally adequate to execute on what the theory has always demanded.

VI. Conclusion

The Ghost of Archives Yet to Come has returned. But the lesson of Jimerson's parable is not that the ghost is inevitable. The Data Archive, the gleaming room where efficient processors answer questions without archival principles, is what happens when someone else builds it. The profession building its own version, governed by its own theory, is not capitulation to the ghost. It is exorcism; it is engagement.

The double gap this article has described is real. Archival theorists envisioned poly-hierarchical retrieval, automated metadata capture, and empirically grounded appraisal before the technology to achieve them existed. The profession did not fully implement what those theorists prescribed, even where existing tools would allow. But the double gap, honestly diagnosed, is a gift: it tells the profession exactly where to act, and it confirms that the intellectual work has already been done.

The bottleneck is recognition. The archival profession does not need to invent a new theoretical framework for the AI era. It needs to recognize that the framework it built across four decades of rigorous scholarship already contains the principles that should govern computational workflows in archives. It needs to distinguish between the large language models generating justified anxiety and the disciplined ML tools that can operationalize long-standing theoretical commitments. And it needs to insist, with the confidence that comes from decades of intellectual preparation, that these tools operate under archival governance, preserving the professional judgment that Henry defended, the relational context that Hardman demands, and the user-centered engagement that Finch, Conway, Jimerson, and Grabowski have been calling for since before most of these tools were imagined.

The moment for recognition is now.

References

Bearman, D. A., & Lytle, R. H. (2000). The Power of the Principle of Provenance. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 345–360). Society of American Archivists. (Original work published 1985 in Archivaria, 21, 14–27)

Conway, P. (2000). Facts and Frameworks: An Approach to Studying the Users of Archives. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 433–448). Society of American Archivists. (Original work published 1986 in American Archivist, 49, 393–407)

Freeman Finch, E. (2000). In the Eye of the Beholder: Archives Administration from the User's Point of View. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 417–431). Society of American Archivists. (Original work published 1984 in American Archivist, 47(2), 111–123)

Gilliland-Swetland, L. J. (2000). The Provenance of a Profession: The Permanence of the Public Archives and Historical Manuscripts Traditions in American Archival History. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 123–141). Society of American Archivists. (Original work published 1991 in American Archivist, 54, 160–175)

Grabowski, J. J. (2000). Keepers, Users, and Funders: Building an Awareness of Archival Value. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 619–629). Society of American Archivists. (Original work published 1992 in American Archivist, 55, 464–472)

Greene, M. A. (2000). "The Surest Proof": A Utilitarian Approach to Appraisal. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 301–342). Society of American Archivists. (Original work published 1998 in Archivaria, 45, 127–169)

Hardman, E. (2026, March 19). Like it or not, AI has arrived in archives. Now is the time for archivists to take the reins. Katina Magazine. https://katinamagazine.org/content/article/open-knowledge/2026/like-it-or-not-ai-has-arrived-in-archives

Hedstrom, M. (2000). Descriptive Practices for Electronic Records: Deciding What Is Essential and Imagining What Is Possible. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 381–394). Society of American Archivists. (Original work published 1993 in Archivaria, 36, 53–63)

Henry, L. J. (2000). Schellenberg in Cyberspace. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 569–588). Society of American Archivists. (Original work published 1998 in American Archivist, 61, 309–327)

Jimerson, R. C. (2000). Redefining Archival Identity: Meeting User Needs in the Information Society. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 607–617). Society of American Archivists. (Original work published 1989 in American Archivist, 52, 332–340)

Michelson, A. (2000). Description and Reference in the Age of Automation. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 361–379). Society of American Archivists. (Original work published 1987 in American Archivist, 50, 192–208)

Pitti, D. V. (2000). Encoded Archival Description: The Development of an Encoding Standard for Archival Finding Aids. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 395–414). Society of American Archivists. (Original work published 1997 in American Archivist, 60, 268–283)

Yakel, E., & Bost Hensey, L. L. (2000). Understanding Administrative Use and Users in University Archives. In R. C. Jimerson (Ed.), American Archival Studies: Readings in Theory and Practice (pp. 449–471). Society of American Archivists. (Original work published 1994 in American Archivist, 57, 596–615)