Compilation and Presentation of Evidence

ARTICLE 2 October 2023

Evidence is how you or the opposing party can prove or refute the facts in your case.

When presenting evidence in a trial, it's essential to consider a series of recommendations to avoid problems in the final stages of the case, states our Head of Litigation and Arbitration Department, Rubén Rivas.

"The compilation of evidence involves the search, acquisition, and organization of documents, records, witness testimonies, experts, and any other means of proof that may be relevant to the case. This process may include research, requesting documents from third parties, conducting interviews with witnesses, and obtaining expert reports," says Rivas.

Our associate attorney lists the steps for compiling and presenting evidence:

  • Know the rules of evidence: Familiarize yourself with the specific rules and procedures governing the presentation of evidence in the court where the trial takes place. This includes knowing the objections that can be raised, authenticity requirements, and admissibility standards.
  • Gather relevant and credible evidence: Identify and carefully gather evidence that supports your case. Ensure that it is relevant to the issues in dispute and credible. This may include documents, records, witness testimonies, photographs, or other means of proof.
  • Prepare and organize your evidence: Organize your evidence clearly and systematically to facilitate its presentation at the trial. Use labels, indexes, or folders to keep it orderly and accessible (a simple indexing sketch follows this list). Also prepare extra copies of relevant documents to share with the court, attorneys, and involved parties.
  • Obtain affidavits or testimonies: If you have relevant witnesses, make sure to obtain their affidavits or written testimonies in advance. This will allow you to present their testimonies consistently and coherently during the trial.
  • Consult experts: If the evidence requires specialized knowledge, consider consulting experts in the relevant field. These experts can provide opinions and technical analysis that support your case and help interpret the evidence more accurately.
  • Be clear and concise when presenting evidence: When presenting evidence during the trial, be clear, concise, and focused on key points. Avoid digressions or irrelevant details that may distract or confuse the court. Use charts, images, or audiovisual media if necessary to enhance the understanding of the evidence.
  • Maintain objectivity: When presenting evidence, avoid manipulating or distorting it to support your position. Evidence should be presented objectively and honestly, allowing the court to assess its weight and credibility.
  • Prepare your witnesses: If you have witnesses who will testify during the trial, make sure to prepare them adequately. Review relevant facts with them, the questions they will be asked, and potential objections. This will help ensure that they provide clear and coherent testimonies.
  • Respect the court's rules: During the presentation of evidence, follow the judge's instructions and adhere to procedural rules. Avoid unnecessary interruptions, do not interrupt opposing attorneys, and maintain a respectful tone at all times.
  • Work closely with your attorney: Collaborate closely with your attorney in the preparation and presentation of evidence. Trust their expertise and follow their advice on how to present your case more effectively.
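As a purely illustrative aside on the organization step above, the short Python sketch below shows one way a running exhibit index could be kept so that labels, descriptions, and sponsoring witnesses stay in one place and extra copies are easy to track. The field names and exhibit numbering are invented for the example; adapt them to whatever your court and your attorney actually require.

```python
# Hypothetical exhibit-index sketch; field names are invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Exhibit:
    number: str              # e.g. "P-1" for the first exhibit you plan to offer
    description: str         # short description of the document or object
    source: str              # where and how the item was obtained
    sponsoring_witness: str  # who will authenticate it at the hearing
    copies_prepared: int = 0 # extra copies for the court and the other parties

def index_report(exhibits: List[Exhibit]) -> str:
    """Return a plain-text index that can be printed and shared."""
    rows = sorted(exhibits, key=lambda e: e.number)
    return "\n".join(
        f"{e.number:<6} {e.description} (witness: {e.sponsoring_witness}, copies: {e.copies_prepared})"
        for e in rows
    )

if __name__ == "__main__":
    exhibits = [
        Exhibit("P-2", "Signed lease agreement", "client records", "J. Doe", 4),
        Exhibit("P-1", "Email thread, 3-5 March", "client inbox export", "J. Doe", 4),
    ]
    print(index_report(exhibits))
```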

-Written by the Torres Legal Team.

Encyclopedia Britannica


Evidence, in law, is any of the material items or assertions of fact that may be submitted to a competent tribunal as a means of ascertaining the truth of any alleged matter of fact under investigation before it.

To the end that court decisions are to be based on truth founded on evidence, a primary duty of courts is to conduct proper proceedings so as to hear and consider evidence. The so-called law of evidence is made up largely of procedural regulations concerning the proof and presentation of facts, whether involving the testimony of witnesses, the presentation of documents or physical objects, or the assertion of a foreign law. The many rules of evidence that have evolved under different legal systems have, in the main, been founded on experience and shaped by varying legal requirements of what constitutes admissible and sufficient proof.

Although evidence, in this sense, has both legal and technical characteristics, judicial evidence has always been a human rather than a technical problem. During different periods and at different cultural stages, problems concerning evidence have been resolved by widely different methods. Since the means of acquiring evidence are clearly variable and delimited, they can result only in a degree of probability and not in an absolute truth in the philosophical sense. In common-law countries, civil cases require only preponderant probability, and criminal cases require probability beyond reasonable doubt. In civil-law countries so much probability is required that reasonable doubts are excluded.

The early law of evidence

Characteristic features of the law of evidence in earlier cultures were that no distinction was made between civil and criminal matters or between fact and law and that rational means of evidence were either unknown or little used. In general, the accused had to prove his innocence.

The appeal to supernatural powers was, of course, not evidence in the modern sense but an ordeal in which God was appealed to as the highest judge. The judges of the community determined what different kinds of ordeals were to be suffered, and frequently the ordeals involved threatening the accused with fire, a hot iron, or drowning. It may be that a certain awe associated with the two great elements of fire and water made them appear preeminently suitable for dangerous tests by which God himself was to pass on guilt or innocence. Trial by battle had much the same origin. To be sure, the powerful man relied on his strength, but it was also assumed that God would be on the side of right.

The accused free person could offer to exonerate himself by oath. Under these circumstances, in contrast to the ordeals, it was not expected that God would rule immediately but rather that he would punish the perjurer at a later time. Nevertheless, there was ordinarily enough realism so that the mere oath of the accused person alone was not allowed. Rather, he was ordered to swear with a number of compurgators, or witnesses, who confirmed, so to speak, the oath of the person swearing. They stood as guarantees for his oath but never gave any testimony about the facts.


The significance of these first witnesses is seen in the use of the German word Zeuge, which now means “witness” but originally meant “drawn in.” The witnesses were, in fact, “drawn in” to perform a legal act as instrumental witnesses. But they gave only their opinions and consequently did not testify about facts with which they were acquainted. Nevertheless, together with community witnesses, they paved the way for the more rational use of evidence.

By the 13th century, ordeals were no longer used, though the custom of trial by battle lasted until the 14th and 15th centuries. The judicial machinery destroyed by dropping these sources of evidence could not be replaced by the oath of purgation alone. With the decline of chivalry, the flourishing of the towns, the further development of Christian theology, and the formation of states, both social and cultural conditions had changed. The law of evidence, along with much of the rest of the law of Europe, was influenced strongly by Roman-canonical law elaborated by jurists in northern Italian universities. Roman law introduced elements of common procedure that became known throughout the continental European countries and became something of a uniting bond between them.

Under the new influence, evidence was, first of all, evaluated on a hierarchical basis. This accorded well with the assumption of scholastic philosophy that all the possibilities of life could be formally ordered through a system of a priori, abstract regulations. Since the law was based on the concept of the inequality of persons, not all persons were suitable as witnesses, and only the testimony of two or more suitable witnesses could supply proof.

The formal theory of evidence that grew out of this hierarchical evaluation left no option for the judge: in effect, he was required to be convinced once the designated number of witnesses had testified concordantly. A distinction was made between complete, half, and lesser portions of evidence in an attempt to evade the problems posed by such a rigid system of evaluation. Since interrogation of witnesses was secret, abuses occurred on another level. These abuses were nourished by the notion that the confession was the best kind of evidence and that reliable confessions could be obtained by means of torture.

Despite these obvious drawbacks and limitations, through the ecclesiastical courts Roman-canonical law gained influence. It contributed much to the elimination of nonrational evidence from the courts, even though, given the formality of its application, it could result only in formal truths often not corresponding to reality.

The Writing Center • University of North Carolina at Chapel Hill

What this handout is about

This handout will provide a broad overview of gathering and using evidence. It will help you decide what counts as evidence, put evidence to work in your writing, and determine whether you have enough evidence. It will also offer links to additional resources.

Introduction

Many papers that you write in college will require you to make an argument ; this means that you must take a position on the subject you are discussing and support that position with evidence. It’s important that you use the right kind of evidence, that you use it effectively, and that you have an appropriate amount of it. If, for example, your philosophy professor didn’t like it that you used a survey of public opinion as your primary evidence in your ethics paper, you need to find out more about what philosophers count as good evidence. If your instructor has told you that you need more analysis, suggested that you’re “just listing” points or giving a “laundry list,” or asked you how certain points are related to your argument, it may mean that you can do more to fully incorporate your evidence into your argument. Comments like “for example?,” “proof?,” “go deeper,” or “expand” in the margins of your graded paper suggest that you may need more evidence. Let’s take a look at each of these issues—understanding what counts as evidence, using evidence in your argument, and deciding whether you need more evidence.

What counts as evidence?

Before you begin gathering information for possible use as evidence in your argument, you need to be sure that you understand the purpose of your assignment. If you are working on a project for a class, look carefully at the assignment prompt. It may give you clues about what sorts of evidence you will need. Does the instructor mention any particular books you should use in writing your paper or the names of any authors who have written about your topic? How long should your paper be (longer works may require more, or more varied, evidence)? What themes or topics come up in the text of the prompt? Our handout on understanding writing assignments can help you interpret your assignment. It’s also a good idea to think over what has been said about the assignment in class and to talk with your instructor if you need clarification or guidance.

What matters to instructors?

Instructors in different academic fields expect different kinds of arguments and evidence—your chemistry paper might include graphs, charts, statistics, and other quantitative data as evidence, whereas your English paper might include passages from a novel, examples of recurring symbols, or discussions of characterization in the novel. Consider what kinds of sources and evidence you have seen in course readings and lectures. You may wish to see whether the Writing Center has a handout regarding the specific academic field you’re working in—for example, literature , sociology , or history .

What are primary and secondary sources?

A note on terminology: many researchers distinguish between primary and secondary sources of evidence (in this case, “primary” means “first” or “original,” not “most important”). Primary sources include original documents, photographs, interviews, and so forth. Secondary sources present information that has already been processed or interpreted by someone else. For example, if you are writing a paper about the movie “The Matrix,” the movie itself, an interview with the director, and production photos could serve as primary sources of evidence. A movie review from a magazine or a collection of essays about the film would be secondary sources. Depending on the context, the same item could be either a primary or a secondary source: if I am writing about people’s relationships with animals, a collection of stories about animals might be a secondary source; if I am writing about how editors gather diverse stories into collections, the same book might now function as a primary source.

Where can I find evidence?

Here are some examples of sources of information and tips about how to use them in gathering evidence. Ask your instructor if you aren’t sure whether a certain source would be appropriate for your paper.

Print and electronic sources

Books, journals, websites, newspapers, magazines, and documentary films are some of the most common sources of evidence for academic writing. Our handout on evaluating print sources will help you choose your print sources wisely, and the library has a tutorial on evaluating both print sources and websites. A librarian can help you find sources that are appropriate for the type of assignment you are completing. Just visit the reference desk at Davis or the Undergraduate Library or chat with a librarian online (the library’s IM screen name is undergradref).

Observation

Sometimes you can directly observe the thing you are interested in, by watching, listening to, touching, tasting, or smelling it. For example, if you were asked to write about Mozart’s music, you could listen to it; if your topic was how businesses attract traffic, you might go and look at window displays at the mall.

Interviews

An interview is a good way to collect information that you can’t find through any other type of research. An interview can provide an expert’s opinion, biographical or first-hand experiences, and suggestions for further research.

Surveys

Surveys allow you to find out some of what a group of people thinks about a topic. Designing an effective survey and interpreting the data you get can be challenging, so it’s a good idea to check with your instructor before creating or administering a survey.

Experiments

Experimental data serve as the primary form of scientific evidence. For scientific experiments, you should follow the specific guidelines of the discipline you are studying. For writing in other fields, more informal experiments might be acceptable as evidence. For example, if you want to prove that food choices in a cafeteria are affected by gender norms, you might ask classmates to undermine those norms on purpose and observe how others react. What would happen if a football player were eating dinner with his teammates and he brought a small salad and diet drink to the table, all the while murmuring about his waistline and wondering how many fat grams the salad dressing contained?

Personal experience

Using your own experiences can be a powerful way to appeal to your readers. You should, however, use personal experience only when it is appropriate to your topic, your writing goals, and your audience. Personal experience should not be your only form of evidence in most papers, and some disciplines frown on using personal experience at all. For example, a story about the microscope you received as a Christmas gift when you were nine years old is probably not applicable to your biology lab report.

Using evidence in an argument

Does evidence speak for itself?

Absolutely not. After you introduce evidence into your writing, you must say why and how this evidence supports your argument. In other words, you have to explain the significance of the evidence and its function in your paper. What turns a fact or piece of information into evidence is the connection it has with a larger claim or argument: evidence is always evidence for or against something, and you have to make that link clear.

As writers, we sometimes assume that our readers already know what we are talking about; we may be wary of elaborating too much because we think the point is obvious. But readers can’t read our minds: although they may be familiar with many of the ideas we are discussing, they don’t know what we are trying to do with those ideas unless we indicate it through explanations, organization, transitions, and so forth. Try to spell out the connections that you were making in your mind when you chose your evidence, decided where to place it in your paper, and drew conclusions based on it. Remember, you can always cut prose from your paper later if you decide that you are stating the obvious.

Here are some questions you can ask yourself about a particular bit of evidence:

  • OK, I’ve just stated this point, but so what? Why is it interesting? Why should anyone care?
  • What does this information imply?
  • What are the consequences of thinking this way or looking at a problem this way?
  • I’ve just described what something is like or how I see it, but why is it like that?
  • I’ve just said that something happens—so how does it happen? How does it come to be the way it is?
  • Why is this information important? Why does it matter?
  • How is this idea related to my thesis? What connections exist between them? Does it support my thesis? If so, how does it do that?
  • Can I give an example to illustrate this point?

Answering these questions may help you explain how your evidence is related to your overall argument.

How can I incorporate evidence into my paper?

There are many ways to present your evidence. Often, your evidence will be included as text in the body of your paper, as a quotation, paraphrase, or summary. Sometimes you might include graphs, charts, or tables; excerpts from an interview; or photographs or illustrations with accompanying captions.

When you quote, you are reproducing another writer’s words exactly as they appear on the page. Here are some tips to help you decide when to use quotations:

  • Quote if you can’t say it any better and the author’s words are particularly brilliant, witty, edgy, distinctive, a good illustration of a point you’re making, or otherwise interesting.
  • Quote if you are using a particularly authoritative source and you need the author’s expertise to back up your point.
  • Quote if you are analyzing diction, tone, or a writer’s use of a specific word or phrase.
  • Quote if you are taking a position that relies on the reader’s understanding exactly what another writer says about the topic.

Be sure to introduce each quotation you use, and always cite your sources. See our handout on quotations for more details on when to quote and how to format quotations.

Like all pieces of evidence, a quotation can’t speak for itself. If you end a paragraph with a quotation, that may be a sign that you have neglected to discuss the importance of the quotation in terms of your argument. It’s important to avoid “plop quotations,” that is, quotations that are just dropped into your paper without any introduction, discussion, or follow-up.

Paraphrasing

When you paraphrase, you take a specific section of a text and put it into your own words. Putting it into your own words doesn’t mean just changing or rearranging a few of the author’s words: to paraphrase well and avoid plagiarism, try setting your source aside and restating the sentence or paragraph you have just read, as though you were describing it to another person. Paraphrasing is different from summary because a paraphrase focuses on a particular, fairly short bit of text (like a phrase, sentence, or paragraph). You’ll need to indicate when you are paraphrasing someone else’s text by citing your source correctly, just as you would with a quotation.

When might you want to paraphrase?

  • Paraphrase when you want to introduce a writer’s position, but their original words aren’t special enough to quote.
  • Paraphrase when you are supporting a particular point and need to draw on a certain place in a text that supports your point—for example, when one paragraph in a source is especially relevant.
  • Paraphrase when you want to present a writer’s view on a topic that differs from your position or that of another writer; you can then refute the writer’s specific points in your own words after you paraphrase.
  • Paraphrase when you want to comment on a particular example that another writer uses.
  • Paraphrase when you need to present information that’s unlikely to be questioned.

When you summarize, you are offering an overview of an entire text, or at least a lengthy section of a text. Summary is useful when you are providing background information, grounding your own argument, or mentioning a source as a counter-argument. A summary is less nuanced than paraphrased material. It can be the most effective way to incorporate a large number of sources when you don’t have a lot of space. When you are summarizing someone else’s argument or ideas, be sure this is clear to the reader and cite your source appropriately.

Statistics, data, charts, graphs, photographs, illustrations

Sometimes the best evidence for your argument is a hard fact or a visual representation of a fact. This type of evidence can be a solid backbone for your argument, but you still need to create context for your reader and draw the connections you want them to make. Remember that statistics, data, charts, graphs, photographs, and illustrations are all open to interpretation. Guide the reader through the interpretation process. Again, always cite the origin of your evidence if you didn’t produce the material you are using yourself.

Do I need more evidence?

Let’s say that you’ve identified some appropriate sources, found some evidence, explained to the reader how it fits into your overall argument, incorporated it into your draft effectively, and cited your sources. How do you tell whether you’ve got enough evidence and whether it’s working well in the service of a strong argument or analysis? Here are some techniques you can use to review your draft and assess your use of evidence.

Make a reverse outline

A reverse outline is a great technique for helping you see how each paragraph contributes to proving your thesis. When you make a reverse outline, you record the main ideas in each paragraph in a shorter (outline-like) form so that you can see at a glance what is in your paper. The reverse outline is helpful in at least three ways. First, it lets you see where you have dealt with too many topics in one paragraph (in general, you should have one main idea per paragraph). Second, the reverse outline can help you see where you need more evidence to prove your point or more analysis of that evidence. Third, the reverse outline can help you write your topic sentences: once you have decided what you want each paragraph to be about, you can write topic sentences that explain the topics of the paragraphs and state the relationship of each topic to the overall thesis of the paper.

For tips on making a reverse outline, see our handout on organization .
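If your draft is saved as a plain-text file, a few lines of code can give you a rough starting point for a reverse outline by pulling the first sentence of each paragraph. This is only a sketch under that assumption (your real main ideas may not sit in the first sentence, and the file name here is hypothetical), and it is no substitute for reading each paragraph and writing down what it actually says.

```python
# Rough reverse-outline helper: assumes paragraphs are separated by blank lines
# and uses each paragraph's first sentence as a placeholder for its main idea.
import re

def reverse_outline(draft: str) -> list:
    paragraphs = [p.strip() for p in draft.split("\n\n") if p.strip()]
    outline = []
    for i, para in enumerate(paragraphs, start=1):
        # Take everything up to the first sentence-ending punctuation mark.
        match = re.match(r"(.+?[.!?])(\s|$)", para, flags=re.DOTALL)
        first_sentence = match.group(1) if match else para
        outline.append(f"{i}. {first_sentence}")
    return outline

if __name__ == "__main__":
    with open("draft.txt") as f:   # "draft.txt" is a hypothetical file name
        for entry in reverse_outline(f.read()):
            print(entry)
```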

Color code your paper

You will need three highlighters or colored pencils for this exercise. Use one color to highlight general assertions. These will typically be the topic sentences in your paper. Next, use another color to highlight the specific evidence you provide for each assertion (including quotations, paraphrased or summarized material, statistics, examples, and your own ideas). Lastly, use another color to highlight analysis of your evidence. Which assertions are key to your overall argument? Which ones are especially contestable? How much evidence do you have for each assertion? How much analysis? In general, you should have at least as much analysis as you do evidence, or your paper runs the risk of being more summary than argument. The more controversial an assertion is, the more evidence you may need to provide in order to persuade your reader.

Play devil’s advocate, act like a child, or doubt everything

This technique may be easiest to use with a partner. Ask your friend to take on one of the roles above, then read your paper aloud to them. After each section, pause and let your friend interrogate you. If your friend is playing devil’s advocate, they will always take the opposing viewpoint and force you to keep defending yourself. If your friend is acting like a child, they will question every sentence, even seemingly self-explanatory ones. If your friend is a doubter, they won’t believe anything you say. Justifying your position verbally or explaining yourself will force you to strengthen the evidence in your paper. If you already have enough evidence but haven’t connected it clearly enough to your main argument, explaining to your friend how the evidence is relevant or what it proves may help you to do so.

Common questions and additional resources

  • I have a general topic in mind; how can I develop it so I’ll know what evidence I need? And how can I get ideas for more evidence? See our handout on brainstorming .
  • Who can help me find evidence on my topic? Check out UNC Libraries .
  • I’m writing for a specific purpose; how can I tell what kind of evidence my audience wants? See our handouts on audience , writing for specific disciplines , and particular writing assignments .
  • How should I read materials to gather evidence? See our handout on reading to write .
  • How can I make a good argument? Check out our handouts on argument and thesis statements .
  • How do I tell if my paragraphs and my paper are well-organized? Review our handouts on paragraph development , transitions , and reorganizing drafts .
  • How do I quote my sources and incorporate those quotes into my text? Our handouts on quotations and avoiding plagiarism offer useful tips.
  • How do I cite my evidence? See the UNC Libraries citation tutorial .
  • I think that I’m giving evidence, but my instructor says I’m using too much summary. How can I tell? Check out our handout on using summary wisely.
  • I want to use personal experience as evidence, but can I say “I”? We have a handout on when to use “I.”



By Prof. Penny White

Federal Rules of Evidence

The Federal Rules of Evidence govern the introduction of evidence at civil and criminal trials in United States federal trial courts. The current rules were initially passed by Congress in 1975 after several years of drafting by the Supreme Court.  The rules are broken down into 11 articles:

  • General Provisions
  • Judicial Notice
  • Presumptions in Civil Actions and Proceedings
  • Relevancy and Its Limits
  • Privileges
  • Witnesses
  • Opinions and Expert Testimony
  • Hearsay
  • Authentication and Identification
  • Contents of Writings, Recordings and Photographs
  • Miscellaneous Rules

This article will focus on Rule 901 — Authenticating or Identifying Evidence — and the judge’s role in the Federal Rules of Evidence.

Establish Evidentiary Foundations

Evidentiary foundations must be established before any type of evidence can be admitted. These predicates to admission apply regardless of whether the evidence is verbal or tangible, but for some types of evidence, the foundation is largely subsumed into the presentation of the evidence itself. For example, the foundation for verbal evidence is generally a requirement that the testifying witness have personal knowledge of the matter in question. This foundation is rarely established by asking the witness specifically whether he or she has personal knowledge. Rather, it is included in the witness’ testimony which discloses that the witness experienced the occurrence. But for all types of evidence, the evidentiary foundation requires authentication before other issues of admissibility are considered.

Tangible Items of Evidence

Scholars at common law recognized that authentication and identification of tangible items of evidence represented a “special aspect of relevancy.” McCormick §§179, 185; Morgan, Basic Problems of Evidence 378 (1962). Wigmore describes the need for authentication as “an inherent logical necessity.” 7 Wigmore §2129, p. 564. The authenticity requirement falls into the category of conditional relevancy – before the item of evidence becomes relevant and admissible, it must be established that the item is what the proponent claims.

Authentication of Tangible Items of Evidence

The basic codified standard for the authentication of tangible items of evidence is “evidence sufficient to support a finding that the item is what the proponent claims it is.” Fed. R. Evid. 901. It is not necessary that the court find that the evidence is what the proponent claims, only that there is sufficient evidence from which the jury might ultimately do so. This is a low threshold standard. The rules of evidence set forth the general standard, followed by illustrations and a list of several types of self-authenticating documents. The proponent of any tangible or documentary evidence has an obligation, or burden of proof, to authenticate the evidence before requesting to admit or publish it to the fact-finder; if the opponent objects to its admissibility, based on any of a collection of rules, then the proponent must address that admissibility objection as well. Thus, all evidence must be both authenticated and admissible.

Determine the Presentation of Evidence

If both authentication and admissibility are established, then the court must determine how the evidence will best be presented to the trier of fact, bearing in mind that the court is obligated to exercise control over the presentation of evidence to accomplish an effective, fair, and efficient proceeding. Under Federal Rule 611, the court’s duty is to “exercise reasonable control over the mode and order of examining witnesses and presenting evidence” so as to:

  • Make those procedures effective for determining the truth
  • Avoid wasting time
  • Protect witnesses from harassment or undue embarrassment

Sometimes tangible evidence consists of fungible items that are not identifiable by sight. For tangible evidence that is not unique or distinctive, counsel must authenticate the item by establishing a chain of custody.

Establish a Chain of Custody

A chain of custody is, in essence, a consistent trail showing the path of the item from the time it was acquired until the moment it is presented into evidence. In establishing a chain of custody, each link in the chain should be sufficiently established. However, it is not required that the identity of tangible evidence be proven beyond all possibility of doubt. Most courts hold that “when the facts and circumstances that surround tangible evidence reasonably establish the identity and integrity of the evidence, the trial court should admit the item into evidence [but] the evidence should not be admitted, unless both identity and integrity can be demonstrated by other appropriate means.” See generally State v. Cannon, 254 S.W.3d 287, 296-97 (Tenn. 2008).
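To make the idea of links in the chain concrete, here is a minimal conceptual sketch that models a custody log as an ordered list of hand-offs and flags stretches with no documented custodian. The field names and dates are invented for illustration only; this is a study aid, not a template for an actual evidentiary record.

```python
# Conceptual chain-of-custody sketch; all field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class CustodyEvent:
    item_id: str                     # e.g. an evidence-bag number
    custodian: str                   # who held the item during this link
    received_at: datetime
    released_at: Optional[datetime]  # None while the item is still held

def undocumented_gaps(events: List[CustodyEvent]) -> List[str]:
    """Flag hand-offs that leave a time gap or an unreleased prior custodian."""
    problems = []
    ordered = sorted(events, key=lambda e: e.received_at)
    for prev, nxt in zip(ordered, ordered[1:]):
        if prev.released_at is None or prev.released_at < nxt.received_at:
            problems.append(
                f"{prev.item_id}: gap between {prev.custodian} and {nxt.custodian}"
            )
    return problems

if __name__ == "__main__":
    log = [
        CustodyEvent("BAG-7", "Officer A", datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 12, 0)),
        CustodyEvent("BAG-7", "Lab Tech B", datetime(2024, 3, 1, 14, 0), None),
    ]
    print(undocumented_gaps(log))  # reports the undocumented 12:00-14:00 window
```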

Additional Rules of Evidence Considerations for Tangible Evidence

For tangible evidence, in addition to authentication, the court must consider the following:

  • Relevance rules
  • The hearsay rules
  • The original writing rules
  • When appropriate, the balancing of the probative value of the tangible evidence against the dangers that its introduction may cause

The court in a jury trial must also consider what method of producing the evidence to a jury is most conducive to a fair and efficient fact-finding process.

Electronic Evidence

In order to admit electronic evidence, the same rules apply, but the content of electronically stored information (ESI) may implicate other rules, such as the opinion rules and the personal knowledge rule. Most scholars and courts agree that the issues related to the authentication and admissibility of electronic evidence simply depend on an application of the existing evidence rules. Although technical challenges may arise, the rules are flexible enough in their approach to address this new kind of evidence.

Checklist for Authenticating Evidence in Court

The Federal Rules of Evidence apply regardless of whether the evidence is submitted in a civil case or a criminal trial. To ensure that evidence is authentic and admissible, follow this generic five-point checklist for the authentication of tangible, documentary, or electronic evidence (a rough sketch of the checklist as a sequence of yes/no gates follows the list):

1. Is the evidence relevant?

Does it make a fact that is of consequence to the action more or less probable than it would be without the evidence?

2. Has the evidence been authenticated?

Has the proponent produced “evidence sufficient to support a finding that the electronic evidence is what the proponent claims”?

3. Is the evidence hearsay?

Is the evidence offered to prove the truth of what it asserts? If so, does it satisfy a hearsay exception? Are confrontation rights implicated?

4. Is the evidence a writing, recording, or photograph?

Is it offered to prove the content? If so, is it either the original or a duplicate (counterpart produced by the same impression as the original, or from the same matrix, etc.) unless genuine questions of authenticity or fairness exist?

5. Is the probative value of the evidence substantially outweighed by the danger of unfair prejudice, confusion of the issues, or misleading the jury, or by considerations of undue delay, waste of time, or needless presentation of cumulative evidence?
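As promised above, the following Python sketch restates the five questions as a sequence of yes/no gates. It is a study aid for remembering the order of the inquiry, not a statement of how any court applies the Federal Rules of Evidence; the question wording is paraphrased.

```python
# Study-aid sketch of the five-point checklist; the wording is paraphrased.
CHECKLIST = [
    "Relevance: does the evidence make a fact of consequence more or less probable?",
    "Authentication: is there evidence sufficient to support a finding that it is what you claim?",
    "Hearsay: if offered for its truth, does it fit an exception, and are confrontation rights satisfied?",
    "Original writing: if offered to prove content, is it an original or an acceptable duplicate?",
    "Balancing: does its probative value survive against unfair prejudice, confusion, and delay?",
]

def run_checklist(answers):
    """answers[i] is True if question i is resolved in favor of admission."""
    for question, ok in zip(CHECKLIST, answers):
        if not ok:
            return f"Stop and address: {question}"
    return "All five gates passed; be prepared to meet any remaining objections."

if __name__ == "__main__":
    print(run_checklist([True, True, False, True, True]))
```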

Of course, there are many other tools that a judge may use to rule on tangible and electronic evidence, each with its own benefits and limitations.

Penny White is the Director of the Center for Advocacy and Elvin E. Overton Distinguished Professor of Law at the University of Tennessee College of Law. She teaches in several of NJC’s evidence courses including Fundamentals of Evidence, Advanced Evidence, and Criminal Evidence.



10 Steps for Presenting Evidence in Court

When you go to court, you will give information (called “evidence”) to a judge who will decide your case. This evidence may include information you or someone else tells to the judge (“testimony”) as well as items like email and text messages, documents, photos, and objects (“exhibits”). If you don’t have an attorney, you will need to gather and present your evidence in the proper way. Courts have rules about evidence so that judges will make decisions based on good information, not gossip and guesswork.

Although the rules can be confusing, they are designed to protect your rights, and you can use them to help you plan for your court appearance. Even though courts work differently, this publication will introduce you to the nuts and bolts of presenting evidence at a hearing. As you read it, please consider the kind of help you might want as you prepare and present your case.



Social Sci LibreTexts

6.2: Defining Evidence

By Jim Marteney, Los Angeles Valley College, via the ASCCC Open Educational Resources Initiative (OERI)


What is evidence? According to Rieke and Sillars (1993), “[e]vidence refers to specific instances, statistics, and testimony, when they support a claim in such a way as to cause the decision maker(s) to grant adherence to that claim” (p. 10).


Evidence is information that answers the question “How do you know?” about a contention you have made. Please take that question very literally. It is often hard to tell the difference at first between telling someone what you know and telling them how you know it. To become an effective speaker in almost any context, you need to be able to ask this question repeatedly and test the answers you hear to determine the strength of the evidence.

Only experts can get away with phrases like “I think,” “I feel,” or “I believe,” because they have the qualifications that allow you to accept their observations. Everyone else needs to use evidence to support their arguments. As a critical thinker, you should rely far more on what a person can prove to a reasonable degree than on what a person feels.

Evidence is a term commonly used to describe the supporting material utilized when informing or persuading others. Evidence gives support to your statements and arguments. It also makes your arguments more than a mere collection of personal opinions or prejudices. No longer are you saying, “I believe” or “I think” or “In my opinion.” Now you can support your assertions with evidence. Because you are asking your audience to take a risk when you attempt to inform or persuade them, audiences will demand support for your assertions. Evidence needs to be carefully chosen to serve the needs of the claim and to reach the target audience.

An argument is designed to persuade a resistant audience to accept a claim through the presentation of evidence for the contentions being argued. The evidence establishes how accurate your arguments are. Evidence is one element of proof (the other is reasoning) used as a means of moving your audience toward the threshold necessary for them to grant adherence to your arguments.

The speaker should not expect audiences to be persuaded by limited evidence or by evidence that lacks variety and scope, that is, evidence drawn from only one source rather than from diverse sources. On the other hand, too much evidence, particularly when not carefully crafted, may leave the audience overwhelmed and without focus. Evidence in support of the different contentions in the argument needs to make the argument reasonable enough to be accepted by the target audience.

Challenge of Too Much Evidence

I attended a lecture years ago where the guest speaker told us that we have access to more information in one edition of the New York Times than a person in the Middle Ages had in an entire lifetime. The challenge is not finding information; the challenge is sorting through information to find quality evidence to use in our speeches. Shenk (1997) expresses his concern in the first chapter:

Information has also become a lot cheaper--to produce, to manipulate, to disseminate. All of this has made us information-rich, empowering Americans with the blessings of applied knowledge. It has also, though, unleashed the potential of information-gluttony...How much of the information in our midst is useful, and how much of it gets in the way? ... As we have accrued more and more of it, information has emerged not only as a currency, but also as a pollutant (p. 23).

  • In 1971 the average American was targeted by at least 560 daily advertising messages. Twenty years later, that number had risen six-fold to 3,000 messages per day.
  • In the office, an average of 60 percent of each person's time is now spent processing documents.
  • Paper consumption per capita in the United States tripled from 1940 to 1980 (from 200 to 600 pounds) and tripled again from 1980 to 1990 (to 1,800 pounds).
  • In the 1980s, third-class mail (used to send publications) grew thirteen times faster than population growth.
  • Two-thirds of business managers surveyed report tension with colleagues, loss of job satisfaction, and strained personal relationships as a result of information overload.
  • More than 1,000 telemarketing companies employ four million Americans, and generate $650 billion in annual sales.

"Let us call this unexpected, unwelcome part of our atmosphere "data smog," an expression for the noxious muck and druck of the information age. Data smog gets in the way; it crowds out quiet moments and obstructs much-needed contemplation. It spoils conversation, literature, and even entertainment. It thwarts skepticism, rendering us less sophisticated as consumers and citizens. It stresses us out” (Shenk, 1997, p. 24).

We need ways of sorting through this information and the first method is understanding the different types of evidence that we encounter.

Sources of Evidence

The first aspect of evidence we need to explore is the actual source of evidence, or where we find evidence. There are two main categories of sources: primary sources and secondary sources.

Primary Sources

A primary source provides direct or firsthand evidence about an event, object, person, or work of art. Primary sources include historical and legal documents, eyewitness accounts, results of experiments, statistical data, pieces of creative writing, audio and video recordings, speeches, and art objects. Interviews, surveys, fieldwork, and Internet communications via email, blogs, tweets, and newsgroups are also primary sources. In the natural and social sciences, primary sources are often empirical studies (research where an experiment was performed or a direct observation was made). The results of empirical studies are typically found in scholarly articles that are peer-reviewed (Ithaca College, 2019).

Included in primary sources are:

  • Original, first-hand accounts of events, activities, or time periods;
  • Factual accounts instead of interpretations of accounts or experiments;
  • Results of an experiment;
  • Reports of scientific discoveries;
  • Results of scientifically based polls.

Secondary Sources

Secondary sources describe, discuss, interpret, comment upon, analyze, evaluate, summarize, and process primary sources. Secondary source materials can be articles in newspapers or popular magazines, book and movie reviews, or articles found in scholarly journals that discuss or evaluate someone else's original research (Ithaca College, 2019).

Included in secondary sources are:

  • Analysis and interpretation of the accounts of primary sources;
  • Secondhand accounts of an activity or historical event;
  • Analysis and interpretation of scientific or social research results.

The key difference between the two sources is how far the author of the evidence is removed from the original event. You want to ask, “Is the author giving you a firsthand account, or a secondhand account?”

Types of Evidence

There are five types of evidence critical thinkers can use to support their arguments: precedent evidence, statistical evidence, testimonial evidence, hearsay evidence, and common knowledge evidence.

Precedent evidence is an act or event which establishes expectations for future conduct. There are two forms of precedent evidence: legal and personal.

Legal precedent is one of the most powerful and most difficult types of evidence to challenge. Courts establish legal precedent. Once a court makes a ruling, that ruling becomes the legal principle upon which other courts base their actions. Legislatures can also establish precedent through the laws they pass and the laws they choose not to pass. Once a principle of law has been established by a legislative body, it is very difficult to reverse.

Personal precedents are the habits and traditions you maintain. They arise from watching the personal actions of others in order to understand the expectations for future behaviors. Younger children in a family watch how the older children are treated in order to see what precedents are being established. New employees watch what longtime workers do about breaks and lunchtime so that their own actions will be consistent. The first months of a marriage are essentially a time to establish precedent. Who does the cooking, who takes out the garbage, who cleans, and which side of the bed each person gets are precedents established early in a marriage. Once these precedents are set, an expectation of the other's behavior is established. Such precedent is very difficult to alter.

To use either type of precedent as evidence, the arguer refers to how the past event relates to the current situation. In a legal situation, the argument is that the ruling in the current case should be the same as it was in the past, because they represent similar situations. In a personal situation, if you were allowed to stay out all night by your parents "just once," you can use that "just once" as precedent evidence when asking that your curfew be abolished.

Statistical evidence consists primarily of polls, surveys, and experimental results from the laboratory. This type of evidence is the numerical reporting of specific instances. Statistical evidence provides a means for communicating a large number of specific instances without citing each one. Statistics can be manipulated and misused to make the point of the particular advocate.

Don’t accept statistics just because they are numbers. People often fall into the trap of believing whatever a number says, because numbers seem accurate. Statistics are the product of a process subject to human prejudice, bias, and error. Questions on a survey can be biased, the people surveyed can be selectively chosen, comparisons may be made of non-comparable items, and reports of findings can be slanted. Take a look at all the polls that predict an election outcome: you will find considerable variation in the results.

Statistics have to be interpreted. In a debate over the use of lie detector tests to determine guilt or innocence in court, the pro-side cited a study which found that 98% of lie detector tests were accurate. The pro-side interpreted this to mean that lie detector tests were an effective means for determining guilt or innocence. However, the con-side interpreted the statistic to mean that two out of every 100 defendants in this country would be found guilty and punished for a crime they did not commit.
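The lie detector example boils down to simple arithmetic read two ways. The short sketch below merely restates the 98% figure from the paragraph above; nothing else in it comes from the source.

```python
# Two framings of the same 98% accuracy figure from the lie detector debate.
accuracy = 0.98
tests = 100

accurate_results = round(accuracy * tests)    # pro-side framing: 98 of 100 results are right
erroneous_results = tests - accurate_results  # con-side framing: 2 of 100 results are wrong

print(f"Out of {tests} tests: {accurate_results} accurate, {erroneous_results} in error.")
```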


The great baseball announcer Vin Scully once described a journalist's misuse of statistics by saying that “he uses statistics like a drunk uses a lamppost, not for illumination but for support.”

Statistics are often no more reliable than other forms of evidence, although people often think they are. Advocates need to carefully analyze how they use statistics when attempting to persuade others. Likewise, the audience needs to question statistics that don't make sense to them.

Testimonial evidence is used for the purpose of assigning motives, assessing responsibilities, and verifying actions for past, present and future events. Testimony is an opinion of reality as stated by another person. There are three forms of testimonial evidence: eyewitness, expert-witness, and historiography.

Eyewitness testimony is a personal declaration as to the accuracy of an event. That is, the person actually saw an event take place and is willing to bear witness to that event. Studies have confirmed that eyewitness testimony, even with all of its problems, is a powerful form of evidence. There seems to be almost something "magical" about a person swearing to "tell the whole truth and nothing but the truth."

Expert-witness evidence calls upon someone qualified to make a personal declaration about the nature of the fact in question. Courts of law make use of experts in such fields as forensics, ballistics, and psychology. The critical thinker uses the credibility of another person to support an argument through statements about the facts or opinions of the situation.

What or who qualifies as an expert witness? Does being a former military officer, for example, make someone an expert in military tactics? Often an advocate will merely pick someone who they know the audience will accept, but as an audience we should demand that advocates justify the expertise of their witnesses. As we acquire more knowledge, our standards of what constitutes an expert should rise. We need to distinguish between sources that are merely credible, like well-known athletes and entertainers who urge you to buy a particular product, and those who really have the qualities that allow them to make a judgment about a subject in the argumentative environment.

Although expert witness testimony is an important source of evidence, such experts can disagree. In one House Energy and Commerce subcommittee hearing, two experts gave opposite testimony on the same day about a bill calling for a label on all aspirin containers warning of the drug's often-fatal link to Reye's syndrome. The head of the American Academy of Pediatrics gave testimony supporting the link, but Dr. Joseph White, president of The Aspirin Foundation of America, said there was insufficient evidence linking aspirin to Reye's syndrome.

Historiography is the third form of testimonial evidence. In their book Argumentation and Advocacy, Windes and Hastings write, "Historiographers are concerned in large part with the discovery, use, and verification of evidence. The historian traces influences, assigns motives, evaluates roles, allocates responsibilities, and juxtaposes events in an attempt to reconstruct the past. That reconstruction is no wiser, no more accurate or dependable than the dependability of the evidence the historian uses for his reconstruction."

Keep in mind that there are many different ways of determining how history happens. Remember, historians may disagree over why almost any event happened. In the search for how things happen, we get ideas about how to understand our present world's events and what to do about them, if anything.

Primary sources are essential to the study of history. They are the basis for what we know about the distant past and the recent past. Historians must depend on other evidence from the era to determine who said what, who did what, and why.

How successful is the historian in recreating “objective reality?" As noted historian Arthur Schlesinger, Jr. says,

“The sad fact is that, in many cases, the basic evidence for the historian’s reconstruction of the really hard cases does not exist, and the evidence that does exist is often incomplete, misleading, or erroneous. Yet, it is the character of the evidence which establishes the framework within which he writes. He cannot imagine scenes for which he has no citation, invent dialogue for which he has no text, assume relationships for which he has no warrant.”

Historical reconstruction must be done by a qualified individual to be classified as historical evidence. Critical thinkers will find it useful to consider the following three criteria for evaluating historical evidence.

Around 1,000 books are published internationally every day and the total of all printed knowledge doubles every 5 years.

More information is estimated to have been produced in the last 30 years than in the previous 5,000.

----The Reuters Guide to Good Information Strategy 2000

Was the author an eyewitness to what is being described, or is the author considered an authority on the subject? Eyewitness accounts can be the most objective and valuable, but they may also be tainted with bias. If the author professes to be an authority, he/she should present his/her qualifications.

Does the author have a hidden agenda? The author may purposely or unwittingly tell only part of the story. An account may seem to be a straightforward report of the situation, yet the author has selected certain facts, details, and language that advance professional, personal, or political goals or beliefs. A book may be entirely factual, yet its hidden agenda may be to make money for the author or to get even with those in an administration the author didn't like.

Does the author have a bias? The author's views may be based on personal prejudice rather than on a reasoned conclusion drawn from facts. Critical thinkers need to notice when the author uses exaggerated language, fails to acknowledge opponents' arguments, or dismisses them outright. Historians may have biases based on their political allegiance: a conservative historian would view events differently than a liberal historian would. It is important to know the political persuasion of historians in order to gauge the extent of bias they might bring to the specific topic they are writing about.


Sometimes we think we know our history, but historian Daniel Boorstin puts the ultimate validity and accuracy of historical testimony in perspective when he writes, "Education is learning what you didn't even know you didn't know." Modern techniques of preserving data should make the task of recreating the past easier and add to our education.

Hearsay evidence (also called rumor or gossip evidence) can be defined as an assertion or set of assertions widely repeated from person to person, though its accuracy is unconfirmed by firsthand observation. "Rumor is not always wrong," wrote Tacitus, the Roman historian. A given rumor may be spontaneous or premeditated in origin. It may consist of opinion represented as fact, a nugget of accuracy garbled or misrepresented to the point of falsehood, exaggerations, or outright, intentional lies. Yet, hearsay may well be the "best available evidence" in certain situations where the original source of the information cannot be produced.

Rumor, gossip or hearsay evidence carries proportionately higher risks of distortion and error than other types of evidence. However, outside the courtroom, it can be as effective as any other form of evidence in proving your point. Large companies often rely on this type of evidence, because they lack the capability to deliver other types of evidence.

A recent rumor claimed that actor Morgan Freeman had died. A page on Facebook was created and soon gained more than 60,000 followers after it announced that the actor had passed away. Many left their condolences and messages of tribute. There was only one problem: Morgan Freeman was very much alive. Actually, that is not so much a problem, especially for Morgan Freeman. The Internet is a very effective tool when it comes to spreading rumors.

Common knowledge evidence is also a way to support one’s arguments. This type of evidence is most useful in providing support for arguments which lack any real controversy. Many claims are supported by evidence that comes as no particular surprise to anyone.

Basing an argument on common knowledge is the easiest method of securing belief in an idea, because an audience will accept it without further challenge. Patterson and Zarefsky (1983) explain:

Many argumentative claims we make are based on knowledge generally accepted by most people as true. For example, if you claimed that millions of Americans watch television each day, the claim would probably be accepted without evidence. Nor would you need to cite opinions or survey results to get most people to accept the statement that millions of people smoke cigarettes. 6

Credibility of Evidence or How Good Is It?

In order to tell us how you know something, you need to tell us where the information came from. If you personally observed the case you are telling us about, you need to tell us that you observed it, and when and where. If you read about it, you need to tell us where you read about it. If you are accepting the testimony of an expert, you need to tell us who the expert is and why she is an expert in this field. The specific identity, name or position and qualifications of your sources are part of the answer to the question “How do you know?” You need to give your audience that information.

Keep in mind that it is the person, the individual human being, who wrote an article or expressed an idea who brings authority to the claim. Sometimes that authority may be reinforced by the publication in which the claim appeared, sometimes not. But when you quote or paraphrase a source you are quoting or paraphrasing the author, not the magazine or journal. The credibility of the evidence you use can be enhanced by:

Specific Reference to Source: Does the advocate indicate the particular individual or group making the statements used for evidence? Does the advocate tell you enough about the source that you could easily find it yourself?

Qualifications of the Source: Does the advocate give you reason to believe that the source is competent and well-informed in the area in question?

Bias of the Source: Even if an expert, is the source likely to be biased on the topic? Could we easily predict the source's position merely from knowledge of his job, her political party, or organizations he or she works for?

Factual Support: Does the source offer factual support for the position taken or simply state personal opinions as fact?

Evaluating Internet Sources of Evidence

We currently obtain a significant amount of the evidence we use in an argument from the Internet. Some people are still under the impression that if they read it on the Internet, it must be accurate. But we all know that some Internet sources are better than others. We need to be able to evaluate websites to obtain the best information possible. Here are two approaches to evaluating websites.

Who, What, When, Where, and Why

This first test is based on the traditional five "W's." These questions, like critical thinking itself, go back to Greek and Roman times; the notable Roman Cicero, who served as consul in 63 BC, is credited with asking them.

Journalists are taught to answer these five questions when writing an article for publication. To provide an accurate interpretation of events to their viewers or readers, they ask these five questions and we can ask the same questions to begin discovering the level of quality of an online source.

Who wrote the post? What are their qualifications?

What is actually being said on the website? How accurate is the content?

When was the website’s latest post?

Where is the source of the post? Does the URL suggest it is from an academic source or an individual?

Why is the website published? Is the website there to inform or entertain?

There is a second, more popular method of evaluating websites that offers a more in-depth analysis. This method is known as the CRAAP test.

The C.R.A.A.P. Test

C.R.A.A.P. is an acronym standing for Currency, Relevance, Authority, Accuracy, and Purpose. Developed by the Meriam Library at California State University, Chico, the test uses each of these five areas to evaluate websites.

Currency: How recent is the website? If you are conducting research on a historical subject, a website that has no recent additions could still be useful. If, however, you are researching a current news story, technology, or scientific topic, you will want a site that has been recently updated.

Questions to Ask:

  • When was the content of the website published or posted?
  • Has the information been revised or updated recently?
  • Have more recent articles on your subject been published?
  • Does your topic require the most current information possible, or will older posts and sources be acceptable?
  • Are the web links included in the website functional?
Relevance: This test asks how important the information is to the specific topic you are researching. You will want to determine whether you are the intended audience and whether the information provided fits your research needs.
  • Does the content relate to your research topic or the question you are answering?
  • Who is the intended audience?
  • Is the information at an appropriate level for the purpose of your work? In other words, is it college level or targeted to a younger or less educated audience?
  • Have you compared this site to a variety of other resources?
  • Would you be comfortable citing this source in your research project?

Authority: Here we determine whether the source of the website has the credentials to write on the subject, so that you can feel comfortable using the content. If you are looking for an accurate interpretation of news events, you will want to know whether the author of the website is a qualified journalist or a random individual reposting content.

  • Who is the author/ publisher/ source/ sponsor of the website?
  • What are the author’s credentials or organizational affiliations?
  • Does the author have the qualifications to write on this particular topic?
  • Can you find information about the author from reference sources or the Internet?
  • Is the author quoted or referred to on other respected sources or websites?
  • Is there contact information, such as a publisher or email address?
  • Does the URL reveal anything about the author or source?

Accuracy: In this test we attempt to determine the reliability and accuracy of the website's content. You need to decide whether you can trust the information presented or whether it is merely slanted, personal belief.

  • Where does the information in the website come from?
  • Is the information supported by evidence, or is it just opinion?
  • Has the information presented been reviewed by qualified sources?
  • Can you verify any of the content in another source or personal knowledge?
  • Are there statements in the website you know to be false?
  • Does the language or tone used in the website appear unbiased or free of emotion or loaded language?
  • Are there spelling, grammar or typographical errors in the content of the website?

Purpose: Finally, we examine the purpose of the website. We need to determine whether the website was created to inform, to entertain, or to sell a product or service. If we want accurate, high-quality evidence, we should be wary of a site that is trying to sell us something. Although a company selling solar power may have some factual information about solar energy on its site, the site is geared toward selling you its product; the information it provides is not there to educate you about all aspects of solar power.

  • What is the purpose of the content of this website? Is the purpose to inform, teach, sell, entertain or persuade?
  • Do the authors/sponsors of the website make their intentions or purpose clear?
  • Is the content in the website considered facts, opinion, or even propaganda?
  • Does the point of view appear objective and impartial?
  • Does the author omit important facts or data that might disprove the claim being made in the post?
  • Are alternative points of view presented?
  • Does the content of the website contain political, ideological, cultural, religious, institutional or personal biases?

The questions used here are inspired by questions from the Meriam Library at California State University, Chico, the University of Maryland University College Library, and the Creighton University Library.
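
As a rough illustration, the five C.R.A.A.P. criteria can be kept as a simple checklist and tallied for a given source. The sketch below is hypothetical: the abbreviated questions, the structure, and the scoring scheme are illustrative choices of this example, not the Meriam Library's own tool.

```python
# Illustrative sketch only: a condensed C.R.A.A.P. checklist and a simple tally.
# The abbreviated questions and the scoring scheme are hypothetical.

craap_checklist = {
    "Currency":  ["Published or updated recently?", "Are the links still functional?"],
    "Relevance": ["Does it fit the research question?", "Is the level appropriate?"],
    "Authority": ["Is the author credentialed on the topic?", "Is contact information given?"],
    "Accuracy":  ["Are claims supported by evidence?", "Can the content be verified elsewhere?"],
    "Purpose":   ["Is the intent to inform rather than sell?", "Is the point of view impartial?"],
}

def score_source(answers):
    """Count the 'yes' answers for each criterion, given answers keyed like the checklist."""
    return {criterion: sum(answers[criterion]) for criterion in craap_checklist}

# Example evaluation of a hypothetical website (True = a "yes" answer).
answers = {
    "Currency":  [True, True],
    "Relevance": [True, False],
    "Authority": [False, True],
    "Accuracy":  [True, True],
    "Purpose":   [False, False],
}

for criterion, score in score_source(answers).items():
    print(f"{criterion}: {score} of {len(craap_checklist[criterion])} checks passed")
```

A tally like this is only a rough screen; the value of the test lies in working through the questions themselves, not in the number.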


  • Rieke, Richard D. and Malcolm Sillars. Argumentation and Critical Decision Making. New York: HarperCollins Rhetoric and Society Series, 1993.
  • Shenk, David. Data Smog: Surviving the Information Glut. San Francisco: HarperEdge, 1997.
  • Ithaca College, "Primary and Secondary Sources," libguides.ithaca.edu/research101/primary (accessed October 31, 2019).
  • Windes, Russel R. and Arthur Hastings. Argumentation and Advocacy. New York: Random House, 1965.
  • Patterson, J. W. and David Zarefsky. Contemporary Debate. Boston: Houghton Mifflin, 1983.

The Legal Concept of Evidence

The legal concept of evidence is neither static nor universal. Medieval understandings of evidence in the age of trial by ordeal would be quite alien to modern sensibilities (Ho 2003–2004) and there is no approach to evidence and proof that is shared by all legal systems of the world today. Even within Western legal traditions, there are significant differences between Anglo-American law and Continental European law (see Damaška 1973, 1975, 1992, 1994, 1997). This entry focuses on the modern concept of evidence that operates in the legal tradition to which Anglo-American law belongs. [ 1 ] It concentrates on evidence in relation to the proof of factual claims in law. [ 2 ]

It may seem obvious that there must be a legal concept of evidence that is distinguishable from the ordinary concept of evidence. After all, there are in law many special rules on what can or cannot be introduced as evidence in court, on how evidence is to be presented and the uses to which it may be put, on the strength or sufficiency of evidence needed to establish proof and so forth. But the law remains silent on some crucial matters. In resolving the factual disputes before the court, the jury or, at a bench trial, the judge has to rely on extra-legal principles. There have been academic attempts at systematic analysis of the operation of these principles in legal fact-finding (Wigmore 1937; Anderson, Schum, and Twining 2009). These principles, so it is claimed, are of a general nature. On the basis that the logic in “drawing inferences from evidence to test hypotheses and justify conclusions” is governed by the same principles across different disciplines (Twining and Hampsher-Monk 2003: 4), ambitious projects have been undertaken to develop a cross-disciplinary framework for the analysis of evidence (Schum 1994) and to construct an interdisciplinary “integrated science of evidence” (Dawid, Twining, and Vasilaki 2011; cf. Tillers 2008).

While evidential reasoning in law and in other contexts may share certain characteristics, there nevertheless remain aspects of the approach to evidence and proof that are distinctive to law (Rescher and Joynt 1959). Section 1 (“conceptions of evidence”) identifies different meanings of evidence in legal discourse. When lawyers talk about evidence, what is it that they are referring to? What is it that they have in mind? Section 2 (“conditions for receiving evidence”) approaches the concept of legal evidence from the angle of what counts as evidence in law. What are the conditions that the law imposes and must be met for something to be received by the court as evidence? Section 3 (“strength of evidence”) shifts the attention to the stage where the evidence has already been received by the court. Here the focus is on how the court weighs the evidence in reaching the verdict. In this connection, three properties of evidence will be discussed: probative value, sufficiency, and degree of completeness.

1. Conceptions of Evidence: What does Evidence Refer to in Law?

Stephen (1872: 3–4, 6–7) long ago noted that legal usage of the term “evidence” is ambiguous. It sometimes refers to that which is adduced by a party at the trial as a means of establishing factual claims. (“Adducing evidence” is the legal term for presenting or producing evidence in court for the purpose of establishing proof.) This meaning of evidence is reflected in the definitional section of the Indian Evidence Act (Stephen 1872: 149). [ 3 ] When lawyers use the term “evidence” in this way, they have in mind what epistemologists would think of as “objects of sensory evidence” (Haack 2004: 48). Evidence, in this sense, is divided conventionally into three main categories: [ 4 ] oral evidence (the testimony given in court by witnesses), documentary evidence (documents produced for inspection by the court), and “real evidence”; the first two are self-explanatory and the third captures things other than documents such as a knife allegedly used in committing a crime.

The term “evidence” can, secondly, refer to a proposition of fact that is established by evidence in the first sense. [ 5 ] This is sometimes called an “evidential fact”. That the accused was at or about the scene of the crime at the relevant time is evidence in the second sense of his possible involvement in the crime. But the accused’s presence must be proved by producing evidence in the first sense. For instance, the prosecution may call a witness to appear before the court and get him to testify that he saw the accused in the vicinity of the crime at the relevant time. Success in proving the presence of the accused (the evidential fact) will depend on the fact-finder’s assessment of the veracity of the witness and the reliability of his testimony. (The fact-finder is the person or body responsible for ascertaining where the truth lies on disputed questions of fact and in whom the power to decide on the verdict vests. The fact-finder is also called “trier of fact” or “judge of fact”. Fact-finding is the task of the jury or, for certain types of cases and in countries without a jury system, the judge.) Sometimes the evidential fact is directly accessible to the fact-finder. If the alleged knife used in committing the crime in question (a form of “real evidence”) is produced in court, the fact-finder can see for himself the shape of the knife; he does not need to learn of it through the testimony of an intermediary.

A third conception of evidence is an elaboration or extension of the second. On this conception, evidence is relational. A factual proposition (in Latin, factum probans ) is evidence in the third sense only if it can serve as a premise for drawing an inference (directly or indirectly) to a matter that is material to the case ( factum probandum ) (see section 2.2 below for the concept of materiality). The fact that the accused’s fingerprints were found in a room where something was stolen is evidence in the present sense because one can infer from this that he was in the room, and his presence in the room is evidence of his possible involvement in the theft. On the other hand, the fact that the accused’s favorite color is blue would, in the absence of highly unusual circumstances, be rejected as evidence of his guilt: ordinarily, what a person’s favorite color happens to be cannot serve as a premise for any reasonable inference towards his commission of a crime and, as such, it is irrelevant (see discussion of relevance in section 2.1 below). In the third sense of “evidence”, which conceives of evidence as a premise for a material inference, “irrelevant evidence” is an oxymoron: it is simply not evidence. Hence, this statement of Bentham (1825: 230): [ 6 ]

To say that testimony is not pertinent, is to say that it is foreign to the case, has no connection with it, and does not serve to prove the fact in question; in a word, it is to say, that it is not evidence.

There can be evidence in the first sense without evidence in the second or third sense. To pursue our illustration, suppose it emerges during cross-examination of the expert that his testimony of having found a finger-print match was a lie. Lawyers would describe this situation as one where the “evidence” (the testimony of the expert) fails to prove the fact that it was originally produced to prove and not that no “evidence” was adduced on the matter. Here “evidence” is used in the first sense—evidence as testimony—and the testimony remains in the court’s record whether it is believed or not. But lawyers would also say that, in the circumstances, there is no “evidence” that the accused was in the room, assuming that there was nothing apart from the discredited expert testimony of a fingerprint match to establish his presence there. Here, the expert’s testimony is shown to be false and fails to establish that the accused’s fingerprints were found in the room, and there is no (other) factual basis for believing that he was in the room. The factual premise from which an inference is sought to be drawn towards the accused’s guilt is not established.

Fourthly, the conditions for something to be received (or, in technical terms, "admitted") as evidence at the trial are sometimes included in the legal concept of evidence. (These conditions are discussed in section 2 below.) On this conception, legal evidence is that which counts as evidence in law. Something may ordinarily be treated as evidence and yet be rejected by the court. Hearsay is often cited as an example. It is pointed out that reliance on hearsay is a commonplace in ordinary life. We frequently rely on hearsay in forming our factual beliefs. In contrast, "hearsay is not evidence" in legal proceedings (Stephen 1872: 4–5). As a general rule, the court will not rely on hearsay as a premise for an inference towards the truth of what is asserted. It will not allow a witness to testify in court that another person X (who is not brought before the court) said that p on a certain occasion (an out-of-court statement) for the purpose of proving that p.

In summary, at least four possible conceptions of legal evidence are in currency: as an object of sensory evidence, as a proposition of fact, as an inferential premise and as that which counts as evidence in law. The sense in which the term “evidence” is being used is seldom made explicit in legal discourse although the intended meaning will often be clear from the context.

2. Conditions for Receiving Evidence: What Counts as Evidence in Law?

This section picks up on the fourth conception of evidence. To recall, something will be accepted by the court as evidence—it is, to use Montrose's term, receivable as evidence in legal proceedings—only if three basic conditions are satisfied: relevance, materiality, and admissibility (Montrose 1954). These three conditions of receivability are discussed in turn below.

2.1 Relevance

The concept of relevance plays a pivotal role in legal fact-finding. Thayer (1898: 266, 530) articulates its significance in terms of two foundational principles of the law of evidence: first, without exception, nothing which is not relevant may be received as evidence by the court and secondly, subject to many exceptions and qualifications, whatever is relevant is receivable as evidence by the court. Thayer’s view has been influential and finds expression in sources of law, for example, in Rule 402 of the Federal Rules of Evidence in the United States. [ 7 ] Thayer claims, and it is now widely accepted, that relevance is a “logical” and not a legal concept; in section 2.1.3 , we will examine this claim and the dissent expressed by Wigmore. Leaving aside the dissenting view for the moment, we will turn first to consider possible conceptions of relevance in the conventional sense of logical relevance.

Evidence may be adduced in legal proceedings to prove a fact only if the fact is relevant. Relevance is a relational concept. No fact is relevant in itself; it is relevant only in relation to another fact. The term “probable” is often used to describe this relation. We see two instances of this in the following well-known definitions. According to Stephen (1886: 2, emphasis added):

The word “relevant” means that any two facts to which it is applied are so related to each other that according to the common course of events one either taken by itself or in connection with other facts proves or renders probable the past, present, or future existence or non-existence of the other.

The second definition is contained in the United States’ Federal Rule of Evidence 401 which (in its restyled version) states that evidence is relevant if “it has a tendency to make a fact more or less probable than it would be without the evidence” (emphasis added). The word “probable” in these and other standard definitions is sometimes construed as carrying the mathematical meaning of probability. [ 8 ] In a leading article, Lempert gave this example to show how relevance turns on the likelihood ratio. The prosecution produces evidence that the perpetrator’s blood found at the scene of the crime is type A. The accused has the same blood type. Suppose fifty percent of the suspect population has type A blood. If the accused is in fact guilty, the probability that the blood found at the scene will be type A is 1.0. But if he is in fact innocent, the probability of finding type A blood at the scene is 0.5—that is, it matches the background probability of type A blood from the suspect population. The likelihood ratio is the ratio of the first probability to the second—1.0:0.5 or, more simply, 2:1. Evidence is considered relevant so long as the likelihood ratio is other than 1:1 (Lempert 1977). If the ratio is 1:1, that means that the probability of the evidence is the same whether the accused is guilty or innocent.
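
As a rough numerical illustration of the likelihood-ratio idea (a sketch using the figures from Lempert's example above, not part of the entry itself), the computation can be written out as follows:

```python
# Sketch of the blood-type example: relevance treated as a likelihood ratio
# other than 1:1. The figures are the ones used in the text.

def likelihood_ratio(p_evidence_if_guilty, p_evidence_if_innocent):
    """Ratio of P(E | guilty) to P(E | innocent)."""
    return p_evidence_if_guilty / p_evidence_if_innocent

p_match_if_guilty = 1.0    # a guilty accused is certain to match the scene blood type
p_match_if_innocent = 0.5  # an innocent accused matches at the 50% base rate

lr = likelihood_ratio(p_match_if_guilty, p_match_if_innocent)
print(f"Likelihood ratio: {lr:.1f}:1")   # 2.0:1
print("Relevant:", lr != 1.0)            # True, since the ratio is not 1:1
```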

The conventional view is that relevance in law is a binary concept: evidence is either relevant or it is not. So long as the likelihood ratio is other than 1:1, the evidence is considered relevant. [ 9 ] However, the more the likelihood ratio deviates from 1:1, the higher the so-called probative value of the evidence (that is, on one interpretation of probative value). We will take a closer look at probative value in section 3.1 below.

While the likelihood ratio may be useful as a heuristic device in analysing evidential reasoning, it is controversial as to whether it captures correctly the concept of relevance. In the first place, it is unclear that the term “probable” in the standard definitions of relevance was ever intended as a reference to mathematical probability. Some have argued that relevance should be understood broadly such that any evidence would count as relevant so long as it provides some reason in support of the conclusion that a proposition of fact material to the case is true or false (Pardo 2013: 576–577).

The mathematical conception of relevance has been disputed. At a trial, it is very common for the opposing sides to present competing accounts of events that share certain features. To use Allen’s example, the fact that the accused drove to a particular town on a particular day and time is consistent with the prosecution’s case that he was driving there to commit a murder and also with the defence’s case that he was driving there to visit his mother. This fact, being a common feature of both sides’ explanations of the material events, is as consistent with the hypothesis of guilt as with the hypothesis of innocence. On the likelihood ratio conception of relevance, this fact should be irrelevant and hence evidence of it should not be allowed to be adduced. But in such cases, the court will let the evidence in (Park et al. 2010: 10). The mathematical theory of relevance cannot account for this. (For critical discussion of this claim, see section 4.2 of the entry on legal probabilism .) It is argued that an alternative theory of relevance better fits legal practice and is thus to be preferred. On an explanatory conception of relevance, evidence is relevant if it is explained by or provides a reason for believing the particular explanation of the material events offered by the side adducing the evidence, and it remains relevant even where, as in our example, the evidence also supports or forms part of the explanation offered by the opponent (Pardo and Allen 2008: 241–2; Pardo 2013: 600).

One possible response to the above challenge to the likelihood ratio theory of relevance is to deny that it was ever meant to be the exclusive test of relevance. Evidence is relevant if the likelihood ratio is other than 1:1. But evidence may also be relevant on other grounds, such as when it provides for a richer narrative or helps the court in understanding other evidence. It is for these reasons that witnesses are routinely allowed to give their names and parties may present diagrams, charts and floor plans (so-called “demonstrative evidence”) at the trial (McCormick 2013: 995). The admission of evidence in the scenario painted by Allen above has been explained along a similar line (Park et al. 2010: 16).

The concept of relevance examined in the preceding section is commonly known as “logical relevance”. This is somewhat of a misnomer: “Relevance is not a matter of logic, but depends on matters of fact” (Haack 2004: 46). In our earlier example, the relevance of the fact that the accused has type A blood depends obviously on the state of the world. On the understanding that relevance is a probabilistic relation, it is tempting to think that in describing relevance as “logical”, one is subscribing to a logical theory of probability (cf. Franklin 2011). However, the term “logical relevance” was not originally coined with this connotation in mind. In the forensic context, “logic” is used loosely and refers to the stock of background beliefs or generalisations and the type of reasoning that judges and lawyers are fond of labelling as “commonsense” (MacCrimmon 2001–2002; Twining 2006: 334–335).

A key purpose of using the adjective “logical” is to flag the non-legal character of relevance. As Thayer (1898: 269) famously claimed, relevance “is an affair of logic and not of law.” This is not to say that relevance has no legal dimension. The law distinguishes between questions of law and questions of fact. An issue of relevance poses a question of law that is for the judge to decide and not the jury, and so far as relevance is defined in legal sources (for example, in Federal Rule of Evidence 401 mentioned above), the judge must pay heed to the legal definition. But legal definitions of relevance are invariably very broad. Relevance is said to be a logical, and non-legal, concept in the sense that in answering a question of relevance and in applying the definition of relevance, the judge has necessarily to rely on extra-legal resources and is not bound by legal precedents. Returning to Federal Rule of Evidence 401, it states generally that evidence is relevant if “it has a tendency to make a fact more or less probable than it would be without the evidence”. In deciding whether the evidence sought to be adduced does have this tendency, the judge has to look outside the law. Thayer was most insistent on this. As he put it, “[t]he law furnishes no test of relevancy. For this, it tacitly refers to logic and general experience” (Thayer 1898: 265). That the accused’s favorite color is blue is, barring extraordinary circumstances, irrelevant to the question of his intention to commit theft. It is not the law that tells us so but “logic and general experience”. On Thayer’s view, the law does not control or regulate the assessment of relevance; it assumes that judges are already in possession of the (commonsense) resources to undertake this assessment.

Wigmore adopts a different position. He argues, against Thayer, that relevance is a legal concept. There are two strands to his contention. The first is that for evidence to be relevant in law, “a generally higher degree of probative value” is required “than would be asked in ordinary reasoning”:

legal relevance denotes…something more than a minimum of probative value. Each single piece of evidence must have a plus value. (cf. Pattenden 1996–7: 373)

As Wigmore sees it, the requirement of "plus value" guards against the jury "being satisfied by matters of slight value, capable of being exaggerated by prejudice and hasty reasoning" (Wigmore 1983b: 969, cf. 1030–1031). Opponents of Wigmore acknowledge that there may be sound policy reasons for excluding evidence of low probative value. Receiving the evidence at the trial might raise a multiplicity of issues, incur too much time and expense, confuse the jurors or produce undue prejudice in their minds. When the judge excludes evidence for any of these reasons, and the judge has the discretion to do so in many countries, the evidence is excluded despite it being relevant (e.g., United States' Federal Rule of Evidence 403). Relevance is a relation between facts and the aforesaid reasons for exclusion are extrinsic to that relation; they are grounded in considerations such as limitation of judicial resources and jury psychology. The notion of "plus value" confuses relevance with extraneous considerations (James 1941; Trautman 1952).

There is a second strand to Wigmore’s contention that relevance is a legal concept. Relevance is legal in the sense that the judge is bound by previously decided cases (“judicial precedents”) when he has to make a ruling on the relevance of a proposed item of evidence.

So long as Courts continue to declare…what their notions of logic are, just so long will there be rules of law which must be observed. (Wigmore 1983a: 691)

Wigmore cites in support the judgment of Cushing C.J. in State v LaPage where it was remarked:

[T]here are many instances in which the evidence of particular facts as bearing on particular issues has been so often the subject of discussion in courts of law, and so often ruled upon, that the united logic of a great many judges and lawyers may be said to furnish…the best evidence of what may be properly called common-sense, and thus to acquire the authority of law. (1876 57 N.H. 245 at 288 [Supreme Court, New Hampshire])

Wigmore’s position on relevance is strangely at odds with his strong stand against the judge being bound by judicial precedents in assessing the weight or credibility of evidence (Wigmore 1913). More importantly, the second strand of his argument also does not sit well with the first strand. If, as Wigmore contends, evidence must have a plus value to make it legally relevant, the court has to consider the probative value of the evidence and to weigh it against the amount of time and expense likely to be incurred in receiving the evidence, the availability of other evidence, the risk of the evidence misleading or confusing the trier of fact and so forth. Given that the assessment of plus value and, hence, legal relevance is so heavily contextual, it is difficult to see how a judicial precedent can be of much value in another case in determining a point of legal relevance (James 1941: 702).

2.2 Materiality and Facts-in-Issue

We have just considered the first condition of receivability, namely, relevance. That fact A is relevant to fact B is not sufficient to make evidence of fact A receivable in court. In addition, B must be a "material" fact. The materiality of facts in a particular case is determined by the law applicable to that case. In a criminal prosecution, it depends on the law which defines the offence with which the accused is charged and at a civil trial, the law which sets out the elements of the legal claim that is being brought against the defendant (Wigmore 1983a: 15–19; Montrose 1954: 536–537).

Imagine that the accused is prosecuted for the crime of rape and the alleged victim’s behaviour (fact A ) increases the probability that she had consented to have sexual intercourse with the accused (fact B ). On the probabilistic theory of relevance that we have considered, A is relevant to B . Now suppose that the alleged victim is a minor. Under criminal law, it does not matter whether she had consented to the sexual intercourse. If B is of no legal consequence, the court will not allow evidence of A to be adduced for the purpose of proving B : the most obvious reason is that it is a waste of time to receive the evidence.

Not all material facts are necessarily in dispute. Suppose the plaintiff sues the defendant for breach of contract. Under the law of contract, to succeed in this action, the plaintiff must prove the following three elements: that there was a contract between the parties, that the defendant was in breach of the contract, and that the plaintiff had suffered loss as a result of that breach. The defendant may concede that there was a contract and that he was in breach of it but deny that the plaintiff had suffered any loss as a result of that breach. In such a situation, only the last of the material facts is disputed. Following Stephen’s terminology, a disputed material fact is called a “fact in issue” (Stephen 1872: 9).

The law does not allow evidence to be adduced to prove facts that are immaterial. Whether evidence may be adduced to prove a material fact may depend on whether the material fact is disputed; for instance, the requirement that it must be disputed exists under Rule 210 of the Evidence Code of California but not Rule 401 of the Federal Rules of Evidence in the United States. “Relevance” is often used in the broader sense that encompasses the concepts under discussion. Evidence is sometimes described as “irrelevant” not for the reason that no logical inference can be drawn to the proposition that is sought to be proved (in our example, A is strictly speaking relevant to B ) but because that proposition is not material or not disputed (in our example, B is not material). [ 10 ] This broader usage of the term “relevance”, though otherwise quite harmless, does not promote conceptual clarity because it runs together different concepts (see James 1941: 690–691; Trautman 1952: 386; Montrose 1954: 537).

2.3 Admissibility

A further condition must be satisfied for evidence to be received in legal proceedings. There are legal rules that prohibit evidence from being presented at a trial even though it is relevant to a factual proposition that is material and in issue. These rules render the evidence to which they apply “inadmissible” and require the judge to “exclude” it. Two prominent examples of such rules of admissibility or rules of exclusion are the rule against hearsay evidence and the rule against character evidence. This section considers the relation between the concept of relevance and the concept of admissibility. The next section ( section 2.3.2 ) discusses general arguments for and against exclusionary or admissibility rules.

Here, again, the terminology is imprecise. Admissibility and receivability are not clearly distinguished. It is common for irrelevant evidence, or evidence of an immaterial fact to be described as “inadmissible”. What this means is that the court will refuse to receive evidence if it is irrelevant or immaterial. But, importantly, the court also excludes evidence for reasons other than irrelevance and immateriality. For Montrose, there is merit in restricting the concept of “inadmissibility” to the exclusion of evidence based on those other reasons (Montrose 1954: 541–543). If evidence is rejected on the ground of irrelevance, it is, as Thayer (1898: 515) puts it, “the rule of reason that rejects it”; if evidence is rejected under an admissibility or exclusionary rule, the rejection is by force of law. The concepts of admissibility and materiality should also be kept apart. This is because admissibility or exclusionary rules serve purposes and rationales that are distinct from the law defining the crime or civil claim that is before the court and it is this law that determines the materiality of facts in the dispute.

Thayer (1898: 266, 530) was influential in his view that the law of evidence has no say on logical relevance and that its main business is in dealing with admissibility. If the evidence is logically irrelevant, it must for that reason be excluded. If the evidence is logically relevant, it will be received by the court unless the law—in the form of an exclusionary or admissibility rule—requires its exclusion. In this scheme, the concept of relevance and the concept of admissibility are distinct: indeed, admissibility rules presuppose the relevance of the evidence to which they apply.

Stephen appears to hold a different view, one in which the concept of admissibility is apparently absorbed by the concept of relevance. Take, for example, Stephen’s analysis of the rule that in general no evidence may be adduced to prove “statements as to facts made by persons not called as witnesses”, in short, hearsay (Stephen 1872: 122). As a general rule, no evidence may be given of hearsay because the law prohibits it. The question then arises as to the rationale for this prohibition. Stephen’s answer to this question is often taken to be that hearsay is not “relevant” and he is criticised for failing to see the difference between relevance and admissibility (Whitworth 1881: 3; Thayer 1898: 266–268; Pollock 1876, 1899; Wigmore 1983a: §12). His critics point out that hearsay has or can have probative value and evidence of hearsay is excluded despite or regardless of its relevance. On the generalisation that there is no smoke without fire, the fact that a person claimed that p in a statement made out-of-court does or can have a bearing on the probability that p , and p may be (logically relevant to) a material fact in the dispute.

Interestingly, Stephen seemed to have conceded as much. He acknowledged that a policeman or a lawyer engaged in preparing a case would be negligent if he were to shut his ears to hearsay. Hearsay is one of those facts that are “apparently relevant but not really so” (Stephen 1872: 122; see also Stephen 1886: xi). In claiming that hearsay is irrelevant, Stephen appears to be merely stating the effect of the law: the law requires that hearsay be treated as irrelevant. He offered a variety of justifications for excluding hearsay evidence: its admissibility would “present a great temptation to indolent judges to be satisfied with second-hand reports” and “open a wide door to fraud”, with the result that “[e]veryone would be at the mercy of people who might tell a lie, and whose evidence could neither be tested nor contradicted” (Stephen 1872: 124–125). For his detractors, these are reasons of policy and fairness and it disserves clarity to sneak such considerations into the concept of relevance.

Although there is force to the criticism that Stephen had unhelpfully conflated admissibility and relevance (understood as logical relevance), something can perhaps be said in his defence. Exclusionary rules or rules of admissibility—at any rate, many of them—are more accurately seen as excluding forms of reasoning rather than prohibiting proof of certain types of facts (McNamara 1986). This is certainly true of the hearsay rule. On one authoritative definition of the rule (decision of the Privy Council in Subramaniam v PP , (1956) 1 Weekly Law Reports 965), what it prohibits is the use of a hearsay statement to prove the truth of the facts asserted therein. [ 11 ] The objection is to the drawing of the inference that p from X ’s out-of-court statement that p where X is not available to be examined in court. But the court will allow the evidence of X ’s hearsay statement to be admitted—it will allow proof of the statement— where the purpose of adducing the evidence is to persuade the court that X did make the statement and this fact is relevant for some other purpose. For instance, it may be relevant as to the state of mind of the person hearing the statement, and his state of mind may be material to his defence of having acted under duress. Hence, two writers have commented that “there is no such thing as hearsay evidence , only hearsay uses ” (Roberts and Zuckerman 2010: 385).

Other admissibility rules are also more accurately seen as targeted at forms of reasoning and not types of facts. In the United States, Federal Rule of Evidence 404(a)(1) bars the use of evidence of a person’s character “to prove that on a particular occasion the person acted in accordance with the character” and Federal Rule of Evidence 404(b)(1) provides that evidence of a crime or wrong

is not admissible to prove a person’s character in order to show that on a particular occasion the person acted in accordance with the character.

It is doubtful that evidence of a person’s character and past behaviour can have no probabilistic bearing on his behaviour on a particular occasion; on a probabilistic conception of relevance, it is difficult to see why the evidence is not relevant. Even so, there may be policy, moral or other reasons for the law to prohibit certain uses of character evidence. In declaring a fact as irrelevant for a particular purpose, we are not necessarily saying or implying anything about probability. We may be expressing a normative judgment. For policy, moral or other reasons, the law takes the position that hearsay or the accused’s character or previous misconduct must not be used as the premise for a particular line of reasoning. The line of reasoning might be morally objectionable (“give a dog a bad name and hang him for it”) or it might be unfair to permit the drawing of the inference when the opponent was not given a fair opportunity to challenge it (as in the hearsay situation) (Ho 2008: chs. 5, 6). If we take a normative conception of relevance instead of a logical or probabilistic one, it is not an abuse of language to describe inadmissible evidence as irrelevant if what is meant is that the evidence ought not to be taken into account in a certain way.

On one historical account, admissibility or exclusionary rules are the product of the jury system where citizens untrained in assessing evidence sit as judges of fact. These rules came about because it was thought necessary to keep away from inexperienced jurors certain types of evidence that may mislead or be mishandled by them—for instance, evidence to which they are likely to give too much weight or that carries the risk of creating unfair prejudice in their minds (Thayer 1898; Wigmore 1935: 4–5). Epistemic paternalism is supposedly at play (Leiter 1997: 814–5; Allen and Leiter 2001: 1502). Subscription to this theory has generated pressure for the abolition of exclusionary rules with the decline of the jury system and the replacement of lay persons with professional judges as triers of fact. There is doubt as to the historical accuracy of this account; at any rate, it does not appear capable of explaining the growth of all exclusionary rules (Morgan 1936–37; Nance 1988: 278–294).

Even if the theory is right, it does not necessarily follow that exclusionary rules should be abolished once the jury system is removed. Judges may be as susceptible to the same cognitive and other failings as the jury and there may be the additional risk that judges may over-estimate their own cognitive and intellectual abilities in their professional domain. Hence, there remains a need for the constraints of legal rules (Schauer 2006: 185–193). But the efficacy of these rules in a non-jury system is questionable. The procedural reality is that judges will have to be exposed to the evidence in order to decide on its admissibility. Since a judge cannot realistically be expected to erase the evidence from his mind once he has decided to exclude it, there seems little point in excluding the evidence; we might as well let the evidence in and allow the judge to give the evidence the probative value that it deserves (Mnookin 2006; Damaška 2006; cf. Ho 2008: 44–46).

Bentham was a strong critic of exclusionary rules. He was much in favour of “freedom of proof” understood as free access to information and the absence of formal rules that restrict such access (Twining 2006: 232, n 65). The direct object of legal procedure is the “rectitude of decision”, by which he means the correct application of substantive law to true findings of facts. The exclusion of relevant evidence—evidence capable of casting light on the truth—is detrimental to this end. Hence, no relevant evidence should be excluded; the only exceptions he would allow are where the evidence is superfluous or its production would involve preponderant delay, expense or vexation (Bentham 1827: Book IX; Bentham 1825: Book VII; Twining 1985: ch. 2). Bentham’s argument has been challenged on various fronts. It is said that he overvalued the pursuit of truth, undervalued procedural fairness and procedural rights, and placed too much faith in officials, underestimating the risk of abuse when they are given discretion unfettered by rules (Twining 1985: 70–71).

Even if we agree with Bentham that rectitude of decision is the aim of legal procedure and that achieving accuracy in fact-finding is necessary to attain this aim, it is not obvious that a rule-based approach to admissibility will undermine this aim in the long run. Schauer has defended exclusionary rules of evidence along a rule-consequentialist line. Having the triers of fact follow rules on certain matters instead of allowing them the discretion to exercise judgment on a case-by-case basis may produce the greatest number of favourable outcomes in the aggregate. It is in the nature of a formal rule that it has to be followed even when doing so might not serve the background reason for the rule. If hearsay evidence is thought to be generally unreliable, the interest of accuracy may be better served overall to require such evidence to be excluded without regard to its reliability in individual cases. Given the imperfection of human reason and our suspicion about the reasoning ability of the fact-finder, allowing decisions to be taken individually on the reliability and admissibility of hearsay evidence might over time produce a larger proportion of misjudgements than on the rule-based approach (Schauer 2006: 180–185; Schauer 2008). However, this argument is based on a large assumption about the likely effects of having exclusionary rules and not having them, and there is no strong empirical basis for thinking that the consequences are or will be as alleged (Goldman 1999: 292–295; Laudan 2006: 121–122).

Other supporters of exclusionary rules build their arguments on a wide range of different considerations. The literature is too vast to enter into details. Here is a brief mention of some arguments. On one theory, some exclusionary rules are devices that serve as incentives for lawyers to produce the epistemically best evidence that is reasonably available (Nance 1988, 2016: 195–201). For example, if lawyers are not allowed to rely on second-hand (hearsay) evidence, they will be forced to seek out better (first-hand) evidence. On another theory, exclusionary rules allocate the risks of error. Again, consider hearsay. The problem with allowing a party to rely on hearsay evidence is that the opponent has no opportunity to cross-examine the original maker of the statement and is thus deprived of an important means of attacking the reliability of the evidence. Exclusionary rules in general insulate the party against whom the evidence is sought to be adduced from the risks of error that the evidence, if admitted, would have introduced. The distribution of such risks is said to be a political decision that should not be left to the discretion of individual fact-finders (Stein 2005; cf. Redmayne 2006 and Nance 2007a: 154–164). It has also been argued that the hearsay rule and the accompanying right to confront witnesses promote the public acceptance and stability of legal verdicts. If the court relies on direct evidence, it can claim superior access to the facts (having heard from the horse’s mouth, so to speak) and this also reduces the risk of new information emerging after the trial to discredit the inference that was drawn from the hearsay evidence (the original maker of the statement might turn up after the trial to deny the truth of the statement that was attributed to him) (Nesson 1985: 1372–1375; cf. Park 1986; Goldman 1999: 282; Goldman 2005: 166–167).

3. Strength of Evidence

The decision whether to allow a party to adduce a particular item of evidence is one that the judge has to make and arises in the course of a trial. Section 2 above dealt with the conditions that must be satisfied for a witness’s testimony, a document or an object to be received as evidence. At the end of the trial, the fact-finder must consider all the evidence that has been presented and reach a verdict. Although verdict deliberation is sometimes subjected to various forms of control through legal devices such as presumptions and corroboration rules, such control is limited and the fact-finder is expected to exercise personal judgment in the evaluation of evidence (Damaška 2019). Having heard or seen the evidence, the fact-finder now has to evaluate or ‘weigh’ it in reaching the verdict. Weight can refer to any of the following three properties of evidence: (a) the probative value of individual items of evidence, (b) the sufficiency of the whole body of evidence adduced at the trial in meeting the standard of proof, or (c) the relative completeness of this body of evidence. The first two aspects of weight are familiar to legal practitioners but the third has been confined to academic discussions. These three ideas are discussed in the same order below.

3.1 Probative Value of Specific Items of Evidence

In reaching the verdict, the trier of fact has to assess the probative value of the individual items of evidence which have been received at the trial. The concept of probative value can also play a role at the prior stage (which was the focus in section 2) where the judge has to make a ruling on whether to receive the evidence in the first place. In many legal systems, if the judge finds the probative value of a proposed item of evidence to be low and substantially outweighed by countervailing considerations, such as the risk of causing unfair prejudice or confusion, the judge can refuse to let the jury hear or see the evidence (see, e.g., Rule 403 of the United States' Federal Rules of Evidence).

The concept of probative value (or, as it is also called, probative force) is related to the concept of relevance. Section 2.1.2 above introduced and examined the claim that the likelihood ratio is the measure of relevance. To recapitulate, the likelihood of an item of evidence, E (in our previous example, the likelihood of a blood type match) given a hypothesis H (that the accused is in fact guilty) is compared with the likelihood of E given the negation of H (that the accused is in fact innocent). Prior to the introduction of E, one may have formed some belief about H based on other evidence that one already has. This prior belief does not affect the likelihood ratio since its computation is based on the alternative assumptions that H is true and that H is false (Kaye 1986a; Kaye and Koehler 2003; cf. Davis and Follette 2002 and 2003). Rulings on relevance are made by the judge when objections of irrelevance are raised in the course of the trial. The relevance of an item of evidence is supposedly assessed on its own, without consideration of other evidence, and, indeed, much of the other evidence may have yet to be presented at the point when the judge rules on the relevance of a particular item of evidence (Mnookin 2013: 1544–5). [ 12 ]

Probative value, as with relevance, has been explained in terms of the likelihood ratio (for detailed examples, see Nance and Morris 2002; Finkelstein and Levin 2003). It was noted earlier that evidence is either relevant or not, and, on the prevailing understanding, it is relevant so long as the likelihood ratio deviates from 1:1. But evidence can be more or less probative depending on the value of the likelihood ratio. In our earlier example, the probative value of a blood type match was 1.0:0.5 (or 2:1) as 50% of the suspect population had the same blood type as the accused. But suppose the blood type is less common and only 25% of the suspect population has it. The probative value of the evidence is now 1.0:0.25 (or 4:1). In both cases, the evidence is relevant; but the probative value is greater in the latter than in the former scenario. It is tempting to describe probative value as the degree of relevance but this would be misleading as relevance in law is a binary concept.

There is a second way of thinking about probative value. On the second view, but not on the first, the probative value of an item of evidence is assessed contextually. The probative value of E may be low given one state of the other evidence and substantial given a different body of other evidence (Friedman 1986; Friedman and Park 2003; cf. Davis and Follette 2002, 2003). Where the other evidence shows that a woman had died from falling down an escalator at a mall while she was out shopping, her husband’s history of spousal battery is unlikely to have any probative value in proving that he was responsible for her death. But where the other evidence shows that the wife had died of injuries in the matrimonial home, and the question is whether the injuries were sustained from an accidental fall from the stairs or inflicted by the husband, the same evidence of spousal battery will now have significant probative value.

On the second view, the probative value of an item of evidence ( E ) is not measured simply by the likelihood ratio as it is on the first view. Probative value is understood as the degree to which E increases (or decreases) the probability of the proposition or hypothesis ( H ) in support of (or against) which E is led. The probative value of E is measured by the difference between the probability of H given E (the posterior probability) and the probability of H absent E (the prior probability) (Friedman 1986; James 1941: 699).

Probative value of \(E = P(H | E) - P(H)\)

\(P(H | E)\) (the posterior probability) is derived by applying Bayes’ theorem—that is, by expressing the prior probability as odds, multiplying those prior odds by the likelihood ratio, and converting the resulting posterior odds back into a probability (see discussion in section 3.2.2 below). On the present view, while the likelihood ratio does not itself measure the probative value of E, it is nevertheless a crucial component in the assessment.
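
The following is a rough illustration of this second measure (a sketch only; the helper names and the sample priors are mine): it updates a prior in odds form with a likelihood ratio and takes the difference between posterior and prior, showing how the same item of evidence can have different probative value against different bodies of other evidence.

```python
def posterior_probability(prior: float, lr: float) -> float:
    """Update a prior probability with a likelihood ratio via odds-form Bayes."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1 + posterior_odds)

def probative_value(prior: float, lr: float) -> float:
    """Second-view probative value: P(H | E) minus P(H)."""
    return posterior_probability(prior, lr) - prior

# The same item of evidence (LR = 4) against different illustrative priors:
print(round(probative_value(0.10, 4.0), 3))  # 0.208
print(round(probative_value(0.50, 4.0), 3))  # 0.3
print(round(probative_value(0.90, 4.0), 3))  # 0.073
```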

A major difficulty with both of the mathematical conceptions of probative value that we have just examined is that for most evidence, obtaining the figures necessary for computing the likelihood ratio is problematic (Allen 1991: 380). Exceptionally, quantitative base-rate data exist, as in our blood type example. Where objective data is unavailable, the fact-finder has to draw on background experience and knowledge to come up with subjective values. In our blood type example, a critical factor in computing the likelihood ratio was the percentage of the “suspect population” who had the same blood type as the accused. “Reference class” is the general statistical term for the role that the suspect population plays in this analysis. How should the reference class of “suspect population” be defined? Should we look at the population of the country as a whole or of the town or the street where the alleged murder occurred? What if it occurred at an international airport where most of the people around are foreign visitors? Or what if it is shown that both the accused and the victim were at the time of the alleged murder inmates of the same prison? Should we then take the prison population as the reference class? The distribution of blood types may differ according to which reference class is selected. Sceptics of mathematical modelling of probative value emphasize that data from different reference classes will have different explanatory power and the choice of the reference class is open to—and should be subjected to—contextual argument and requires the exercise of judgment; there is no a priori way of determining the correct reference class. (On the reference class problem in legal factfinding, see, in addition to references cited in the rest of this section, Colyvan, Regan, and Ferson 2001; Tillers 2005; Allen and Roberts 2007.)

Some writers have proposed quantifiable ways of selecting, or assisting in the selection of, the appropriate reference class. On one suggestion, the court does not have to search for the optimal reference class. A general characteristic of an adversarial system of trial is that the judge plays a passive role; it is up to the parties to come up with the arguments on which they want to rely and to produce evidence in support of their respective arguments. This adversarial setting makes the reference class problem more manageable as the court need only decide which of the reference classes relied upon by the parties is to be preferred. And this can be done by applying one of a variety of technical criteria that statisticians have developed for comparing and selecting statistical models (Cheng 2009). Another suggestion is to use the statistical method of “feature selection” instead. The ideal reference class is defined by the intersection of all relevant features of the case, and a feature is relevant if it is correlated with the matter under enquiry (Franklin 2010, 2011: 559–561). For instance, if the amount of drug likely to be smuggled is reasonably believed to co-vary with the airport through which it is smuggled, the country of origin and the time period, and no other feature on which data is available is shown to be relevant, the ideal reference class is the class of drug smugglers passing through that airport originating from that country and during that time period. Both suggestions have self-acknowledged limitations: not least, they depend on the availability of suitable data. Also, as Franklin stresses, while statistical methods “have advice to offer on how courts should judge quantitative evidence”, they do so “in a way that supplements normal intuitive legal argument rather than replacing it by a formula” (Franklin 2010: 22).

The reference class problem is not confined to the probabilistic assessment of the probative value of individual items of evidence. It is a general difficulty with a mathematical approach to legal proof. In particular, the same problem arises on a probabilistic interpretation of the standard of proof when the court has to determine whether the standard is met based on all the evidence adduced in the case. This topic is explored in section 3.2 below but it is convenient at this juncture to illustrate how the reference class problem can also arise in this connection. Let it be that the plaintiff sues Blue Bus Company to recover compensation for injuries sustained in an accident. The plaintiff testifies, and the court believes on the basis of his testimony, that he was run down by a recklessly driven bus. Unfortunately, it was dark at the time and he cannot tell whether the bus belonged to Blue Bus Company. Assume further that there is also evidence which establishes that Blue Bus Company owns 75% of the buses in the town where the accident occurred and the remaining 25% is owned by Red Bus Company. No other evidence is presented. To use the data as the basis for inferring that there is 0.75 probability that the bus involved in the accident was owned by Blue Bus Company would seem to privilege the reference class of “buses operating in the town” over other possible reference classes such as “buses plying the street where the accident occurred” or “buses operating at the time in question” (Allen and Pardo 2007a: 109). Different reference classes may produce very different likelihood ratios. It is crucial how the reference class is chosen and this is ultimately a matter of argument and judgment. Any choice of reference class (other than the class that shares every feature of the particular incident, which is, in effect, the unique incident itself) is in principle contestable.
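
The sensitivity of the resulting figure to the choice of reference class can be made concrete with a toy sketch; only the 75% town-wide share comes from the example, and the narrower-class figures below are invented purely for illustration.

```python
# Hypothetical market shares of Blue Bus Company under different reference classes.
# Only the 0.75 town-wide figure comes from the example; the others are invented.
reference_classes = {
    "buses operating in the town": 0.75,
    "buses plying the street of the accident (hypothetical)": 0.40,
    "buses operating at the time in question (hypothetical)": 0.90,
}
for reference_class, p_blue in reference_classes.items():
    print(f"{reference_class}: P(bus owned by Blue Bus Company) = {p_blue}")
```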

Critics of the mathematization of legal proof raise this point as an example of inherent limitations to the mathematical modelling of probative value (Allen and Pardo 2007a). [ 13 ] Allen and Pardo propose an alternative, the explanatory theory of legal proof. They claim that this theory has the advantage of avoiding the reference class problem because it does not attempt to quantify probative value (Pardo 2005: 374–383; Pardo and Allen 2008: 261, 263; Pardo 2013: 600–601). Suppose a man is accused of killing his wife. Evidence is produced of his extra-marital affair. The unique probative value of the accused’s infidelity cannot be mathematically computed from statistical base rates of infidelity and uxoricides (husbands murdering wives). In assessing its probative value, the court should look instead at how strongly the evidence of infidelity supports the explanation of the material events put forward by the side adducing the evidence and how strongly it challenges the explanation offered by the opponent. For instance, the prosecution may be producing the evidence to buttress its case that the accused wanted to get rid of his wife so that he could marry his mistress, and the defence may be advancing the alternative theory that the couple was unusual in that they condoned extra-marital affairs and had never let it affect their loving relationship. How much probative value the evidence of infidelity has depends on the strength of the explanatory connections between it and the competing hypotheses, and this is not something that can be quantified.

But the disagreement in this debate is not as wide as it might appear. The critics concede that formal models for evaluating evidence in law may be useful. What they object to is

scholarship arguing … that such models establish the correct or accurate probative value of evidence, and thus implying that any deviations from such models lead to inaccurate or irrational outcomes. (Allen and Pardo 2007b: 308)

On the other side, it is acknowledged that there are limits to mathematical formalisation of evidential reasoning in law (Franklin 2012: 238–9) and that context, argument and judgment do play a role in identifying the reference class (Nance 2007b).

3.2 Sufficiency of Evidence and the Standards of Proof

In section 3.1 above, we concentrated on the weight of evidence in the sense of the probative value of individual items of evidence. The concept of weight can also apply to the total body of evidence presented at the trial; here “weight” is commonly referred to as the “sufficiency of evidence”. [ 14 ] The law allocates the legal burden of proof between the parties to a dispute. For instance, at a criminal trial, the accused is presumed innocent and the burden is on the prosecution to prove that he is guilty as charged. To secure a conviction, the body of evidence presented at the trial must be sufficient to meet the standard of proof. Putting this generally, a verdict will be given in favour of the side bearing the legal burden of proof only if, having considered all of the evidence, the fact-finder is satisfied that the applicable standard of proof is met. The standard of proof has been given different interpretations.

On one interpretation, the standard of proof is a probabilistic threshold. In civil cases, the standard is the “balance of probabilities” or, as it is more popularly called in the United States, the “preponderance of evidence”. The plaintiff will satisfy this standard and succeed in his claim only if there is, on all the evidence adduced in the case, more than 0.5 probability of his claim being true. At criminal trials, the standard for a guilty verdict is “proof beyond a reasonable doubt”. Here the probabilistic threshold is thought to be much higher than 0.5 but courts have eschewed any attempt at authoritative quantification. Typically, a notional value, such as 0.9 or 0.95, is assumed by writers for the sake of discussion. For the prosecution to secure a guilty verdict, the evidence adduced at the trial must establish the criminal charge to a degree of probability that crosses this threshold. Where, as in the United States, there is an intermediate standard of “clear and convincing evidence” which is reserved for special cases, the probabilistic threshold is said to lie somewhere between 0.5 and the threshold for proof beyond reasonable doubt.

Kaplan was among the first to employ decision theory to develop a framework for setting the probabilistic threshold that represents the standard of proof. Since the attention in this area of the law tends to be on the avoidance of errors and their undesirable consequences, he finds it convenient to focus on disutility rather than utility. The expected disutility of an outcome is the product of the disutility (broadly, the social costs) of that outcome and the probability of that outcome. Only two options are generally available to the court: in criminal cases, it must either convict or acquit the accused and in civil cases, it has to give judgment either for the plaintiff or for the defendant. At a criminal trial, the decision should be made to convict where the expected disutility of a decision to acquit is greater than the expected disutility of a decision to convict. This is so as to minimize the expected disutilities. To put this in the form of an equation:

\(P \times D_{ag} > (1 - P) \times D_{ci}\)

where \(P\) is the probability that the accused is guilty on the basis of all the evidence adduced in the case, \(D_{ag}\) is the disutility of acquitting a guilty person and \(D_{ci}\) is the disutility of convicting an innocent person. A similar analysis applies to civil cases: the defendant should be found liable where the expected disutility of finding him not liable when he is in fact liable exceeds the expected disutility of finding him liable when he is in fact not liable.

On this approach, a person should be convicted of a crime only where \(P\) is greater than:

\(\frac{1}{1 + D_{ag}/D_{ci}}\)

The same formula applies in civil cases except that the two disutilities (\(D_{ag}\) and \(D_{ci}\)) will have to be replaced by their civil equivalents (framed in terms of the disutility of awarding the judgment to a plaintiff who in fact does not deserve it and the disutility of awarding the judgment to a defendant who in fact does not deserve it). On this formula, the crucial determinant of the standard of proof is the ratio of the two disutilities. In the civil context, the disutility of an error in one direction is deemed equal to the disutility of an error in the other direction. Hence, a probability of liability greater than 0.5 would suffice for a decision to enter judgment against the defendant (see Redmayne 1996: 171). The situation is different at a criminal trial. \(D_{ci}\), the disutility of convicting an innocent person, is considered far greater than \(D_{ag}\), the disutility of acquitting a guilty person. [ 15 ] Hence, the probability threshold for a conviction should be much higher than 0.5 (Kaplan 1968: 1071–1073; see also Cullison 1969).
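
A minimal sketch of this threshold calculation (the function name and the 10:1 disutility ratio below are illustrative assumptions, not figures taken from the sources cited):

```python
def conviction_threshold(d_ag: float, d_ci: float) -> float:
    """Probability of guilt above which convicting minimizes expected disutility:
    P > 1 / (1 + Dag / Dci)."""
    return 1.0 / (1.0 + d_ag / d_ci)

# Civil analogue: errors in either direction weighted equally -> threshold of 0.5.
print(conviction_threshold(1.0, 1.0))             # 0.5
# Criminal case: treating a wrongful conviction as, say, ten times worse than a
# wrongful acquittal (an illustrative ratio only) -> threshold of about 0.91.
print(round(conviction_threshold(1.0, 10.0), 2))  # 0.91
```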

An objection to this analysis is that it is incomplete. It is not enough to compare the costs of erroneous verdicts. The utility of an accurate conviction and the utility of an accurate acquittal should also be considered and factored into the equation (Lillquist 2002: 108). [ 16 ] This results in the following modification of the formula for setting the standard of proof:

\(\frac{1}{1 + \frac{U_{cg} - U_{ag}}{U_{ai} - U_{ci}}}\)

where \(U_{cg}\) is the utility of convicting the guilty, \(U_{ag}\) is the utility of acquitting the guilty, \(U_{ai}\) is the utility of acquitting the innocent and \(U_{ci}\) is the utility of convicting the innocent.
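
Under stated, purely illustrative utility assignments (none of the numbers below come from the literature cited), the modified threshold can be computed in the same way:

```python
def conviction_threshold_from_utilities(u_cg: float, u_ag: float,
                                        u_ai: float, u_ci: float) -> float:
    """Modified threshold: P > 1 / (1 + (Ucg - Uag) / (Uai - Uci))."""
    return 1.0 / (1.0 + (u_cg - u_ag) / (u_ai - u_ci))

# Illustrative utilities (higher is better): accurate verdicts valued at 10,
# acquitting the guilty at 0, convicting the innocent at -100.
print(round(conviction_threshold_from_utilities(u_cg=10, u_ag=0, u_ai=10, u_ci=-100), 3))
# 0.917
```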

Since the relevant utilities depend on the individual circumstances, such as the seriousness of the crime and the severity of the punishment, the decision-theoretic account of the standard of proof would seem, on both the simple and the modified version, to lead to the conclusion that the probabilistic threshold should vary from case to case (Lillquist 2002; Bartels 1981; Laudan and Saunders 2009; Ribeiro 2019). In other words, the standard of proof should be a flexible or floating one. This view is perceived to be problematic.

First, it falls short descriptively. The law requires the court to apply a fixed standard of proof for all cases within the relevant category. In theory, all criminal cases are governed by the same high standard and all civil cases are governed by the same lower standard. That said, it is unclear whether factfinders in reality adhere strictly to a fixed standard of proof (see Kaplow 2012: 805–809).

The argument is better interpreted as a normative argument—as advancing the claim about what the law ought to be and not what it is. The standard of proof ought to vary from case to case. But this proposal faces a second objection. For convenience, the objection will be elaborated in the criminal setting; in principle, civil litigants have the same two rights that we shall identify. According to Dworkin (1981), moral harm arises as an objective moral fact when a person is erroneously convicted of a crime. Moral harm is distinguished from the bare harm (in the form of pain, frustration, deprivation of liberty and so forth) that is suffered by a wrongfully convicted and punished person. While accused persons have the right not to be convicted if innocent, they do not have the right to the most accurate procedure possible for ascertaining their guilt or innocence. However, they do have the right that a certain weight or importance be attached to the risk of moral harm in the design of procedural and evidential rules that affect the level of accuracy. Accused persons have the further right to a consistent weighting of the importance of moral harm and this further right stems from their right to equal concern and respect. Dworkin’s theory carries an implication bearing on the present debate. It is arguable that to adopt a floating standard of proof would offend the second right insofar as it means treating accused persons differently with respect to the evaluation of the importance of avoiding moral harm. This difference in treatment is reflected in the different level of the risk of moral harm to which they are exposed.

There is a third objection to a floating standard of proof. Picinali (2013) sees fact-finding as a theoretical exercise that engages the question of what to believe about the disputed facts. What counts as “reasonable” for the purposes of applying the standard of proof beyond reasonable doubt is accordingly a matter for theoretical as opposed to practical reasoning. Briefly, theoretical reasoning is concerned with what to believe whereas practical reasoning is about what to do. Only reasons for belief are germane in theoretical reasoning. While considerations that bear on the assessment of utility and disutility provide reasons for action, they are not reasons for believing in the accused’s guilt. Decision theory cannot therefore be used to support a variable application of the standard of proof beyond reasonable doubt.

The third criticism of a flexible standard of proof does not directly challenge the decision-theoretic analysis of the standard of proof. On that analysis, it would seem that the maximisation of expected utility is the criterion for selecting the appropriate probabilistic threshold to apply and it plays no further role in deciding whether that threshold, once selected, is met on the evidence adduced in the particular case. It is not incompatible with the decision-theoretic analysis to insist that the question of whether the selected threshold is met should be governed wholly by epistemic considerations. However, it is arguable that what counts as good or strong enough theoretical reason for judging, and hence believing, that something is true is dependent on the context, such as what is at stake in believing that it is true. More is at stake at a trial involving the death penalty than in a case of petty shop-lifting; accordingly, there should be stronger epistemic justification for a finding of guilt in the first than in the second case. Philosophical literature on epistemic contextualism and on interest-relative accounts of knowledge and justified belief has been drawn upon to support a variant standard of proof (Ho 2008: ch. 4; see also Amaya 2015: 525–531). [ 17 ]

The premise of the third criticism is that the trier of fact has to make a finding on a disputed factual proposition based on his belief in the proposition. This is contentious. Beliefs are involuntary; we cannot believe something by simply deciding to believe it. The dominant view is that beliefs are context-independent; at any given moment, we cannot believe something in one context and not believe it in another. On the other hand, legal fact-finding involves choice and decision-making and it is dependent on the context; for example, evidence that is strong enough to justify a finding of fact in a civil case may not be strong enough to justify the same finding in a criminal case where the standard of proof is higher. It has been argued that the fact-finder has to base his findings not on what he believes but on what he accepts (Cohen 1991, 1992: 117–125; Beltrán 2006; cf. Picinali 2013: 868–869). Belief and acceptance are propositional attitudes: they are different attitudes that one can have in relation to a proposition. As Cohen (1992: 4) explains:

to accept that p is to have or adopt a policy of deeming, positing or postulating that p —i.e. of including that proposition or rule among one’s premises for deciding what to do or think in a particular context.

Understanding standards of proof in terms of mathematical probabilities is controversial. It is said to raise a number of paradoxes (Cohen 1977; Allen 1986, 1991; Allen and Leiter 2001; Redmayne 2008). Let us return to our previous example. The defendant, Blue Bus Company, owns 75% of the buses in the town where the plaintiff was injured by a recklessly driven bus and the remaining 25% is owned by Red Bus Company. No other evidence is presented. Leaving aside the reference class problem discussed above, there is a 0.75 probability that the accident was caused by a bus owned by the defendant. On the probabilistic interpretation of the applicable standard of proof (that is, the balance of probabilities), the evidence should be sufficient to justify a verdict in the plaintiff’s favour. But most lawyers would agree that the evidence is insufficient. Another familiar hypothetical scenario is set in the criminal context (Nesson 1979: 1192–1193). Twenty five prisoners are exercising in a prison yard. Twenty four of them suddenly set upon a guard and kill him. The remaining prisoner refuses to participate. We cannot in the ensuing confusion identify the prisoner who refrained from the attack. Subsequently, one prisoner is selected randomly and prosecuted for the murder of the guard. Those are the only facts presented at the trial. The applicable standard is proof beyond a reasonable doubt. Assume that the probabilistic threshold of this standard is 0.95. On the statistical evidence, there is a probability of 0.96 that the defendant is criminally liable. [ 18 ] Despite the statistical probability of liability exceeding the threshold, it is widely agreed that the defendant must be acquitted. In both of the examples just described, why is the evidence insufficient and what does this say about legal standards of proof?
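
The bare arithmetic behind the two hypotheticals, as a trivial sketch (the 0.95 criminal threshold is the notional figure assumed above):

```python
# Blue Bus: probability, on the market-share evidence alone, that the bus was the defendant's.
p_blue_bus = 0.75
# Prison yard: 24 of the 25 prisoners participated in the killing.
p_guilt = 24 / 25                     # 0.96
civil_threshold = 0.5
criminal_threshold = 0.95             # the notional figure assumed above
print(p_blue_bus > civil_threshold)   # True, yet liability is widely thought unwarranted
print(p_guilt > criminal_threshold)   # True, yet it is widely agreed the accused must be acquitted
```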

Various attempts have been made to find the answers (for surveys of these attempts, see Enoch and Fisher 2015: 565–571; Redmayne 2008, Ho 2008: 135–143, 168–170; Gardiner 2019b; section 6 of the entry on legal probabilism ). It has been argued that meeting a legal standard of proof is not merely or fundamentally a matter of adducing evidence to establish a mathematical probability of liability beyond a certain level. Standards of proof should be interpreted in epistemic rather than probabilistic terms. According to one interpretation, the evidence is sufficient to satisfy a standard of proof only if it is capable of justifying full or outright belief in the material facts that constitute legal liability and bare statistical evidence, as in our examples, cannot justify such a belief. (Nelkin 2021; Smith 2018; Buchak 2014; Ho 2008: 89–99.) On Smith’s account, the statistical evidence in our two examples fails to justify belief in the proposition that the defendant is liable because the evidence does not normically support that proposition. Evidence normically supports a proposition just in case the situation in which the evidence is true and the proposition is false is less normal, in the sense of requiring more explanation, than the situation in which the evidence and the proposition are both true. Where all that we have is statistical evidence, it could just so happen that the material proposition is false (it could just so happen that the accident-causing bus was red or that the accused was the one who refused to join in the murder), so no further explanation is needed where the proposition is false than where it is true (Smith 2018).

On a different epistemic interpretation, the evidence is sufficient to meet a legal standard of proof, and a finding of legal liability is permissible, only if the factfinder can gain knowledge of the defendant’s liability—to be precise, of the material facts establishing such liability—from the evidence (Duff et al. 2007: 87–91; Pardo 2010; for a critical overview of knowledge-centered accounts, see Gardiner forthcoming). High probability of liability alone will not suffice. On more subtle knowledge-centered theories, the standards of proof are met only if, on the available evidence, there is a sufficiently high probability that the fact finder knows that the defendant is liable (Littlejohn 2020 and 2021; Blome-Tillmann 2017), or only if the fact finder’s credence in the defendant’s liability exceeds the relevant legal threshold and the credence constitutes knowledge (Moss 2018). It is further claimed that the relevant knowledge necessary for a finding of liability cannot be obtained from statistical evidence alone (Littlejohn 2020 and 2021; Blome-Tillmann 2017; Moss 2018 and forthcoming). According to Thomson, this is because the statistical evidence (to take our first example, the 75% ownership of blue buses) is not causally connected with the fact sought to be proved and cannot guarantee the truth of the relevant belief (that the bus which caused the accident was blue) (Thomson 1986). An alternative argument is that knowledge requires the ruling out of all relevant alternatives and, to take our prison scenario, there is no evidence that addresses the possibility that the defendant was the one who refrained from joining in the attack or the possibility that the defendant is less likely to be guilty than an arbitrary prisoner in the yard. (See Moss forthcoming; Moss 2018: 213. Gardiner 2019a adapts the relevant alternatives framework to model legal standards of proof in a non-mathematical way while eschewing a knowledge account of those standards.) Another possible explanation for the failure to know relies on the notion of sensitivity. The belief that the defendant is liable is not sensitive to the truth where it is based on bare statistical evidence; in the bus example, evidence of the market share of buses remains the same whether it is true or not that a blue bus caused the accident (cf. Enoch, Spectre, and Fisher 2012; Enoch and Fisher 2015; Enoch and Spectre 2019 – while suggesting that the lack of knowledge has generally to do with the insensitivity of the belief, the authors deny that knowledge should matter to the imposition of legal liability). Yet another explanation is that it is unsafe to find a person liable on bare statistical evidence. Though safety is sometimes treated as a condition of knowledge (in that knowledge requires a true belief that is safe), one can treat safety as a condition for finding the defendant liable without also taking the position that the finding must be based on knowledge of liability. Safety is commonly understood in terms of whether a belief formed on the same basis would be true in close possible worlds. Roughly, a finding of liability is unsafe where it can easily be wrong in the sense that little in the actual world needs to change for it to be wrong. Whether the requirement of safety can explain why judgment should not be entered against the defendant in our two hypothetical cases would depend on whether it can easily happen that the accident-causing bus was red or that the accused is innocent. (See Pritchard 2015 and 2018; Pardo 2018; cf. Gardiner 2020.) While theorizing of standards of proof in epistemic terms has gathered pace in recent years, it is criticised for relying on unrealistic hypotheticals that fail to attend to the actual operation of legal systems and for making impossible epistemological demands (Allen 2020).

There is another paradox in the mathematical interpretation of the standard of proof. This is the “conjunction paradox”. To succeed in a civil claim (or a criminal prosecution), the plaintiff (or the prosecution) will have to prove the material facts—or “elements”—that constitute the civil claim (or criminal charge) that is before the court (see discussion of “materiality” in section 2.2 above). Imagine a claim under the law of negligence that rests on two elements: a breach of duty of care by the defendant (element A ) and causation of harm to the plaintiff (element B ). To win the case, the plaintiff is legally required to prove A and B . For the sake of simplicity, let A and B be mutually independent events. Suppose the evidence establishes A to a probability of 0.6 and B to a probability of 0.7. On the mathematical interpretation of the civil standard of proof, the plaintiff should succeed in his claim since the probability with respect to each of the elements exceeds 0.5. However, according to the multiplication rule of conventional probability calculus, the probability that A and B are both true is the product of their respective probabilities; in this example, it is only 0.42 (obtained by multiplying 0.6 with 0.7). Thus, the overall probability is greater that the defendant deserves to win than that the plaintiff deserves to win, and yet the verdict is awarded in favour of the plaintiff.
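
The arithmetic of the example, in a minimal sketch:

```python
p_breach = 0.6     # element A: breach of duty of care, P = 0.6
p_causation = 0.7  # element B: causation of harm, P = 0.7

# Element-by-element reading: each element exceeds 0.5, so the plaintiff wins.
print(p_breach > 0.5 and p_causation > 0.5)   # True
# Whole-case reading (independent elements): the conjunction falls below 0.5.
print(round(p_breach * p_causation, 2))       # 0.42
```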

One way of avoiding the conjunction paradox is to take the position that it should not be enough for each element to cross the probabilistic threshold; the plaintiff (or the prosecution) should win only if the probability of the plaintiff’s (or prosecution’s) case as a whole exceeds the applicable probabilistic threshold. So, in our example, the plaintiff should lose since the overall probability is below 0.5. But this suggested solution is unsatisfactory. The required level of overall probability would then turn on how many elements the civil claim or criminal charge happens to have. The greater the number of elements, the higher the level of probability to which, on average, each of them must be proved. This is thought to be arbitrary and hence objectionable. As two commentators noted, the legal definition of theft contains more elements than that for murder. Criminal law is not the same in all countries. We may take the following as a convenient approximation of what the law is in some countries: murder is (1) an act that caused the death of a person (2) that was done with the intention of causing the death, and to constitute theft, there must be (1) an intention to take property, (2) dishonesty in taking the property, (3) removal of the property from the possession of another person, and (4) lack of consent by that person. Since the offence of theft contains twice the number of elements as compared to murder, the individual elements for theft would have to be proved to a much higher level of probability (in order for the probability of their conjunction to cross the overall threshold) than the individual elements for the much more serious crime of murder (Allen and Leiter 2001: 1504–5). This is intuitively unacceptable.
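
A sketch of why the number of elements matters on this whole-case reading, assuming for simplicity independent elements each proved to the same probability and a notional criminal threshold of 0.95 (the helper name is mine):

```python
def per_element_probability(overall_threshold: float, n_elements: int) -> float:
    """Probability to which each of n independent elements must, on average, be
    proved for their conjunction to reach the overall threshold."""
    return overall_threshold ** (1 / n_elements)

# Notional criminal threshold of 0.95:
print(round(per_element_probability(0.95, 2), 3))  # two-element offence (murder): 0.975
print(round(per_element_probability(0.95, 4), 3))  # four-element offence (theft): 0.987
```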

Another proposal for resolving the conjunction paradox is to move away from thinking of the standard of proof as a quantified threshold of absolute probability and to construe it, instead, as a probability ratio. The fact-finder has to compare the probability of the evidence adduced at the trial under the plaintiff’s theory of the case with the probability of the evidence under the defendant’s theory of the case (the two need not add to 1), and award the verdict to the side with the higher probability (Cheng 2013). One criticism of this interpretation of the standard of proof is that it ignores, and does not provide a basis for ignoring, the margin by which one probability exceeds the other, and the difference in probability may vary significantly for different elements of the case (Allen and Stein 2013: 598).
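
A minimal sketch of the probability-ratio approach and of the criticism that it ignores margins (the function name and the sample figures are mine, not Cheng’s or the critics’):

```python
def verdict_by_probability_ratio(p_evidence_given_plaintiff: float,
                                 p_evidence_given_defendant: float) -> str:
    """Award the verdict to whichever party's theory of the case makes the
    evidence more probable (the two probabilities need not sum to 1)."""
    if p_evidence_given_plaintiff > p_evidence_given_defendant:
        return "plaintiff"
    return "defendant"

# The margin between the two probabilities is ignored:
print(verdict_by_probability_ratio(0.30, 0.29))  # plaintiff
print(verdict_by_probability_ratio(0.90, 0.10))  # plaintiff (same verdict, far larger margin)
```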

There is a deeper problem with the probabilistic conception of the standard of proof. There does not seem to be a satisfactory interpretation of probability that suits the forensic context. The only plausible candidate is the subjective meaning of probability according to which probability is construed as the strength of belief. The evidence is sufficient to satisfy the legal standard of proof on a disputed question of fact—for example, it is sufficient to justify the positive finding of fact that the accused killed the victim—only if the fact-finder, having considered the evidence, forms a sufficiently strong belief that the accused killed the victim. Guidance on how to process evidence and form beliefs can be found in a mathematical theorem known as Bayes’ theorem; it is the method by which an ideal rational fact-finder would revise or update his beliefs in the light of new evidence. [ 19 ] To return to our earlier hypothetical scenario, suppose the fact-finder initially believes the odds of the accused being guilty is 1:1 (“prior odds”) or, putting this differently, that there is a 0.5 probability of guilt. The fact-finder then receives evidence that blood of type A was found at the scene of the crime and that the accused has type A blood. Fifty percent of the population has this blood type. On the Bayesian approach, the posterior odds are calculated by multiplying the prior odds (1:1) by the likelihood ratio (which, as we saw in section 2.1.2 above, is 2:1). The fact-finder’s belief in the odds of guilt should now be revised to 2:1; the probability of guilt is now increased to 0.67 (Lempert 1977).
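
The updating step in this illustration, as a minimal sketch:

```python
prior_odds = 1.0                  # 1:1, i.e. a prior probability of guilt of 0.5
likelihood_ratio = 2.0            # P(match | guilty) / P(match | innocent) = 1.0 / 0.5
posterior_odds = prior_odds * likelihood_ratio        # 2:1
posterior_probability = posterior_odds / (1 + posterior_odds)
print(round(posterior_probability, 2))                # 0.67
```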

The subjectivist Bayesian theory of legal fact-finding has come under attack (see generally Amaya 2015: 82–93; Pardo 2013: 591). First, as we already saw in section 3.1 , ascertainment of the likelihood ratios is highly problematic. Secondly, the Bayesian theory is not sensitive to the weight of evidence which, roughly put, is the amount of evidence that is available. This criticism and the concept of weight are further explored in section 3.3 .

Thirdly, while Bayes’ theorem offers a method for updating probabilities in the light of new evidence, it is silent on what the initial probability should be. In a trial setting, the initial probability cannot be set at zero since this means certainty in the innocence of the accused. No new evidence can then make any difference; whatever the likelihood ratio of the evidence, multiplying it by zero (the prior probability) will still end up with a posterior probability of zero. On the other hand, starting with a positive initial probability is also problematic. This is especially so in a criminal case. To start a trial with some probability of guilt is to have the fact-finder harbouring some initial belief that the accused is guilty and this is not easy to reconcile with the presumption of innocence. (Tribe 1971: 1368–1372; cf. Posner 1999: 1514, suggesting starting the trial with prior odds of 50:50, criticized by Friedman 2000. The problem of fixing the prior probability is said to disappear if we base fact-finding simply on likelihood ratios: Sullivan 2019: 45–59.)
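
A toy sketch of the zero-prior point (the helper and the figures are mine; the update repeats the odds-form calculation used above):

```python
def posterior_from_prior(prior: float, likelihood_ratio: float) -> float:
    """Odds-form Bayesian update of a prior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(posterior_from_prior(0.0, 1000.0))            # 0.0 -- a zero prior cannot be moved
print(round(posterior_from_prior(0.5, 1000.0), 3))  # 0.999 -- but a 0.5 prior of guilt sits
                                                    # uneasily with the presumption of innocence
```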

Fourthly, we have thus far relied for ease of illustration on highly simplified—and therefore unrealistic—examples. In real cases, there are normally multiple and dependent items of evidence and the probabilities of all possible conjunctions of these items, which are numerous, will have to be computed. These computations are far too complex to be undertaken by human beings (Callen 1982: 10–15). The impossibility of complying with the Bayesian model undermines its prescriptive value.

Fifthly, according to Haack, the Bayesian theory has it the wrong way round. What matters is not the strength of the fact-finder’s belief itself. The standard of proof should be understood instead in terms of what it is reasonable for the fact-finder to believe in the light of the evidence presented, and this is a matter of the degree to which the belief is warranted by the evidence. Evidence is legally sufficient where it warrants the contested factual claim to the degree required by law. Whether a factual claim is warranted by the evidence turns on how strongly the evidence supports the claim, on how independently secure the evidence is, and on how much of the relevant evidence is available to the fact-finder (that is, the comprehensiveness of the evidence—see further discussion in section 3.3 below). Haack is against identifying degrees of warrant with mathematical probabilities. Degrees of warrant do not conform to the axioms of the standard probability calculus. For instance, where the evidence is weak, neither p nor not- p may be warranted; in contrast, the probability of p and the probability of not- p must add up to 1. Further, where the probability of p and the probability of q are both less than 1, the probability of p and q , being the product of the probability of p and the probability of q , is less than the probability of either. On the other hand, the degree of warrant for the conjunction of p and q may be higher than the warrant for either. [ 20 ] (See Haack 2004, 2008a,b, 2012, 2014 for the legal application of her general theory of epistemology. For her general theory of epistemology, see Haack 1993: ch. 4; Haack 2009: ch. 4; Haack 2003: ch. 3.)

Sixthly, research in experimental psychology suggests that fact-finders do not evaluate pieces of evidence one-by-one and in the unidirectional manner required under the mathematical model (Amaya 2015: 114–5). A holistic approach is taken instead where the discrete items of evidence are integrated into large cognitive structures (variously labelled as “mental models”, “stories”, “narratives” and “theories of the case”), and they are assessed globally against the legal definition of the crime or civil claim that is in dispute (Pennington and Hastie 1991, 1993; Pardo 2000). The reasoning does not progress linearly from evidence to a conclusion; it is bi-directional, going forward and backward: as the fact-finder’s consideration of the evidence inclines him towards a particular verdict, his leaning towards that conclusion will often produce a revision of his original perception and his assessment of the evidence (Simon 2004, 2011).

The holistic nature of evidential reasoning as revealed by these studies has inspired alternative theories that are of a non-mathematical nature. One alternative, already mentioned, is the “explanatory” or “relative plausibility” theory advanced by Allen together with Pardo and other collaborators (Allen 1986, 1991, 1994; Pardo 2000; Allen and Leiter 2001; Allen and Jehl 2003; Pardo and Allen 2008; Allen and Pardo 2019; cf. Nance 2001, Friedman 2001). [ 21 ] They contend that fact-finders do not reason in the fashion portrayed by the Bayesian model. Instead, they engage in generating explanations or hypotheses on the available evidence by a process of abductive reasoning or drawing “inferences to the best explanation”, and these competing explanations or hypotheses are compared in the light of the evidence. [ 22 ] The comparison is not of a hypothesis with the negation of that hypothesis, where the probability of a hypothesis is compared with the probability of its negation. Instead, the comparison is of one hypothesis with one or more particular alternative hypotheses as advocated by a party or as constructed by the fact-finder himself. On this approach, the plausibility of X, the factual account of the case that establishes the accused’s guilt or defendant’s liability, is compared with the plausibility of a hypothesis Y, a specific alternative account that points to the accused’s innocence or the defendant’s non-liability, and there may be more than one such specific alternative account.

On this theory, the evidence is sufficient to satisfy the preponderance of evidence standard when the best-available hypothesis that explains the evidence and the underlying events includes all of the elements of the claim. Thus, in a negligence case, the best-available hypothesis would have to include a breach of duty of care by the defendant and causation of harm to the plaintiff as these are the elements that must be proved to succeed in the legal claim. For the intermediate “clear-and-convincing” standard of proof, the best-available explanation must be substantially better than the alternatives. To satisfy the standard of proof beyond reasonable doubt, there must be a plausible explanation of the evidence that includes all of the elements of the crime and, in addition, there must be no plausible explanation that is consistent with innocence (Pardo and Allen 2008: 238–240; Pardo 2013: 603–604).

The relative plausibility theory itself is perceived to have a number of shortcomings. [ 23 ] First, the theory portrays the assessment of plausibility as an exercise of judgment that involves employment of various criteria such as coherence, consistency, simplicity, consilience, and more. However, the theory is sketchy on the meaning of plausibility and the criteria for evaluating plausibility are left largely unanalyzed. [ 24 ]

A second criticism of the relative plausibility theory is that, despite the purported utilisation of “inference to the best explanation” reasoning, the verdict is not controlled by the best explanation. For instance, even if the prosecution’s hypothesis is better than the defence’s hypothesis, neither may be very good. In these circumstances, the court must reject the prosecution’s hypothesis even though it is the best of the alternatives (Laudan 2007). One suggested mitigation of this criticism is to place some demand on the epistemic effort that the trier of fact must make (for example, by being sufficiently diligent and thorough) in constructing the set of hypotheses from which the best is to be chosen (Amaya 2009: 155).

The third criticism is targeted at holistic theories of evidential reasoning in general and not specifically at the relative plausibility theory. While it may be descriptively true that fact-finders decide verdicts by holistic evaluation of the plausibility of competing explanations, hypotheses, narratives or factual theories that are generated from the evidence, such forms of reasoning may conceal bias and prejudice that stand greater chances of exposure under a systematic approach such as Bayesian analysis (Twining 2006: 319; Simon 2004, 2011; Griffin 2013). A hypothesis constructed by the fact-finder may be shaped subconsciously by a prejudicial generalisation or background belief about the accused based on a certain feature, say, his race or sexual history. Individuating this feature and subjecting it to Bayesian scrutiny has the desirable effect of putting the generalisation or background belief under the spotlight and forcing the fact-finder to confront the problem of prejudice.

3.3 Completeness of Evidence

A third idea of evidential weight is prompted by this insight from Keynes (1921: 71):

As the relevant evidence at our disposal increases, the magnitude of the probability of the argument may either decrease or increase, according as the new knowledge strengthens the unfavourable or the favourable evidence; but something seems to have increased in either case,—we have a more substantial basis upon which to rest our conclusion. I express this by saying that an accession of new evidence increases the weight of an argument. New evidence will sometimes decrease the probability of an argument, but it will always increase its “weight”.

This idea of evidential weight has been applied by some legal scholars in assessing the sufficiency of evidence in satisfying legal standards of proof. [ 25 ] At its simplest, we may think of weight in the context of legal fact-finding as the amount of evidence before the court. Weight is distinguishable from probability. The weight of evidence may be high and the mathematical probability low, as in the situation where the prosecution adduces a great deal of evidence tending to incriminate the accused but the defence has an unshakeable alibi (Cohen 1986: 641). Conversely, the state of evidence adduced in a case might establish a sufficient degree of probability—high enough to cross the supposed threshold of proof on the mathematical conception of the standard of proof—and yet lack adequate weight. In the much-discussed gate-crasher’s paradox, the only available evidence shows that the defendant was one of a thousand spectators at a rodeo show and that only four hundred and ninety nine tickets were issued. The defendant is sued by the show organiser for gate-crashing. The mathematical probability that the defendant was a gate-crasher is 0.501 and this meets the probabilistic threshold for civil liability. But, according to the negation principle of mathematical probability, there is probability of 0.499 that the defendant did pay for his entrance. In these circumstances, it is intuitively unjust to find him liable (Cohen 1977: 75). A possible explanation for not finding him liable is that the evidence is too flimsy or of insufficient weight.
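
The arithmetic of the gate-crasher example, as a trivial sketch:

```python
spectators = 1000
tickets_issued = 499
p_gatecrashed = (spectators - tickets_issued) / spectators
p_paid = tickets_issued / spectators
print(p_gatecrashed)   # 0.501 -- crosses the civil threshold of 0.5
print(p_paid)          # 0.499 -- the probability that the defendant paid for entry
```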

Proponents of the mathematical conception of the standard of proof have stood their ground even while acknowledging that weight has a role to play in the Bayesian analysis of probative value and the sufficiency of evidence. If a party does not produce relevant evidence that is in his possession, resulting in the court facing an evidential deficiency, it may draw an adverse inference against him when computing the posterior probability (Kaye 1986b: 667; Friedman 1997). One criticism of this approach is that, in the absence of information about the missing evidence, the drawing of the adverse inference is open to the objection of arbitrariness (Nance 2008: 274). A further objection is that the management of parties’ conduct relating to evidence preservation and presentation should be left to judges and not to the jury. What a judge may do to optimize evidential weight is to impose a burden of producing evidence on a party and to make the party suffer an adverse finding of fact if he fails to produce the evidence. This will serve as an incentive for the party to act in a manner that promotes the interest in evidential completeness (Nance 2008, 2010, 2016).

Cohen suggests that the standard of proof should be conceived entirely as a matter of evidential weight which, on his theory, is a matter of the number of tests or challenges to which a factual hypothesis is subjected in court. He offers an account of legal fact-finding in terms of inductive probability, inspired by the work of writers such as Francis Bacon and J.S. Mill. Inductive probability operates differently from the classical calculus of probability. It is based on inductive support for the common-sense generalisation that licences the drawing of the relevant inference. Inductive support for a generalisation is graded according to the number of tests that it has passed, or, putting this in another way, by the degree of its resistance to falsification by relevant variables. The inductive probability of an argument is equal to the reliability grade of the inductive support for the generalisation which covers the argument.

Proof beyond reasonable doubt represents the maximum level of inductive probability. The prosecution may try to persuade the court to infer that the accused was guilty of burglary by producing evidence to establish that he was found in the vicinity of the victim’s house late at night with the stolen object on him. This inference is licensed by the generalisation that normally if a stranger is found immediately after a burglary in possession of the stolen object, he intentionally removed it himself. The defence may try to defeat the inference by showing that the generalisation does not apply in the particular case, for example, by presenting evidence to show that the accused had found the object on the street. The prosecution’s hypothesis is now challenged or put to the test. As a counter-move, it may produce evidence to establish that the object could not have been lying in the street as alleged. If the generalisations on which the prosecution’s case rests survive challenges by the defence at every possible point, then guilt is proved beyond reasonable doubt. [ 26 ] The same reasoning structure applies in the civil context except that in a civil case, the plaintiff succeeds in proof on the preponderance of evidence so long as the conclusion to be proved by him is more inductively probable than its negation. (Cohen 1977, 1986; cf. Schum 1979.) [ 27 ]

Cohen’s theory seems to require that each test to which a hypothesis is put can be unequivocally and objectively resolved. But usually this is not the case. In our example, we may not be entirely convinced that the accused found or did not find the object on the street, and our evaluation would involve the exercise of judgment that is no less subjective than the sort of judgments required when applying the standard probabilistic conception of proof (Nance 2008: 275–6; Schum 1994: 261).

  • Abimbola, A., 2001, “Abductive Reasoning in Law: Taxonomy and Inference to the Best Explanation”, Cardozo Law Review , 22: 1683–1689.
  • Aitken, C., P. Roberts, and G. Jackson, 2010, Fundamentals of Probability and Statistical Evidence in Criminal Proceedings: Guidance for Judges, Lawyers, Forensic Scientists and Expert Witnesses , London: Royal Statistical Society. [ Aitken, Roberts, and Jackson 2010 available online ]
  • Allen, R., 1986, “A Reconceptualization of Civil Trials”, Boston University Law Review , 66: 401–437.
  • –––, 1991, “The Nature of Juridical Proof”, Cardozo Law Review , 13: 373–422.
  • –––, 1992, “The Myth of Conditional Relevancy”, Loyola of Los Angeles Law Review , 25: 871–884.
  • –––, 1994, “Factual Ambiguity and a Theory of Evidence”, Northwestern University Law Review , 88: 604–640.
  • –––, 2020, “Naturalized Epistemology and the Law of Evidence Revisited”, Quaestio Facti: International Journal on Evidential Reasoning , 2: 1–32.
  • Allen, R. and S. Jehl, 2003, “Burdens of Persuasion in Civil Cases: Algorithms v. Explanations”, Michigan State Law Review , 4: 893–944.
  • Allen, R. and B. Leiter, 2001, “Naturalized Epistemology and the Law of Evidence”, Virginia Law Review , 87: 1491–1550.
  • Allen, R. and M. Pardo, 2007a, “The Problematic Value of Mathematical Models of Evidence”, Journal of Legal Studies , 36: 107–140.
  • –––, 2007b, “Probability, Explanation and Inference: a Reply”, International Journal of Evidence and Proof , 11: 307–317.
  • –––, 2019, “Relative Plausibility and its Critics”, International Journal of Evidence and Proof , 23: 5–59.
  • Allen, R. and P. Roberts (eds.), 2007, International Journal of Evidence and Proof (Special Issue on the Reference Class Problem), vol. 11, no. 4.
  • Allen, R. and A. Stein, 2013, “Evidence, Probability and the Burden of Proof”, Arizona Law Review , 55: 557–602.
  • Amaya, A., 2008, “Justification, Coherence, and Epistemic Responsibility in Legal Fact-finding”, Episteme , 5: 306–319.
  • –––, 2009, “Inference to the Best Explanation”, in Legal Evidence and Proof: Statistics, Stories and Logic , H. Kaptein, H. Prakken, and B. Verheij (eds.), Burlington: Ashgate, pp. 135–159.
  • –––, 2011, “Legal Justification by Optimal Coherence”, Ratio Juris , 24: 304–329.
  • –––, 2013, “Coherence, Evidence, and Legal Proof”, Legal Theory , 19: 1–43.
  • –––, 2015, The Tapestry of Reason: An Inquiry into the Nature of Coherence and its Role in Legal Argument , Oxford: Hart and Portland.
  • Anderson, T., D. Schum, and W. Twining, 2009, Analysis of Evidence , Cambridge: Cambridge University Press, 3rd edition.
  • Ball, V., 1980, “The Myth of Conditional Relevancy”, Georgia Law Review , 14: 435–469.
  • Bartels, R., 1981, “Punishment and the Burden of Proof in Criminal Cases: A Modest Proposal”, Iowa Law Review , 66: 899–930.
  • Beltrán, J., 2006, “Legal Proof and Fact Finders’ Beliefs”, Legal Theory , 12: 293–314.
  • Bentham, J., 1825, A Treatise on Judicial Evidence , M. Dumont (ed.), London: Paget.
  • –––, 1827, Rationale of Judicial Evidence, Specially Applied to English Practice , J. Mill (ed.), London: Hunt and Clarke.
  • Blackstone, W., 1770, Commentaries on the Laws of England , vol. 4, Dublin.
  • Blome-Tillmann, M., 2017, “‘More Likely Than Not’ – Knowledge First and the Role of Bare Statistical Evidence in Courts of Law”, in Knowledge First: Approaches in Epistemology and Mind , J. Carter, E. Gordon, and B. Jarvis (eds.), Oxford: Oxford University Press, pp. 278–292.
  • Buchak, L., 2014, “Belief, Credence, and Norms”, Philosophical Studies , 169: 285–311.
  • Callen, C., 1982, “Notes on a Grand Illusion: Some Limits on the Use of Bayesian Theory in Evidence Law”, Indiana Law Journal , 57: 1–44.
  • Cheng, E., 2009, “A Practical Solution to the Reference Class Problem”, Columbia Law Review , 109: 2081–2105.
  • –––, 2013, “Reconceptualising the Burden of Proof”, Yale Law Journal , 122: 1254–1279.
  • Cohen, L., 1977, The Probable and the Provable , Oxford: Oxford University Press.
  • –––, 1986, “The Role of Evidential Weight in Criminal Proof”, Boston University Law Review , 66: 635–649.
  • –––, 1991, “Should a Jury Say What It Believes or What It Accepts?”, Cardozo Law Review , 13: 465–483.
  • –––, 1992, An Essay on Belief and Acceptance , Oxford: Clarendon Press.
  • Colyvan, M., H. Regan, and S. Ferson, 2001, “Is it a Crime to Belong to a Reference Class?”, Journal of Political Philosophy , 9: 168–181.
  • Cullison, A., 1969, “Probability Analysis of Judicial Fact-finding: A Preliminary Outline of the Subjective Approach”, Toledo Law Review , 1: 538–598.
  • Damaška, M., 1973, “Evidentiary Barriers to Conviction and Two Models of Criminal Procedure: A Comparative Study”, University of Pennsylvania Law Review , 121: 506–589.
  • –––, 1975, “Presentation of Evidence and Factfinding Precision”, University of Pennsylvania Law Review , 123: 1083–1105.
  • –––, 1992, “Of Hearsay and Its Analogues”, Minnesota Law Review , 76: 425–458.
  • –––, 1994, “Propensity Evidence in Continental Legal Systems”, Chicago Kent Law Review , 70: 55–67.
  • –––, 1997, Evidence Law Adrift , New Haven: Yale University Press.
  • –––, 2006, “The Jury and the Law of Evidence: Real and Imagined Interconnections”, Law, Probability and Risk , 5: 255–265.
  • –––, 2019, Evaluation of Evidence: Pre-modern and Modern Approaches , Cambridge: Cambridge University Press.
  • Davis, D. and W. Follette, 2002, “Rethinking the Probative Value of Evidence: Base Rates, Intuitive Profiling and the ‘Postdiction’ of Behavior”, Law and Human Behavior , 26: 133–158.
  • –––, 2003, “Toward an Empirical Approach to Evidentiary Ruling”, Law and Human Behavior , 27: 661–684.
  • Dawid, P., W. Twining, and M. Vasilaki, 2011, Evidence, Inference and Enquiry , Oxford: Oxford University Press for the British Academy.
  • Duff, A., et al., 2007, The Trial on Trial (Volume 3: Towards a Normative Theory of the Criminal Trial), Oxford: Hart.
  • Dworkin, R., 1981, “Principle, Policy, Procedure”, in Crime, Proof and Punishment, Essays in Memory of Sir Rupert Cross , C. Tapper (ed.), London: Butterworths, pp. 193–225.
  • Eggleston, R., 1983, Evidence, Probability and Proof , London: Weidenfeld & Nicolson, 2nd edition.
  • Enoch, D., L. Spectre, and T. Fisher, 2012, “Statistical Evidence, Sensitivity, and the Legal Value of Knowledge”, Philosophy and Public Affairs , 40(3): 197–224.
  • Enoch, D. and L. Spectre, 2019, “Sensitivity, Safety, and the Law: a Reply to Pardo”, Legal Theory , 25: 178–199.
  • Enoch, D. and T. Fisher, 2015, “Sense and ‘Sensitivity’: Epistemic and Instrumental Approaches to Statistical Evidence”, Stanford Law Review , 67: 557–611.
  • Finkelstein, M. and B. Levin, 2003, “On the Probative Value of Evidence from a Screening Search”, Jurimetrics , 43: 265–290.
  • Franklin, J., 2010, “Feature Selection Methods for Solving the Reference Class Problem: Comment on Edward K. Cheng, ‘A Practical Solution to the Reference Class Problem’”, Columbia Law Review Sidebar , 110: 12–23.
  • –––, 2011, “The Objective Bayesian Conceptualisation of Proof and Reference Class Problems”, Sydney Law Review , 33: 545–561.
  • –––, 2012, “Discussion Paper: How much of Commonsense and Legal Reasoning is Formalizable? A Review of Conceptual Obstacles”, Law, Probability and Risk , 11: 225–245.
  • Friedman, R., 1986, “A Close Look at Probative Value”, Boston University Law Review , 33: 733–759.
  • –––, 1994, “Conditional Probative Value: Neoclassicism Without Myth”, Michigan Law Review , 93:439–484.
  • –––, 1997, “Dealing with Evidential Deficiency”, Cardozo Law Review , 18: 1961–1986.
  • –––, 2000, “A Presumption of Innocence, Not of Even Odds”, Stanford Law Review , 52:873–887.
  • –––, 2001, “‘E’ is for Eclectic: Multiple Perspectives on Evidence”, Virginia Law Review , 87: 2029–2054.
  • Friedman, R. and R. Park, 2003, “Sometimes What Everybody Thinks They Know Is True”, Law and Human Behavior , 27: 629–644.
  • Gardiner, G., 2019a, “The Reasonable and the Relevant: Legal Standards of Proof”, Philosophy and Public Affairs , 47: 288–318.
  • –––, 2019b, “Legal Burdens of Proof and Statistical Evidence”, in The Routledge Handbook of Applied Epistemology , in D. Coady and J. Chase (eds.), Oxford: Routledge.
  • –––, 2020, “Profiling and Proof: Are Statistics Safe?”, Philosophy , 95: 161–183.
  • –––, forthcoming, “Legal Evidence and Knowledge”, in M. Lasonen-Aarnio and C. Littlejohn (eds.), The Routledge Handbook of the Philosophy of Evidence , Oxford: Routledge.
  • Goldman, A., 1999, Knowledge in a Social World , Oxford: Oxford University Press.
  • –––, 2002, “Quasi-Objective Bayesianism and Legal Evidence”, Jurimetrics , 42: 237–260.
  • –––, 2005, “Legal Evidence” in The Blackwell Guide to the Philosophy of Law and Legal Theory , M. Goldring and W. Edmundson (eds.), Malden, MA: Blackwell, pp. 163–175.
  • Griffin, L., 2013, “Narrative, Truth, and Trial”, Georgetown Law Journal , 101: 281–335.
  • Haack, S., 1993, Evidence and Inquiry, Towards Reconstruction in Epistemology , Oxford: Blackwell.
  • –––, 2003, “Clues to the Puzzle of Scientific Evidence: a More-So Story” in S. Haack, Defending Science: Within Reasons , New York: Prometheus, pp. 57–91.
  • –––, 2004, “Epistemology Legalized: or, Truth, Justice and the American Way”, American Journal of Jurisprudence , 49: 43–61.
  • –––, 2008a, “Proving Causation: The Holism of Warrant and the Atomism of Daubert ”, Journal of Health and Biomedical Law , 4: 253–289.
  • –––, 2008b, “Warrant, Causation, and the Atomism of Evidence Law”, Episteme , 5: 253–266.
  • –––, 2009, Evidence and Inquiry: A Pragmatist Reconstruction of Epistemology , New York: Prometheus (expanded edition of Haack 1993).
  • –––, 2012, “The Embedded Epistemologist: Dispatches from the Legal Front”, Ratio Juris , 25: 206–235.
  • –––, 2014, “Legal Probabilism: An Epistemological Dissent” in S. Haack, Evidence Matters: Science, Proof, and Truth in the Law , Cambridge: Cambridge University Press, pp. 47–77.
  • Ho, H.L., 2003–2004, “The Legitimacy of Medieval Proof”, Journal of Law and Religion , 19: 259–298.
  • –––, 2008, A Philosophy of Evidence Law: Justice in the Search for Truth , Oxford: Oxford University Press.
  • Jackson, J. and S. Doran, 2010, “Evidence” in A Companion to Philosophy of Law and Legal Theory , 2 nd edition, D. Patterson (ed.), Malden, MA : Wiley-Blackwell, pp. 177–187.
  • James, G., 1941, “Relevancy, Probability and the Law”, California Law Review , 29: 689–705.
  • Josephson, J., 2001, “On the Proof Dynamics of Inference to the Best Explanation”, Cardozo Law Review 22: 1621–1643.
  • Kaplan, J., 1968, “Decision Theory and the Fact-finding Process”, Stanford Law Review , 20: 1065–1092.
  • Kaplow, L., 2012, “Burden of Proof”, Yale Law Journal , 121: 738–859.
  • Kaye, D., 1986a, “Quantifying Probative Value”, Boston University Law Review , 66: 761–766.
  • –––, 1986b, “Do We Need a Calculus of Weight to Understand Proof Beyond Reasonable Doubt?”, Boston University Law Review , 66: 657–672.
  • Kaye, D. and J. Koehler, 2003, “The Misquantification of Probative Value”, Law and Human Behavior , 27: 645–659.
  • Keynes, J., 1921, A Treatise on Probability , London: MacMillan.
  • Laudan, L., 2006, Truth, Error, and Criminal Law: An Essay in Legal Epistemology , Cambridge: Cambridge University Press.
  • –––, 2007, “Strange Bedfellows: Inference to the Best Explanation and the Criminal Standard of Proof”, International Journal of Evidence and Proof , 11: 292–306.
  • Laudan, L. and H. Saunders, 2009, “Re-Thinking the Criminal Standard of Proof: Seeking Consensus about the Utilities of Trial Outcomes”, International Commentary on Evidence , 7(2), article 1 (online journal).
  • Lawson, G., 2017, Evidence of the Law: Proving Legal Claims , Chicago: University of Chicago Press.
  • Leiter, B., 1997, “Why Even Good Philosophy of Science Would Not Make for Good Philosophy of Evidence”, Brigham Young University Law Review , 803–819.
  • Lempert, R., 1977, “Modeling Relevance”, Michigan Law Review , 75: 1021–1057.
  • Lillquist, E., 2002, “Recasting Reasonable Doubt: Decision Theory and the Virtues of Variability”, University of California Davies Law Review , 36: 85–197.
  • Littlejohn, C., 2020, “Truth, Knowledge, and the Standard of Proof in Criminal Law”, Synthese , 197: 5253–5286.
  • –––, 2021, “Justified Belief and Just Conviction” in The Social Epistemology of Legal Trials , Z. Hoskins and J. Robson (eds.), New York: Routledge, pp. 106–123.
  • MacCrimmon, M., 2001–2002, “What is ‘Common’ about Common Sense?: Cautionary Tales for Travelers Crossing Disciplinary Boundaries”, Cardozo Law Review , 22: 1433–1460.
  • McCormick, C., 2013, McCormick on Evidence , K. Broun et al. (eds.), St. Paul, Minnesota: Thomson Reuters/WestLaw, 7 th edition.
  • McNamara, P., 1986, “The Canons of Evidence: Rules of Exclusion or Rules of Use?”, Adelaide Law Review , 10: 341–364.
  • Mnookin, J., 2006, “Bifurcation and the Law of Evidence”, University of Pennsylvania Law Review PENNumbra , 155: 134–145.
  • –––, 2013, “Atomism, Holism, and the Judicial Assessment of Evidence”, University of California at Los Angeles Law Review , 60: 1524–1585.
  • Montrose, J., 1954, “Basic Concepts of the Law of Evidence”, Law Quarterly Review , 70: 527–555.
  • Morgan, E., 1929, “Functions of Judge and Jury in the Determination of Preliminary Questions of Fact”, Harvard Law Review , 43: 165–191.
  • –––, 1936–37, “The Jury and the Exclusionary Rules of Evidence”, University of Chicago Law Review , 4: 247–258.
  • Moss, S., 2018, Probabilistic Knowledge , Oxford: Oxford University Press.
  • –––, forthcoming, “Knowledge and Legal Proof” in Oxford Studies in Epistemology (Volume 7), T. Gendler and J. Hawthorne (eds.), Oxford: Oxford University Press.
  • Nance, D., 1988, “The Best Evidence Principle”, Iowa Law Review , 73: 227–297.
  • –––, 1990, “Conditional Relevance Reinterpreted”, Boston University Law Review , 70: 447–507.
  • –––, 2001, “Naturalized Epistemology and the Critique of Evidence Theory”, Virginia Law Review , 87: 1551–1618.
  • –––, 2007a, “Allocating the Risk of Error”, Legal Theory , 13: 129–164.
  • –––, 2007b, “The Reference Class Problem and Mathematical Models of Inference”, International Journal of Evidence and Proof , 11: 259–273.
  • –––, 2008, “The Weights of Evidence”, Episteme , 5: 267–281.
  • –––, 2010, “Adverse Inferences About Adverse Inferences: Restructuring Juridical Roles for Responding to Evidence Tampering by Parties to Litigation”, Boston University Law Review , 90: 1089–1146.
  • –––, 2016, The Burdens of Proof – Discriminatory Power, Weight of Evidence and Tenacity of Belief , Cambridge: Cambridge University Press.
  • Nance, D. and S. Morris, 2002, “An Empirical Assessment of Presentation Formats for Trace Evidence with a Relatively Large and Quantifiable Random Match Probability”, Jurimetrics , 42: 403–447.
  • Nelkin, D., 2021, “Rational Belief and Statistical Evidence — Blame, Bias and the Law” in Lotteries, Knowledge, and Rational Belief , I. Douven (ed.), Cambridge: Cambridge University Press, pp. 6–27.
  • Nesson, C., 1979, “Reasonable Doubt and Permissive Inferences: the Value of Complexity”, Harvard Law Review , 92: 1187–1225.
  • –––, 1985, “The Evidence or the Event? On Judicial Proof and the Acceptability of Verdicts”, Harvard Law Review , 98: 1357–1392.
  • Pardo, M., 2000, “Juridical Proof, Evidence, and Pragmatic Meaning: Toward Evidentiary Holism”, Northwestern University Law Review , 95: 399–442.
  • –––, 2005, “The Field of Evidence and the Field of Knowledge”, Law and Philosophy , 24: 321–392.
  • –––, 2007, “The Political Morality of Evidence Law”, International Commentary on Evidence , 5(2), essay 1 (online journal).
  • –––, 2010, “The Gettier Problem and Legal Proof”, Legal Theory , 16: 37–57.
  • –––, 2013, “The Nature and Purpose of Evidence Theory”, Vanderbilt Law Review , 66: 547–613.
  • –––, 2018, “Safety vs. Sensitivity: Possible Worlds and the Law of Evidence”, Legal Theory , 24: 50–75.
  • Pardo, M.S. and R.J. Allen, 2008, “Juridical Proof and the Best Explanation”, Law and Philosophy , 27: 223–268.
  • Park, R., 1986, “The Hearsay Rule and the Stability of Verdicts: A Response to Professor Nesson”, Minnesota Law Review , 70: 1057–1072.
  • Park, R. et al., 2010, “Bayes Wars Redivivus: An Exchange”, International Commentary on Evidence , 8(1), article 1 (online journal).
  • Pattenden, R., 1996–7, “The Discretionary Exclusion of Relevant Evidence in English Civil Proceedings”, International Journal of Evidence and Proof , 1: 361–385.
  • Pennington, N. and R. Hastie, 1991, “A Cognitive Model of Juror Decision Making: The Story Model”, Cardozo Law Review , 13: 519–557.
  • –––, 1993, “The Story Model for Juror Decision-making” in Inside the Juror: The Psychology of Juror Decision Making , R. Hastie (ed.), Cambridge: Cambridge University Press, pp. 192–221.
  • Picinali, F., 2013, “Two Meanings of ‘Reasonableness’: Dispelling the ‘Floating’ Reasonable Doubt”, Modern Law Review , 76: 845–875.
  • Pollock, F., 1876, “Stephen’s Digest of the Law of Evidence”, The Forthnightly Review , 20: 383–394.
  • –––, 1899, “Review of A Preliminary Treatise on Evidence at the Common Law by James Bradley Thayer, Law Quarterly Review ”, 15: 86–87.
  • Posner, R., 1999, “An Economic Approach to the Law of Evidence”, Stanford Law Review , 51: 1477–1546.
  • Pritchard, D., 2015, “Risk”, Metaphilosophy , 46: 436–461.
  • –––, 2018, “Legal Risk, Legal Evidence and the Arithmetic of Criminal Justice”, Jurisprudence , 9: 108–119.
  • Rescher, N. and C. Joynt, 1959, “Evidence in History and in the Law”, Journal of Philosophy , 56: 561–578.
  • Redmayne, M., 1996, “Standards of Proof in Civil Litigation”, Modern Law Review , 62: 167–195.
  • –––, 2006, “The Structure of Evidence Law”, Oxford Journal of Legal Studies , 26: 805–822.
  • –––, 2008, “Exploring the Proof Paradoxes”, Legal Theory , 14: 281–309.
  • Ribeiro, G., 2019, “The Case for Varying Standards of Proof”, San Diego Law Review , 56: 161–219.
  • Roberts, P. and C. Aitken, 2014, The Logic of Forensic Proof: Inferential Reasoning in Criminal Evidence and Forensic Science , London: Royal Statistical Society. [ Roberts and Aitken 2014 available online ]
  • Roberts, P. and A. Zuckerman, 2010, Criminal Evidence , Oxford: Oxford University Press, 2 nd edition.
  • Robertson, B. and G. Vignaux, 1995, Interpreting Evidence: Evaluating Forensic Science in the Courtroom , Chichester: John Wiley.
  • Schauer, F., 2006, “On the Supposed Jury-Dependence of Evidence Law”, University of Pennsylvania Law Review , 155: 165–202.
  • –––, 2008, “In Defense of Rule-Based Evidence Law: And Epistemology Too”, Episteme 5: 295–305.
  • Schum, D., 1979, “A Review of a Case Against Blaise Pascal and His Heirs”, Michigan Law Review , 77:446–483.
  • –––, 1994, The Evidential Foundations of Probabilistic Reasoning , New York: John Wiley & Sons.
  • –––, 1998, “Legal Evidence and Inference” in Routledge Encyclopedia of Philosophy , E. Craig (ed.), London: Routledge, pp. 500–506.
  • –––, 2001, “Species of Abductive Reasoning in Fact Investigation in Law”, Cardozo Law Review , 22:1645–1681.
  • Simon, D., 2004, “A Third View of the Black Box: Cognitive Coherence in Legal Decision Making”, University of Chicago Law Review , 71: 511–586.
  • –––, 2011, “Limited Diagnosticity of Criminal Trials”, Vanderbilt Law Review , 64: 143–223.
  • Smith, M., 2018, “When Does Evidence Suffice for Conviction?”, Mind , 127: 1193–1218.
  • Stein, A., 2005, Foundations of Evidence Law , Oxford: Oxford University Press.
  • Stephen, J., 1872, The Indian Evidence Act, with an Introduction on the Principles of Judicial Evidence , Calcutta: Thacker, Spink & Co.
  • –––, 1886, A Digest of the Law of Evidence , London: William Clowes & Sons, 5 th edition.
  • Sullivan, S., 2019, “A Likelihood Story: The Theory of Legal Fact-finding”, University of Colorado Law Review , 90: 1–66.
  • Thayer, J., 1898, A Preliminary Treatise on Evidence at the Common Law , Boston: Little, Brown & Co.
  • Thomson, J., 1986, “Liability and Individualized Evidence”, Law and Contemporary Problems , 49(3): 199–219.
  • Tillers, P., 2005, “If Wishes were Horses: Discursive Comments on Attempts to Prevent Individuals from Being Unfairly Burdened by their Reference Classes”, Law, Probability and Risk , 4: 33–39.
  • –––, 2008, “Are there Universal Principles or Forms of Evidential Inference? Of Inference Networks and Onto-Epistemology” in Crime, Procedure and Evidence in a Comparative and International Context , J. Jackson, M. Langer, and P. Tillers (eds.), Oxford: Hart, pp. 179–198.
  • Tillers, P. and E. Green (eds.), 1988, Probability and Inference in the Law of Evidence: The Limits and Uses of Bayesianism , Dordrecht: Kluwer.
  • Trautman, H., 1952, “Logical or Legal Relevancy: A Conflict in Theory”, Vanderbilt Law Review , 5: 385–413.
  • Tribe, L., 1971, “Trial by Mathematics: Precision and Ritual in the Legal Process”, Harvard Law Review , 84: 1329–1393
  • Twining, W., 1985, Theories of Evidence: Bentham and Wigmore , London: Weidenfeld and Nicolson.
  • –––, 2006, Rethinking Evidence: Exploratory Essays , Cambridge: Cambridge University Press, 2 nd edition.
  • Twining, W. and I. Hampsher-Monk, 2003, Evidence and Inference in History and Law: Interdisciplinary Dialogues , Evanston, Illinois: Northwestern University Press.
  • Whitworth, G., 1881, The Theory of Relevancy for the Purpose of Judicial Evidence , Bombay: Thacker & Co.
  • Wigmore, J., 1913, “Review of A Treatise on Facts, or the Weight and Value of Evidence by Charles C. Moore”, Illinois Law Review , 3: 477–478.
  • –––, 1935, A Students’ Textbook of the Law of Evidence , Brooklyn: Foundation Press.
  • –––, 1937, Science of Judicial Proof, as Given by Logic, Psychology, and General Experience and Illustrated in Judicial Trials , Boston: Little, Brown and Co.
  • –––, 1983a, Evidence in Trials at Common Law , vol. 1, P. Tillers (ed.), Boston: Little, Brown and Co.
  • –––, 1983b, Evidence in Trials at Common Law , vol. 1A, P. Tillers (ed.), Boston: Little, Brown and Co.
  • Wills, W., 1852, An Essay on the Principles of Circumstantial Evidence , Philadelphia: T & J W Johnson, reprint from the third London edition.

Use of Forensic Evidence in Trial

The article explores the pivotal role of forensic evidence in the US criminal justice process, shedding light on its varied forms and indispensable contributions to criminal trials. Beginning with a comprehensive introduction, the discussion defines forensic evidence and underscores its significance. The first section delves into the intricacies of DNA evidence, tracing its historical evolution, contemporary relevance, and the legal standards dictating its admissibility. The subsequent segment focuses on fingerprint analysis, elucidating the principles, challenges, and courtroom admissibility. The third section investigates trace evidence, detailing its definition, analysis techniques, and substantiating its importance through pertinent case studies. A critical examination of the expert witnesses who present forensic evidence follows, emphasizing their qualifications, roles, and the legal standards governing their testimony. The challenges and controversies surrounding forensic evidence are then scrutinized, encompassing reliability issues, emerging technologies, and critiques from legal scholars. The conclusion consolidates key insights, reaffirming the indispensable nature of forensic evidence in trials while addressing the need for ongoing improvements and reforms in this ever-evolving field.

Introduction

Forensic evidence stands as a cornerstone within the intricate tapestry of the United States criminal justice system, playing a pivotal role in unraveling the complexities of criminal trials. At its essence, forensic evidence refers to the application of scientific methods and techniques to the investigation and resolution of legal issues, particularly those related to criminal activities. Its significance cannot be overstated, as it serves as a powerful tool for establishing facts, connecting individuals to crimes, and ultimately contributing to the dispensation of justice. This article embarks on an in-depth exploration of forensic evidence within the context of criminal trials, aiming to provide a comprehensive understanding of its various forms, applications, and the profound impact it has on the adjudication of legal matters.

Forensic evidence encompasses a wide array of scientific disciplines, including but not limited to DNA analysis, fingerprint examination, ballistics, toxicology, and trace evidence analysis. Each branch of forensic science contributes uniquely to the investigative process, employing specialized methodologies to extract information from physical materials found at crime scenes. The meticulous examination of biological samples, physical impressions, and chemical substances enables forensic experts to draw connections between individuals, objects, and events, facilitating the reconstruction of the circumstances surrounding a crime. This definition underscores the interdisciplinary nature of forensic evidence, emphasizing its reliance on scientific principles to uncover truths that may otherwise remain concealed within the complexities of criminal cases.

The importance of forensic evidence in criminal trials cannot be overstated, as it serves as a linchpin in establishing guilt or innocence, corroborating or challenging witness testimony, and providing a factual foundation upon which legal arguments are built. In an era where the criminal justice system strives for accuracy and fairness, forensic evidence acts as a reliable and objective source of information, contributing to the pursuit of truth within the confines of the courtroom. The ability of forensic evidence to withstand scrutiny and provide empirical support for legal arguments enhances its credibility, making it an invaluable component of the justice-seeking process. Its role extends beyond mere conviction or exoneration; it plays a vital role in ensuring the integrity of the judicial system and bolstering public trust in the pursuit of justice.

Within the broader context of the criminal justice process, forensic evidence operates at multiple stages, from the initial crime scene investigation to the presentation of findings in the courtroom. Crime scene technicians and forensic experts collaborate to collect, preserve, and analyze evidence, adhering to rigorous protocols to maintain its integrity. Subsequently, the results of forensic analyses become essential components of the investigative dossier presented to prosecutors, defense attorneys, and, ultimately, the courts. The admissibility and weight given to forensic evidence vary, necessitating a nuanced understanding of legal standards and scientific principles. This article will delve into the specific types of forensic evidence, their methodologies, and the challenges associated with their presentation in court, providing a holistic view of the intricate interplay between science and law within the criminal justice system.

Types of Forensic Evidence

Forensic evidence encompasses a diverse array of scientific methodologies, each playing a unique role in unraveling the intricacies of criminal investigations. This section delves into three prominent types of forensic evidence: DNA evidence, fingerprint analysis, and trace evidence, elucidating their historical underpinnings, contemporary significance, and the intricate interplay between scientific rigor and legal admissibility.

The advent of DNA evidence marks a watershed moment in forensic science, revolutionizing the landscape of criminal investigations. Originating from the pioneering work of scientists such as Alec Jeffreys in the 1980s, DNA profiling rapidly evolved as a powerful tool for identifying individuals based on their unique genetic code. Milestones include the first successful use of DNA evidence in a criminal trial in the late 1980s, highlighting its potential to exonerate the innocent and implicate the guilty. The Human Genome Project further accelerated advancements, enhancing the precision and reliability of DNA analysis techniques.

In the contemporary criminal justice landscape, DNA evidence stands as the gold standard for linking individuals to crime scenes. Its unparalleled discriminatory power enables the identification of individuals with an exceptionally high degree of certainty. From sexual assault cases to cold case investigations, DNA analysis has played a pivotal role in solving crimes and establishing connections between suspects and evidence. Its use extends beyond individual identification, encompassing familial relationships and ancestry tracing, further expanding its utility in criminal investigations.

The acceptance of DNA evidence in courts is contingent upon adherence to stringent legal standards. The Frye and Daubert standards serve as benchmarks for the admissibility of scientific evidence, necessitating that techniques be scientifically valid and reliably applied. The legal landscape surrounding DNA evidence has evolved, with courts recognizing its reliability while simultaneously addressing issues such as contamination, lab protocols, and the statistical interpretation of results. Understanding these legal standards is essential for both the prosecution and the defense to ensure the fair and just presentation of DNA evidence in the courtroom.
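
To make "statistical interpretation of results" concrete, the short Python sketch below shows how a reported random match probability is typically combined with prior odds via Bayes' rule to give posterior odds. All figures are invented for illustration and are not drawn from any case or laboratory protocol; the point is only that the match probability, on its own, is not the probability that the defendant is the source, and quoting it as though it were invites the so-called prosecutor's fallacy.

```python
# Illustrative sketch only: all numbers are hypothetical, not from any case.

def posterior_odds(prior_odds: float, random_match_probability: float) -> float:
    """Combine prior odds with a reported DNA match using Bayes' rule.

    Assuming no laboratory error, the likelihood ratio for a match is
    roughly 1 / RMP, i.e. P(match | same source) / P(match | different source).
    """
    likelihood_ratio = 1.0 / random_match_probability
    return prior_odds * likelihood_ratio

prior = 1 / 10_000   # suspect is one of ~10,000 plausible sources beforehand
rmp = 1e-6           # reported random match probability (hypothetical)

odds = posterior_odds(prior, rmp)
probability = odds / (1 + odds)
print(f"Posterior odds: {odds:.0f} to 1, probability = {probability:.3f}")
# About 100 to 1, i.e. roughly 0.99, which is far from the "1 in a million"
# certainty a bare match probability might seem to suggest.
```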

Fingerprint analysis, a classic forensic technique, relies on the uniqueness and persistence of friction ridge skin patterns. The principles of fingerprint identification rest on the idea that no two individuals share identical ridge patterns, making fingerprints a reliable means of personal identification. Henry Faulds and Sir Francis Galton made pioneering contributions in the late 19th century, laying the groundwork for modern fingerprint analysis. The Automated Fingerprint Identification System (AFIS) introduced in the late 20th century marked a significant technological leap, enhancing the efficiency of fingerprint matching.

While fingerprint analysis is a powerful forensic tool, it is not without limitations and challenges. Variability in print quality, distortion due to pressure and surface conditions, and the potential for human error in matching prints all pose challenges to the accuracy of fingerprint analysis. Moreover, the subjective nature of latent print examination has raised concerns about the potential for bias and the need for standardized protocols to mitigate these issues. Understanding these limitations is crucial for both forensic practitioners and the legal system in evaluating the probative value of fingerprint evidence.

The admissibility of fingerprint evidence in court is contingent upon demonstrating the reliability and validity of the identification process. Courts generally admit fingerprint evidence based on the expert testimony of qualified examiners who can articulate the methodology, procedures, and conclusions reached. Challenges to the admissibility of fingerprint evidence often revolve around issues of reliability, sufficiency of methodology, and potential examiner bias. Legal standards, such as those established in the Daubert ruling, play a critical role in determining the admissibility of fingerprint evidence, ensuring that it meets the requisite scientific and legal standards.

Trace evidence involves the analysis of small fragments or residues that can provide crucial links between individuals and crime scenes. Examples include fibers, hair, glass, paint, and gunshot residue. Despite their minuscule size, these traces can yield significant information about the events surrounding a crime, contributing to the establishment of connections between suspects, victims, and crime scenes.

Analyzing trace evidence requires advanced techniques and technologies, such as scanning electron microscopy, infrared spectroscopy, and mass spectrometry. These methods allow forensic scientists to characterize and compare trace materials with a high degree of precision. The development of increasingly sophisticated analytical tools has expanded the range of trace evidence that can be effectively analyzed, enhancing its relevance in criminal investigations.
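
As a deliberately simplified illustration of what it means to "characterize and compare trace materials" computationally, the Python sketch below scores the similarity of two synthetic spectra with a cosine measure. The spectra, peak positions, and the notion of a single similarity score are assumptions made for the example; real casework relies on validated instrument software and statistically grounded interpretation, not a lone number.

```python
# Simplified illustration: synthetic "spectra" standing in for real measurements.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D signals (1.0 = identical shape)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

wavenumbers = np.linspace(600, 3500, 500)

def fake_spectrum(peaks):
    """Sum of Gaussian peaks used here as a stand-in for an absorbance spectrum."""
    return sum(h * np.exp(-((wavenumbers - c) ** 2) / (2 * w ** 2))
               for c, h, w in peaks)

questioned = fake_spectrum([(1730, 1.0, 20), (2900, 0.6, 40)])
reference = fake_spectrum([(1732, 0.9, 22), (2895, 0.65, 38)])

score = cosine_similarity(questioned, reference)
print(f"Similarity score: {score:.3f}")  # close to 1.0 for similar materials
```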

The importance of trace evidence is exemplified by numerous case studies where seemingly insignificant fragments played a decisive role in solving crimes. From the fibers connecting a suspect to a crime scene to the paint chips identifying a vehicle’s involvement, trace evidence has been instrumental in providing crucial leads and supporting other forms of forensic evidence. These case studies underscore the invaluable role that trace evidence plays in enhancing the overall evidentiary mosaic presented in criminal trials.

This comprehensive exploration of DNA evidence, fingerprint analysis, and trace evidence provides a nuanced understanding of the diverse techniques employed in forensic investigations. The subsequent sections will delve into the complexities of expert testimony and the challenges associated with presenting forensic evidence in the courtroom, further illuminating the intricate intersection of science and law within the criminal justice system.

Expert Witnesses and the Presentation of Forensic Evidence

In the adjudication of criminal trials, the role of expert witnesses is integral to the effective presentation and interpretation of forensic evidence. This section delves into the qualifications and multifaceted roles of forensic experts, the nuances of expert testimony and cross-examination, and the intricate dynamics of how jurors perceive and process complex scientific information.

Forensic experts are individuals with specialized knowledge and training in scientific disciplines relevant to criminal investigations. Typically possessing advanced degrees in fields such as forensic science, biology, chemistry, or anthropology, these experts undergo rigorous training to develop the necessary skills for analyzing and interpreting evidence. Educational background often extends beyond formal degrees to include specialized workshops, seminars, and ongoing professional development to stay abreast of evolving scientific methodologies.

Certification and accreditation serve as benchmarks for the competency and reliability of forensic experts. Professional organizations and accrediting bodies, such as the American Board of Criminalistics or the International Association for Identification, establish standards for proficiency and ethical conduct. Forensic experts often seek certification in their respective disciplines, providing a tangible demonstration of their commitment to maintaining high professional standards.

The ethical considerations surrounding forensic experts encompass issues of impartiality, transparency, and the avoidance of conflicts of interest. Forensic experts are duty-bound to conduct their analyses objectively, without bias toward either the prosecution or the defense. Adhering to ethical guidelines is crucial to maintaining the integrity of the forensic process and ensuring that expert testimony is grounded in scientific principles rather than advocacy.

The presentation of expert testimony is a meticulous process that requires not only scientific expertise but also effective communication skills. Forensic experts must distill complex scientific concepts into accessible language for the court, legal professionals, and, most importantly, the jury. Preparation involves reviewing case materials, formulating clear and concise explanations, and anticipating potential challenges during cross-examination.

Cross-examination is a crucial phase where the validity and reliability of expert testimony may face challenges from opposing counsel. Forensic experts must navigate questions that probe the limitations of their analyses, the potential for human error, or alternative interpretations of the evidence. Challenges may arise regarding the methodology employed, the accuracy of results, or the expert’s qualifications. The ability of forensic experts to withstand cross-examination without compromising the integrity of their analyses is a key determinant of the evidentiary weight assigned to their testimony.

The admissibility of expert testimony is subject to judicial gatekeeping, where the court evaluates the reliability and relevance of the proposed testimony. The Daubert standard, derived from the landmark case Daubert v. Merrell Dow Pharmaceuticals, outlines criteria for admitting scientific evidence. Courts assess the methodology’s scientific validity, peer review, error rates, and general acceptance within the relevant scientific community. Meeting the Daubert standard is imperative for forensic experts, as it establishes the scientific foundation of their testimony and ensures the court receives reliable and relevant information.

Forensic evidence, often rooted in complex scientific principles, poses challenges for juror comprehension. Jurors, who may lack scientific backgrounds, must grapple with technical terms, methodologies, and statistical probabilities. Effective communication from forensic experts is paramount, requiring the ability to convey complex information in a manner that is accessible and comprehensible to a lay audience.

Forensic experts walk a fine line in conveying the certainty of their findings while acknowledging the inherent uncertainties in scientific analyses. Communicating the limitations of methodologies, the potential for false positives or negatives, and the impact of contextual factors is essential for fostering a nuanced understanding among jurors. Striking this balance is crucial for preventing the misinterpretation of forensic evidence and ensuring that jurors make informed decisions.
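
One way experts make "the potential for false positives" tangible for jurors is to translate an error rate into expected counts. The sketch below uses invented numbers to show that even a tiny per-comparison false positive rate yields several expected coincidental hits when a profile or print is trawled against a large database, which is exactly the kind of base-rate context a lay audience needs.

```python
# Hypothetical base-rate illustration: every figure below is invented.
database_size = 5_000_000       # number of profiles or prints searched
false_positive_rate = 1e-6      # chance a non-source is wrongly reported as a match

expected_false_hits = database_size * false_positive_rate
print(f"Expected coincidental matches in the trawl: {expected_false_hits:.1f}")
# About 5.0, so "a match was found somewhere in the database" carries far less
# weight, on its own, than a match to a suspect identified independently.
```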

Juror decision-making is influenced by various psychological factors, including cognitive biases, preconceptions, and the weight assigned to expert testimony. The “CSI effect,” for instance, describes the phenomenon where jurors, influenced by crime dramas, may have unrealistic expectations of forensic evidence. Understanding these psychological dynamics is crucial for both forensic experts and legal practitioners, as it informs strategies for presenting evidence effectively and mitigating the impact of cognitive biases on the evaluation of forensic testimony.

This exploration of expert witnesses and the presentation of forensic evidence underscores the multidimensional nature of their roles within the criminal justice system. The subsequent section will delve into the challenges and controversies surrounding the use of forensic evidence, shedding light on issues such as reliability, emerging technologies, and critiques from legal scholars.

Challenges and Controversies Surrounding the Use of Forensic Evidence

The application of forensic evidence in the criminal justice system is not without its challenges and controversies. This section explores the reliability and potential errors associated with forensic analyses, the impact of emerging technologies, and critiques from legal scholars and advocacy groups that have sparked calls for reforms and improvements.

Despite advancements in forensic science, the margin of error remains a critical concern. Forensic analyses, conducted by human examiners and influenced by subjective interpretations, are inherently susceptible to errors. DNA analysis, for instance, though highly accurate, is not infallible, and issues such as contamination, sample degradation, or misinterpretation of results can introduce errors. Addressing the margin of error requires ongoing efforts to standardize procedures, enhance training programs, and implement quality control measures to minimize the likelihood of inaccuracies.
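
Because the "margin of error" in this context is usually estimated from a limited number of proficiency tests, the estimate itself carries statistical uncertainty. As a purely illustrative sketch (the counts are invented, and no accrediting body prescribes this particular formula), the Python snippet below computes a 95% Wilson score interval for an error rate.

```python
# Illustrative only: the proficiency-test counts below are invented.
import math

def wilson_interval(errors: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for an error proportion."""
    p = errors / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return max(0.0, centre - half), min(1.0, centre + half)

errors, trials = 2, 150   # e.g. 2 erroneous calls in 150 proficiency tests
low, high = wilson_interval(errors, trials)
print(f"Estimated error rate: {errors/trials:.2%} "
      f"(95% interval roughly {low:.2%} to {high:.2%})")
# Small samples give wide intervals, which is why one-off proficiency results
# are a weak basis for quoting a definitive "error rate" in court.
```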

Several high-profile cases have shed light on instances where forensic evidence led to erroneous conclusions. The Innocence Project, among other advocacy organizations, has worked to exonerate individuals wrongfully convicted based on faulty forensic evidence. These cases underscore the importance of scrutinizing forensic methodologies, ensuring the accuracy of analyses, and revisiting convictions when errors come to light. Learning from these instances is crucial for refining forensic practices and mitigating the risk of miscarriages of justice.

The landscape of forensic science is continually evolving with technological advancements presenting both opportunities and challenges. Emerging technologies, such as next-generation sequencing in DNA analysis, advanced imaging techniques in fingerprint analysis, and artificial intelligence applications in pattern recognition, offer unprecedented capabilities for forensic investigations. These technologies can enhance the precision and efficiency of analyses, providing valuable insights. However, their integration into forensic practices necessitates careful consideration of validation, standardization, and potential biases to ensure their reliability and admissibility in court.

The rapid pace of technological innovation in forensic science raises ethical considerations and legal implications. Questions about the ethical use of emerging technologies, the potential for biases in algorithmic analyses, and the privacy implications of genetic databases pose challenges for forensic practitioners and legal professionals alike. Striking a balance between harnessing the benefits of technological advancements and safeguarding individual rights is essential to navigating the ethical and legal landscape surrounding the integration of emerging technologies in forensic investigations.

Forensic evidence has faced scrutiny from legal scholars and advocacy groups, prompting a critical examination of its role within the criminal justice system. Criticisms often center on issues such as the lack of standardized protocols, the potential for examiner bias, and the overreliance on certain forensic methods. Concerns about the subjective nature of latent fingerprint analysis, the lack of uniform proficiency testing, and the need for transparent validation studies have been articulated by scholars and organizations advocating for improvements in forensic practices.

In response to these critiques, there have been calls for comprehensive reforms and improvements in the use of forensic evidence. Proposals include the establishment of national standards, increased transparency in forensic practices, and the implementation of independent oversight mechanisms. The National Academy of Sciences’ landmark report on forensic science underscored the need for systematic reform, advocating for research to strengthen the scientific foundation of forensic disciplines and the creation of an independent federal entity to oversee forensic practices.

As forensic science continues to grapple with challenges and controversies, addressing these concerns becomes imperative to uphold the integrity of the criminal justice system. The subsequent section will conclude the article by summarizing key insights, reaffirming the significance of forensic evidence, and exploring potential future directions and considerations for enhancing its role in criminal trials.

Forensic evidence, as explored throughout this comprehensive article, occupies a central and indispensable role within the United States criminal justice system. As we conclude this discussion, it is crucial to recap the significance of forensic evidence, reflect on the delicate balance between its benefits and challenges, and consider future directions for enhancing its role in the pursuit of justice.

Forensic evidence stands as a linchpin in criminal investigations and trials, offering unparalleled insights into the circumstances surrounding criminal activities. From DNA analysis and fingerprint examination to the nuanced world of trace evidence, each forensic discipline contributes uniquely to the assembly of a comprehensive evidentiary mosaic. The historical milestones and advancements in forensic science underscore its transformative impact on the pursuit of truth, aiding both the prosecution and the defense in establishing or challenging the narratives presented in the courtroom.

The utilization of forensic evidence, while undeniably powerful, is not without its challenges. The reliability of analyses, the potential for errors, and the subjective nature of some forensic methods necessitate ongoing scrutiny and refinement. Recent cases highlighting forensic errors have underscored the importance of vigilance in ensuring the accuracy of analyses and the need for continuous improvement in forensic practices. Striking a balance between harnessing the benefits of forensic evidence and addressing its inherent challenges is crucial for maintaining public trust and upholding the principles of justice.

Looking ahead, there are several considerations and potential future directions that can enhance the role of forensic evidence in criminal trials. Firstly, the integration of emerging technologies, such as advanced DNA sequencing and artificial intelligence, holds promise for improving the precision and efficiency of forensic analyses. However, careful validation, standardization, and ethical considerations must accompany the adoption of these technologies.

Moreover, addressing the critiques and concerns raised by legal scholars and advocacy groups requires a commitment to ongoing reforms. National standards, increased transparency, and the establishment of independent oversight mechanisms can contribute to the enhancement of forensic practices. The development of uniform proficiency testing, the conduct of transparent validation studies, and the promotion of interdisciplinary collaboration between forensic practitioners and researchers are pivotal for advancing the reliability and admissibility of forensic evidence.

In conclusion, forensic evidence remains an invaluable asset in the pursuit of justice, providing a scientific lens through which the complexities of criminal investigations are deciphered. The challenges and controversies surrounding its use underscore the need for a commitment to continuous improvement, ethical conduct, and adherence to rigorous standards. By navigating these challenges and embracing advancements in forensic science, the criminal justice system can ensure that forensic evidence continues to play a pivotal and reliable role in the quest for truth and justice.

Bibliography

  • Cole, S. A. (2015). “Forensic Science and Wrongful Convictions: From Exposer to Contributor.” American Journal of Public Health, 105(2), 203–209.
  • Dror, I. E., Charlton, D., & Péron, A. E. (2006). “Contextual information renders experts vulnerable to making erroneous identifications.” Forensic Science International, 156(1), 74–78.
  • Jasanoff, S. (2016). “Rethinking Forensic Identification.” Law, Culture and the Humanities, 12(2), 276–290.
  • Kafadar, K., & Evett, I. W. (2018). “Forensic identification evidence: assessing the probative value of DNA evidence.” Law, Probability and Risk, 17(2–3), 149–172.
  • Koehler, J. J. (2002). “Forensic Science: Under the Microscope.” American Psychologist, 57(5), 402–404.
  • Mnookin, J. L., Cole, S. A., Dror, I. E., Fisher, B. A., Houck, M. M., Inman, K. E., … & Risinger, D. M. (2011). “The need for a research culture in the forensic sciences.” UCLA Law Review, 58, 725.
  • National Institute of Justice. (2016). “Strengthening the National Forensic Science System: A Path Forward.” U.S. Department of Justice.
  • National Research Council. (2009). “Strengthening Forensic Science in the United States: A Path Forward.” National Academies Press.
  • Neumann, C., & Evett, I. (2001). “The nature and probative value of fingerprint evidence: some preliminary observations.” Forensic Science International, 128(3), 205–208.
  • Peterson, J. L., Somers, M. M., & Swofford, H. J. (2017). “Forensic Science Reform: Protecting the Innocent.” Taylor & Francis.
  • Pyrek, K. M. (2007). “Forensic Science Under Siege: The Challenges of Forensic Laboratories and the Medico-Legal Death Investigation System.” Academic Press.
  • Risinger, D. M., Saks, M. J., Thompson, W. C., & Rosenthal, R. (2002). “The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion.” California Law Review, 90(1), 1–56.
  • Saks, M. J., & Koehler, J. J. (2005). “The Coming Paradigm Shift in Forensic Identification Science.” Science, 309(5736), 892–895.
  • Stoney, D. A. (2001). “Forensic Science: A Societal Impact.” Science & Justice, 41(2), 71–74.
  • Thompson, W. C., & Schumann, E. L. (2013). “The New Wigmore: A Treatise on Evidence – Expert Evidence” (3rd ed.). Aspen Publishers.
Rethinking Presentations in Science and Engineering (Michael Alley, Penn State)

The assertion-evidence approach calls on you to build your talks on messages, not topics. In this approach, you support those messages with visual evidence, not bulleted lists. Moreover, to explain that evidence, you fashion sentences on the spot. Although it requires more effort, this assertion-evidence approach leads to higher understanding by the audience.



Christine Haas, a professional presentations instructor, discusses how to incorporate your own presentation into an assertion-evidence template.

Hannah Salas, an undergraduate mechanical engineering student from the University of Nevada, Las Vegas, summarizes her NSF Research Experience for Undergraduates (REU), which took place at Penn State.



46 CFR § 201.131 - Presentation of evidence.

(a) Testimony. Where appropriate, the Presiding officer may direct that the testimony of witnesses be prepared in written exhibit form and shall be served at designated dates in advance of the hearing. Evidence as to events occurring after the exhibit-exchange dates shall be presented by a revision of exhibits. Witnesses sponsoring exhibits shall be made available for cross-examination. However, unless authorized by the presiding officer, witnesses will not be permitted to read prepared testimony into the record. The evidentiary record shall be limited to factual and expert opinion testimony. Argument will not be received in evidence but rather should be presented in opening and/or closing statements of counsel and in briefs to the presiding officer subsequently filed.

(b) Exhibits. All exhibits and responses to requests for evidence shall be numbered consecutively by the party submitting same and appropriately indexed as to number and title and shall be exchanged on dates prior to the hearing prescribed in the prehearing rulings. Written testimony should be identified alphabetically. Two copies shall be sent to each party and two to the presiding officer. No response to a request for evidence will be received into the record unless offered and received as an exhibit at the hearing. The exhibits, other than the written testimony, shall include appropriate footnotes or narrative material explaining the source of the information used and the methods employed in statistical compilations and estimates and shall contain a short commentary explaining the conclusions which the offeror draws from the data. Rebuttal exhibits should refer specifically to the exhibits being rebutted. Where one part of a multipage exhibit is based upon another part, appropriate cross-reference should be made. The principal title of each exhibit should state precisely what it contains and may also contain a statement of the purpose for which the exhibit is offered. However, such explanatory statement, if phrased in an argumentative fashion, will not be considered as a part of the evidentiary record. Additional exhibits pertinent to the issues may be submitted in a proceeding with the approval of the presiding officer.

(c) Cooperation on basic data. Parties having like interests are specifically encouraged to cooperate with each other in joint presentations particularly in such items as basic passenger, cargo, and scheduling data compiled from official or semiofficial sources, and any other evidence susceptible to joint presentation. Duplicate presentation of the same evidence should be avoided wherever possible.

(d) Authenticity. The authenticity of all documents submitted as proposed exhibits in advance of the hearing shall be deemed admitted unless written objection thereto is filed prior to the hearing, except that a party will be permitted to challenge such authenticity at a later time upon a clear showing of good cause for failure to have filed such written objection.

(e) Statement of position and trial briefs. A written statement of position should be exchanged by all counsel with copies to all other parties prior to the beginning of the hearing: Provided, however, That Public Counsel or counsel for a public body which has intervened as its interests may appear, may offer his statement of position at the conclusion of the evidentiary hearing, unless such is impracticable. This statement should include a showing of the theory of the case of the party submitting the statement and will not be subject to cross-examination. Trial briefs are acceptable but will not be required.
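
Paragraph (b) above requires each party to number its exhibits consecutively and to index them by number and title. As a hypothetical convenience, and not anything prescribed by the rule, such an index could be generated programmatically; the Python sketch below shows one minimal way to do so, with invented exhibit titles.

```python
# Hypothetical helper for building an exhibit index of the kind described in
# paragraph (b): consecutive numbers per submitting party, paired with titles.
from dataclasses import dataclass

@dataclass
class Exhibit:
    party: str
    number: int
    title: str

def build_index(party: str, titles: list[str]) -> list[Exhibit]:
    """Number a party's exhibits consecutively and pair them with their titles."""
    return [Exhibit(party, i, t) for i, t in enumerate(titles, start=1)]

index = build_index("Applicant", [
    "Cargo volume statistics, 2022-2024 (source: official port records)",
    "Proposed sailing schedule",
    "Rebuttal to Respondent Exhibit 3",
])

for ex in index:
    print(f"{ex.party} Exhibit {ex.number}: {ex.title}")
```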

  • Original Article
  • Open access
  • Published: 02 March 2020

Experiences of evidence presentation in court: an insight into the practice of crime scene examiners in England, Wales and Australia

  • K. Sheppard (ORCID: orcid.org/0000-0003-0806-7077)
  • S. J. Fieldhouse
  • J. P. Cassella

Egyptian Journal of Forensic Sciences, volume 10, Article number: 8 (2020)

The ability to present complex forensic evidence in a courtroom in a manner that is fully comprehensible to all stakeholders remains problematic. Individual subjective interpretations may impede a collective and correct understanding of complex crime scene environments and of the evidence within them. This is not helped by current non-technological evidence presentation methods, such as poor-resolution black-and-white photocopies or two-dimensional photographs of complex 3D environments. Given the wide availability of relatively cheap technology, such as tablets, smartphones and laptops, there is evidence to suggest that individuals are already used to receiving visually complex information in the relatively short periods of time available in a court hearing. Courtrooms could learn from this widespread use of technology, and have demonstrated their ability to do so in part through the adoption of tablets for magistrates. The aim of this study was to identify the types of digital technology being used in courts and to obtain data from police personnel presenting digital evidence in court.

A questionnaire study was conducted to explore the technology currently used within courtrooms from the perspective of crime scene personnel involved in the presentation of complex crime scene evidence. The study demonstrated that whilst many of the participants currently use high-end technological solutions to document their crime scenes, such as 360° photography or laser scanning, their ability to present such evidence in court was hindered or prevented. This was most likely due either to a lack of technology installed in the court or to a lack of interoperability between new and existing technology.

This study has contributed to the field by publishing the real-life experiences of crime scene examiners who have used advanced technology to record and evaluate crime scenes but are limited in their scope for sharing this information with the court due to technological insufficiency. Contemporary recording techniques provide the opportunity for further review of crime scenes, a valuable advantage over previous documentation practice, which relied upon the competency of the investigator to capture the scene comprehensively, often in a single opportunity.

Introduction

The delivery of evidence in the UK Courts of Law in part involves extensive oral descriptions of events and evidence from an investigation, which can be a time-consuming and laborious task (Schofield 2016). In terms of evidence relating to a crime scene, verbal statements, printed photographs and sketches of the scene may be used (Lederer 1994; McCracken 1999).

Conveying evidence from a scene in a way that both experts and laypersons can fully understand remains an “ever-difficult task” (Chan 2005). This is because individuals may misinterpret or have difficulty understanding the information being described to them (Schofield and Fowle 2013). It is entirely likely that cognitive processes contribute to variance in the interpretation of the evidence amongst listeners, and perhaps unsurprisingly, a survey conducted by the American Bar Association (2013) demonstrated that significant volumes of technical information or complex facts can not only overwhelm the jury but also often confuse them, leaving them feeling bored and frustrated (Kuehn 1999; Schofield 2009). In turn, this can present difficulties in absorbing and retaining information (Krieger 1992). Lederer and Solomon (1997) noted an increase in people’s attention when moving object displays were used in the courtroom.

There have been research studies which have investigated and considered the effects and impact that evidence presentation methods may have on jurors’ decisions in the courtroom (Schofield 2016; Schofield and Fowle 2013; Dahir 2011; Kassin and Dunn 1997; Dunn et al. 2006; Schofield 2011). Alternative research has started to develop our understanding of the effects that technology may have on jurors and the decisions which they make in the courtroom (Burton et al. 2005). Whilst visual presentation methods offer significant advantages in presenting complex evidence in an understandable way, research would suggest that such methods could also mislead, or unfairly persuade a jury (Schofield 2016; Burton et al. 2005).

Manlowe (2005) details the practical considerations which need to be made before introducing visual presentations into the courtroom, such as whether the technology installed permits graphical displays to be presented. Manlowe (2005) advocates the use of visual evidence in the courtroom in combination with oral presentations, as it has been found that jurors can retain six times as much information when compared with oral presentations alone. Schofield and Fowle (2013) also extensively described the advantages and disadvantages associated with different graphical technologies for presenting evidence in the courtroom, and provided guidelines for using such evidence.

Given the availability of technical devices, such as tablets, smartphones and laptops, there is some evidence to suggest that individuals are used to receiving high-impact information in relatively short periods of time (Manlowe 2005; Pointe 2002). This information is highly visual and delivered through technology, which might suggest that members of the court, including the jury, are equipped for a shift towards an increase in the quantity of visual data and technological advancement. It might also suggest that traditional methods of presenting evidence relating to a crime scene, such as sketches and photographs, lack the flexibility and ability to deliver the intended information in a comprehensive manner. According to Manlowe (2005), basic demonstrative exhibits in the courtroom were time-consuming and expensive and were limited in their ability to be edited. Technological advancements in the presentation of crime scene evidence include scene recording and visualization (Schofield 2016). Such technology ultimately aims to facilitate effective and rapid communication of crime scene environments between users within law enforcement agencies and in court (O’Brien and Marakas 2010; Manker 2015).

The presentation of forensic evidence using reconstructed virtual environments, such as computer-generated (CG) displays and virtual reality (VR), has been developed out of the need to improve jurors’ understanding of complex evidence without technical, jargon-filled explanations. It is thought that jurors place more credibility on what they can “see and touch” (Schofield 2009). Virtual environments present unique opportunities to visually illustrate a scene, with the ability to “walk through” and virtually interact with the environment, and this can be more compelling for juries (Agosto et al. 2008; Mullins 2016). Howard et al. (2000) explored the use of virtual reality to create 3D reconstructions of crime scenes and demonstrated that their system made the evidence being presented easier to comprehend and substantially shortened the length of trials.

Panoramic photography is another technological advancement that has been used to aid the presentation of crime scene evidence. In 2014, a 360° panorama was used to demonstrate material as part of a murder trial. The jury in Birmingham experienced a virtual “walk through” of a scene for a murder trial, created using an iSTAR® panoramic camera (NCTech). Warwickshire Police have used an iSTAR® camera to document serious road traffic collisions (RTCs), which contributed to the evidence revealed during the trial of Scott Melville for the murder of Sydney Pavier. The Principal Crown Advocate of the Crown Prosecution Service, Peter Grieves Smith, commended the technology used, stating: “It was invaluable footage that greatly assisted the jury in understanding the layout of the property. It will surely become the norm to use this in the future in the prosecution of complex and grave crime”. Judge Burbidge QC also commended Warwickshire Police for their professional pursuit of justice in this case.

Reportedly, the state of courtroom technology integration differs significantly around the world (Manker 2015; Reiling 2010; Ministry of Justice 2013). Basic technology, such as tablets and television screens, is being used within some courtrooms in the USA and Australia (Schofield 2011), with a limited number integrating more high-end technological solutions, such as CG presentations in the USA (Chan 2005). The integration of technology within UK courtrooms is still in its infancy and is a significantly slower process than in the USA or Australia (Schofield 2016). As part of a strategic new plan introduced in 2014, the UK criminal justice system was due to be transformed through digital technology. The plan sought to make courtrooms “digital by default”, with an end to the reliance on paper by 2016, and to provide “swifter justice” through the digital dissemination of information (Ministry of Justice 2013). The ultimate aim was to digitize the entire UK criminal justice system by 2020, to simplify processes and improve efficiency. In 2013, Birmingham Magistrates’ Court produced the UK’s first digital concept court, a courtroom that trialled technology to aid the speed and efficiency of trials, using laptops to store electronic case files as opposed to large paper folders, and to facilitate the sharing of files with other members of the courtroom.

In 2016, the UK National Audit Office conducted an investigation to determine the state of courtrooms in terms of the digital reform. The results demonstrated that some parts of the criminal justice system were still heavily paper based, creating inefficiencies. The report concluded that the time frames originally set were overambitious (National Audit Office 2016).

The aim of this study was to explore the current situation regarding technology use in courtrooms from the perspective of persons involved in the presentation of crime scene evidence, and to explore barriers and facilitators to its greater and more effective use. The following objectives were considered: to establish the state of the current literature associated with the use of technology in courtrooms; to obtain data regarding the experiences of UK police service personnel with respect to presenting digital evidence in courtrooms; to identify the types of technology currently being utilized in UK courtrooms; to seek the opinions of police service personnel with regard to digital technology use in courtrooms; and to use these outcomes to define a fresh starting point for debate on the exploitation of digital technology in UK courtrooms to facilitate more efficient, better value-for-money and more robust judgements in cases with complex forensic content.

The study focused on the experiences of crime scene personnel because of the technological advancements in this particular area, such as the use of 360° photography and laser scanning; the subject area also falls within the remit of the research team. By sharing opinions and experience, the paper hopes to give both legal professionals and police service personnel a more comprehensive understanding of the current use of technology in the courtroom, the advantages which technology can provide to their case, and the barriers which have been affecting its adoption.

Participant questionnaires

A qualitative phenomenological research study was conducted to explore the experiences of police service personnel regarding the current use of information technology in courtrooms and in evidence presentation. The sample group included vehicle collision investigators and forensic photographers/imaging technicians. A snowball sample of 21 police service personnel from England and Wales and from Australia was recruited via email and a UK police forum. Recruiting participants from these countries was considered useful because of the similarities between their respective criminal justice systems (McDougall 2016) and because differences in the rate of technology integration had previously been reported (Schofield 2016), which could offer meaningful, experience-based insights into technological advancement.

Participants were required to formally consent to participation in line with the ethical requirements of the host institution. Participants were emailed a semi-structured, open-ended questionnaire and were asked to type or handwrite their responses. The questions asked were as follows:

What is your job title and role within the criminal justice system?

As part of your role, are you required to present evidence in a courtroom?

Can you tell me what, if any, technology has been integrated into the courtroom?

What has your experience been in terms of the introduction of new technology into the courtroom?

Have there been any difficulties with technology being integrated into the courtroom?

With the implementation of technology with existing and current courtroom systems?

And whether there have been barriers, if any, to the adoption of such technology?

If there has not, why do you think this is?

In terms of the current methods with which forensic evidence is presented in court, do you think anything needs to be changed? Please explain.

What has your experience been with the presentation of evidence in court? Please explain.

New technology is becoming available to police services and forensic services for the documentation and presentation of crime scenes. 360° photography or laser scanning is being implemented into police services to speed up the data capture as well as to capture more detail and information from the scene.

Have you had any experience in this area—do you yourself use these methods for documenting crime scenes?

Have you ever had to present this type of evidence in court? Please explain.

What has the response been to this method of presenting evidence

From the judges?

Barristers?

The jury members?

Is the courtroom fully equipped to allow you to present this type of evidence? Please explain.

Do you feel there is anything, which needs improvement? Please explain.

Can you give me your opinion on presenting evidence in this manner? Advantages/disadvantages.

Data analysis

Thematic analysis based on Manker's (2015) methodology, originally adapted from Guest et al. (2012), was used to analyse the data collected from the 21 participants. The data analysis consisted of breaking down and coding the text responses obtained from the participants' questionnaires to identify themes and construct thematic networks. The computer software program NVivo was used to store, organize and code the open-ended data collected from participants. Participant text responses were restructured within an Excel spreadsheet and the data set was uploaded into the NVivo software. The data was explored using word frequency queries to identify the most frequently used words in the participant data. Emerging themes were identified and coded using specific keywords or "nodes". Nodes were created based on these recurring themes, and responses were coded at the relevant nodes. For example, for question 11, which asked participants "What has the response been to this method of presenting evidence", potential responses could suggest a good response, a bad response, little response, no response or not applicable. These identified nodes allowed the researcher to link a node to the relevant response from participants. Within the NVivo software, the researcher could search nodes and easily identify all participants who had given the same response. This was used to analyse the different themes identified within the participant data. As the analysis of the data progressed, new nodes were identified and these were checked against all other participants.

Thematic categories were determined by the researchers to include courtroom technology, ease of use, implementation, limited use, recommendations, advantages and disadvantages. Some of the thematic categories were further broken down into additional related categories. For example, courtroom technology was further broken down into specific categories such as television screens, audio-visual technology, computers, 360° photography and laser scanning.

The nodes were associated with the thematic categories described above. The participant responses were analysed and described, and tables were created documenting the number of respondents who had reported a response relevant to each node. The nodal frequency within each theme was used to determine the existence of trends within the data.
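
To illustrate the coding approach described above in simplified form, the following Python sketch assigns free-text responses to nodes by keyword matching and tallies nodal frequencies. It is an illustrative approximation only: the node names, keywords and responses are invented for this example and do not reproduce the authors' NVivo workflow.

    from collections import Counter

    # Hypothetical node definitions: a response containing any of a node's
    # keywords is coded at that node.
    NODES = {
        "good response": ["helpful", "useful", "well received"],
        "bad response": ["confusing", "problem", "poor"],
        "no response": ["no comment", "not applicable"],
    }

    # Invented examples standing in for questionnaire free text.
    responses = [
        "The jury found the footage helpful and it was well received.",
        "There were problems connecting the laptop, which was confusing.",
        "No comment - I have not presented this type of evidence.",
    ]

    def code_response(text, nodes):
        """Return the set of nodes whose keywords appear in a response."""
        text = text.lower()
        return {node for node, keywords in nodes.items()
                if any(keyword in text for keyword in keywords)}

    # Nodal frequency: how many responses were coded at each node.
    frequency = Counter(node for r in responses for node in code_response(r, NODES))
    print(frequency)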

Results and discussion

The purpose of this qualitative phenomenological research study was to explore and describe the experiences of police service personnel with responsibilities in crime scene examination with regard to the current use of technology within the courtroom. The research covered over one third of the 43 police services within England and Wales (15 services), as shown in Fig. 1. Each police service has its own policies and procedures for conducting criminal investigations, so different individuals within the same police service would be likely to follow the same procedures.

Figure 1: Map to show the 15 police service regions represented by the participants who completed the questionnaire (highlighted in purple). Adapted from an original by HMIC.

Although the use of questionnaires allowed exploration of the participants' experiences regarding the use of technology in the courtroom, it restricted the further explanation or prompting for more detail that would be available in interviews. The authors accept that participant responses to questions are likely to change based on different stimuli, such as the context of the request and the participant's mood, in addition to what information they could recall from memory at that particular time. Consequently, participants may not have recollected a particular experience or event at the time they completed the questionnaire and, as a result, may not have mentioned it. In response to this, the paper presents a thematic analysis of the data, where collective themes are presented based on responses from the entire sample group rather than isolated incidents.

A consideration for the authors throughout the study related to the opportunities for participants to respond to questions in a manner that would be viewed favourably. This is termed "social desirability bias" (Manker 2015; Saris and Gallhofer 2014). As a result, participants may have been inclined to exaggerate "good behaviour" or under-report "bad behaviour". Reportedly, the effects of social desirability bias are reduced in situations where an interviewer is not present, which is, in part, why the experimental design included questionnaire data. When the data was analysed, six themes were identified: "current technology in the courtroom", "lack of technology in the courtroom", "difficulties/barriers associated with the integration of technology into the courtroom", "improvements/changes that are required", "the future of courtroom technology" and "360° photography and laser scanning".

Theme 1: Technology used in the courtroom

Within the first theme, participants were asked about their experiences of technology within the courtroom, which prompted responses describing the use of television screens, DVD players/CCTV viewing facilities, basic PCs/laptops, paper files, photographs, basic audio-visual systems, live-link capability, projectors and specialist software to view 3D data. Four participants described how the current technology within the courtroom was limited to traditional paper files and printed albums of photographs. Given the use of the term "technology" within the question, the answers given described very basic methods, and some participants commented that "the courts need to catch up". Those courtrooms that had introduced technology into trials had implemented what many participants described as "basic and limited audio-visual technology".

The UK National Audit Office (2016) identified that courtrooms have been slow to adopt technology and still rely heavily on paper files, which have worked for many years. The experiences described by the participants in this study support these findings. The endurance of paper files could be attributed to the fact that people like to have something in their hands that they can see in front of them: paper files and photographs allow a jury to look closely at and examine what they are being shown, compared with viewing a screen at a distance. However, printing photographs often leads to a loss of clarity and detail, which can make it more difficult to interpret what is shown; something visible on screen in a digital photograph may not be visible once reproduced in print.

According to the data, the type of court and crime determined whether any technology was implemented and which type of technology was used. For one participant, the majority of their cases were produced for the coroner's courts, which were reportedly "yet to embrace" new evidential technology. It was also noted, however, that although the coroner's court was slow to embrace technology, in the majority of its cases technology was not needed.

Theme 2: Lack of technology in the courtroom

According to the results of this study, little technology had reportedly been implemented into the courtrooms. One participant stated that, “there has been little investment by the courts in modern technology” and “generally there hasn’t been any [implementation] and under investment seems to have been the greatest problem”.

Some of the participants described how limited technology had negatively impacted upon their ability to appropriately present evidence in court. In one instance the following scenario was described:

I was presenting evidence on blood spatter in court. The jury were looking at photocopies taken from the album of blood spatter on a door. So I had to ask the jury to accept that there were better quality images where the spatter could be seen and I was able to interpret the pattern. Not only does this allow a barrister to claim I was making it up but, it is much easier to explain something if people can see it.

A similar experience was reported by another participant, who took personal measures to aid their presentation of evidence:

I had to show each individual juror an original printed photograph from the report I had brought with me as those provided in their bundle were of such poor quality that the subject of my oral evidence was not clearly visible to them.
Primarily evidence is verbal, [and that the] presentation of photographs are by way of rather dodgy photocopied versions lovingly prepared by the Crown Prosecution Service (CPS).

The significance of these statements relates to the potential for the evidence under presentation to be misunderstood or unfairly dismissed, which has implications for the case. These experiences would suggest that the most basic opportunities to provide equivalent quality photographs to the jury were missed. Forensic evidence is often highly visual, and even with an articulate speaker and extensive descriptive dialogue, the ability to effectively communicate the appearance and location of evidence such as blood spatter is likely to be strengthened by effective visual aids. Aside from high quality photographs, alternative digital presentation methods, such as portable screening devices, may have provided an appropriate and just communication of the evidence.

Burton et al. (2005) and Schofield (2016) each made reference to the effects of visual presentation methods on jurors' interpretation of evidence. In this research, reference has been made to actual evidence and not reconstructed scenarios; therefore, in our opinion, visual presentation opportunities to illustrate complex evidence such as blood spatter are only likely to improve jurors' understanding of the evidence being presented to them. They may also improve jurors' retention of information, as demonstrated by Manlowe (2005).

Paper files in the courtroom are still heavily relied upon, with the UK's Crown Prosecution Service (CPS) producing roughly 160 million sheets of paper every year (Ministry of Justice 2013). In addition to the limited presentation quality of photocopied images, printed copies of two-dimensional presentations were also criticized for their limited ability to engage jury members, as follows:

Tend to be clumsy and fill the witness box with paper that is pointed to in front of the witness and this is never conveyed to the jury.
If, maybe through the use of tablets, or some form of interactive media, this could be displayed on screen, then the witnesses’ thoughts and explanations may be better conveyed to the jury.

For other participants, the use of printed paper was seemingly appropriate:

For most cases, a simple 2D plan and photographs is more than sufficient. There is the ability to produce flashy reconstruction DVD’s, but I think there is a huge danger of a reconstruction showing things that did not happen, putting images to the court and jury that may only be a representation of a possible scenario rather than what is definite. This is particularly true for collision investigation where there are often unknowns and using a computer model cannot be certain that is what happened. Videos shown are talked through as they are run.

In this instance, the opposite concern appears to hold: the participant is suggesting that technology could facilitate the presentation of inappropriate and misrepresentative evidence, equally impacting negatively on the case. This would reasonably support the idea that the use of technology should be considered in the context of the evidence under presentation, and/or used in instances where facts are being communicated. The experiences described by this participant implied that the photographs they had used had adequately supported the presentation of their evidence.

In cases where multiple types of evidence were being presented, the need for technology reportedly varied, but its availability was also restricted for some participants.

One participant described,

to date, I haven’t used any visual aids/props. Generally, I will have compiled a report, which contains photographs and a scale plan, but as part of the wider investigation there may be digital data such as CCTV footage, 3D laser scans and animated reconstructions. My evidence is given orally and the relevant sections of the jury bundle referred to for context. I have presented a case involving CCTV footage which was played on too small a screen for the jurors to see properly, therefore making it difficult for them to understand the intricacies of what it showed. The footage itself had to be provided in a format that could be played in a DVD player present in the courtroom, leading to an overall reduction in quality.

The restrictive nature of this environment for the presentation of CCTV evidence is surprising in a society that thrives on visual media. In this example, the presentation of evidence was compromised for the cost of a larger screen or the distribution of visual display devices, such as tablets. In terms of operation, these devices simply need to facilitate functions such as "play", "stop" and "pause". If there is a concern that jury members may be unable to operate them, devices can be screen-mirrored, giving control to a single competent user. An Australian participant reported that some courtrooms already had individual screens for each jury member, and many courtrooms in the USA have also installed multiple computer screens or individual tablets for the jury so that evidence is more easily viewed (Schofield 2016; Wiggins 2006).

One of the UK participants claimed that,

until the improvement of the visual aids for the jury i.e. much larger or closer/individual monitors are implemented even the products we provide at the moment are of limited use in the courtroom.

Any concern over difficulties with technology operation by jury members should be considered alongside the fact that, according to the Office of Communications (Ofcom), in 2017, 76% of adults living in the UK had a smartphone; therefore, the authors question whether courtroom technological advancement should account for this cultural shift in technology. This was supported by the data, where a participant, referring to the introduction of technology into the courtroom, stated how it can

depend very much on the attitudes of the judge, prosecutors and investigators. Some are technologically averse whilst others are happy to accommodate new technology.

In the USA, the Courtroom 21 Project (founded in 1993) has sought to address issues with technology integration into courtrooms through active research, demonstrating software and hardware to users and discussing ideas for use in court. This could be a useful learning opportunity for other justice systems moving forward, given that an evaluation of US courts by Rawson (2004) revealed some similarity between current US and UK practice; there is some evidence to suggest that evidence presentation in the USA is similarly restricted in terms of technological advancement.

The use of live links or videoconferencing, which allows expert witnesses to present their testimony off site, was reported by two participants. This type of technology is widely used within courtrooms by police officers, who can remain working until required to present evidence, to interview vulnerable witnesses, and to arrange suitable dates for a defendant's trial. It is believed to save the time and money involved in transporting defendants to the courtroom for hearings.

Theme 3: Difficulties/barriers associated with the integration of technology into the courtroom

This study highlighted some of the difficulties participants had experienced with the integration of technology into the courtroom and problems arising with the already installed basic courtroom equipment. One participant described,

people always seem to be finding their feet when trying to play with digital evidence, making things connect and work. Also, the actual devices are not always reliable

A lack of training and knowledge regarding existing technology was identified by several participants. One participant described the frustration of situations in which the technology was not operated correctly, commenting,

the court clerk always seems to have difficulty getting the existing system to work correctly, albeit a DVD player. It is a great source of frustration for all involved.
we occasionally use video footage, which has to be converted to DVD format to play at court –assuming the usher knows how to work it.

This raises a training issue within courtrooms, which was supported by the Rt Hon Sir Brian Leveson in his review of efficiency in criminal proceedings (Leveson 2015). In this document, he highlighted the requirement for judges, court staff and those individuals who have regular access to courtroom technology to be sufficiently trained. In addition, he highlighted the need for technical assistance to prevent the underutilisation of technology due to technological failures or defective equipment, which often delay proceedings (Leveson 2015). In 2014, 13 cases in the Crown Court and 275 in magistrates' courts were postponed because of problems with technology. The National Audit Office (2016) reported that the police had so little faith in the courts' equipment that they hired their own at a cost of £500 a day.

Issues regarding the compatibility of technology in the courtroom and a lack of staff training are not restricted to the UK. A report generated by the Attorney General of New South Wales, Australia, identified the same issues arising from technology in the courtroom (Leveson 2015; NSW Attorney General's Department 2013).

Participants reported a lack of investment/funding as the most commonly occurring "barrier". According to one participant,

Under investment seems to have been the greatest problem; we have the opportunity to bring 3D interactive virtual scenes to the courtroom for example, however the limited computing power available means that this is impossible and there is little or no will on the part of the Ministry of Justice (MoJ) to invest in this technology.
CPS protocol is resistant to change and it also requires funding.

This supports the work of Manker (2015), who found that participants considered the cost of equipment to be the main reason for the limited use of technology. Although technology may be expensive to purchase in the first instance, the returns should outweigh the initial expenditure. For example, technology-aided trials may help juries to understand evidence and reach a verdict, thus bringing the case to a close more quickly, reducing case costs and allowing more trials to be conducted concurrently (Marder 2001). In addition, there are benefits that cannot be quantified, such as juror satisfaction and engagement through the use of technology rather than laborious descriptions.

Barriers can also include a resistance to change or a lack of acceptance. One participant commented on the reluctance of individuals to accept new technology;

barriers include reluctance of some judges, investigators and lawyers to consider or implement newer technologies into their investigation or courtroom presentation … these challenges are reducing as time progresses and the technologies are increasingly established and the general paradigm is altered.

In some circumstances it may be necessary to integrate newer systems effectively alongside, or in conjunction with, existing equipment. In many cases, the technologies may not be compatible, as evidenced by one participant, who described how

the current systems seem incapable of keeping up with the advance on modern technologies or simply do not work more often than not.

Leveson ( 2015 ) found that many judges were in favour of exploiting technology in order to aid in the efficiency of the criminal justice system but had doubts regarding the ability to adapt current technology and its capacity to undertake its current duties.

This is seemingly inconsistent with some participants' experiences of technology outside the courtroom but within their investigative roles. Fear of technology and of change also presents a barrier to the adoption of technology, particularly because of the risks associated with such change. Some changes may be successful and others may not, but until these changes are made, it is impossible to know the outcomes of the technology use and what it can provide to the courtroom (Marder 2001).

There is some suggestion that technological change within courtrooms will be adopted. A report by the Ministry of Justice ( 2016 ) explains how the entire UK criminal justice system is being digitized to modernize courts using £700 million government funding. The funding aims to create a new online system that will link courts together. The digitisation of the UK criminal justice system is due to be completed in 2019, and an influx of funding should enable more rapid adoption of technology into the courtrooms.

Theme 4: Improvements/changes required to facilitate technological integration

Seven participants commented that no change in the courtroom was necessary with regard to technology. For example,

I think current methods are sufficient and like I said anything more complicated we provide our own laptop for.

As discussed, the technological requirements for evidence presentation are case specific, and the need for courtroom technology is likely to be greater in areas that utilize techniques such as 360° photography and laser scanning.

Eight participants commented that a significant technological upgrade was required within courtrooms to cope with the ever-increasing demand of technology. This was emphasized in the following quotes:

The majority of courtrooms need a radical update. I’d hope that those being built now incorporate the required technology; however, I wouldn’t count on it,
the courts need full modernising,
the basic court infrastructure needs upgrading to allow it to handle the significant increase in demand that comes with the use of 3D animations software,
the court process has changed very little in the 12 years I have been a collision investigator whilst the equipment we use and evidence we produce has changed exponentially.

The adoption of technology by police services to aid the documentation and recovery of evidence from crime scenes can only support effective evidence presentation if matching technological advancements are made in the courtroom. Failure to align the technology could mean that such evidence is unlikely to be presented in its most effective format. This misalignment could be alleviated by the standardization of file formats. According to one participant,

standardisation of digital formats used in the courtrooms would help in the preparation of evidence knowing which format to use when supplying evidence, to police and the courts. The most common remark we get from police and the courts regarding digital file formats is “can you supply or convert this or these files to a usable format, we just need it to be playable in court”.
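
As a minimal sketch of the kind of standardisation this participant describes, the Python snippet below checks whether hypothetical exhibit files fall within an assumed whitelist of court-playable formats. The file names and the whitelist are invented for illustration and are not drawn from any court's actual requirements.

    from pathlib import Path

    # Assumed whitelist of formats a courtroom playback system accepts;
    # in practice this list would come from the court or the CPS.
    COURT_PLAYABLE = {".mp4", ".jpg", ".pdf"}

    def needs_conversion(filename: str) -> bool:
        """Return True if an exhibit file is not in a court-playable format."""
        return Path(filename).suffix.lower() not in COURT_PLAYABLE

    # Hypothetical exhibit files supplied by an imaging unit.
    exhibits = ["scene_flythrough.avi", "panorama_360.jpg", "scan_pointcloud.e57"]

    for exhibit in exhibits:
        status = "convert before court" if needs_conversion(exhibit) else "playable as supplied"
        print(f"{exhibit}: {status}")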

Theme 5: Courtrooms of the future

Participants were asked about their thoughts on the future of evidence presentation. Virtual reality (VR) featured within several responses, with the idea being that courtroom users could be transported to a scene, allowing them to view and navigate themselves through it in 3D. Research has been conducted to investigate the use of VR courtrooms, whereby jurors wear VR headsets and are transported to the crime scene, allowing them to explore the scene (Bailenson et al. 2006 ; Schofield 2007 ).

In this study, one participant commented that,

When presenting evidence in an innovative way it generally means in a way that is better for the jury to understand, and that means clarity.
This will provide the ability for jurors, judges and the coroner to revisit a scene without leaving the courtroom and see things from the perspective of various people involved (victim, accused, witnesses).

In terms of its overall aim, one participant commented,

The aim is surely to assist the jury with understanding the complexities of the crime scene and to do that they need to be able to visualise the location and the evidence identified within it so I believe the future of a courtroom will be to provide this as realistically as possible.

This participant does not state what technology will be used to provide this experience to the jury, only that the visual evidence will need to be as realistic as possible.

The effectiveness of VR technology for evidence presentation is likely to encourage debate, given the clarity with which crime scenes can be presented, but also the need to consider contextual information and its effects on juror response.

There will, however, be a fine line between giving a jury enough information with which to make an informed decision and traumatising them in vivid technicolour. Technology should not be adopted for its own sake, as this could have profound effects on the trial's outcome. Any evidence presented in a courtroom needs to describe the incident that occurred in a manner that is easily understandable.

Although the perceived benefits of the technology were discussed by some, other participants commented on how VR was "still a long way off from being used for evidence". Concerns regarding the persuasive impact of demonstrative evidence have already been expressed with regard to 360° photography and laser scanning (Narayanan and Hibbin 2001). Other researchers claim that such evidence can lead a jury to blindly believe and accept it, as shown in the work of Schofield and Fowle (2013) and Selbak (1994). Consequently, the use of CG visual presentation could have profound implications for the case outcome if jurors instantly believe what they are seeing. Evidence presented in such a way must remain scientifically accurate, truthfully reflect the scientific data and augment witness testimony (Manker 2015). This was supported by participant comments regarding the probative value of the evidence. Here,

the probity value is yet to be determined, in addition to juries not being allowed on many occasions to witness certain graphic images for fear of being overly influenced. Virtual reality would compound this.

Another participant commented that,

it may be perceived as entertainment rather than a judicial process.

Theme 6: 360° photography and laser scanning

Given the considerable amount of technology available with respect to crime scene documentation, such as 360° photography and laser scanning, and the expertise of the participant group, participants were asked to describe their experiences of such technological advancements.

Most participants (18 out of 21) described how their respective police services currently utilize 360° photography or laser scanning methods to document their crime scenes but, owing to the limitations of court facilities, were unable to present such evidence to the courts. In such situations, 3D laser scan data was used to create 2D plans, which were then printed for the court. This was criticized by one participant, who described having to print 2D plans as,

a travesty really when you consider what capability this data offers.

Often, such technology requires access to a data cloud, which raised an issue for two participants regarding evidence presentation.

One participant stated that it is,

unfortunate as the benefits of the data cloud as a contextual visual aid are unrivalled.

In situations where the 3D data was allowed, it was only accepted into the court as a 3D animated "fly-through" played directly from a DVD. This participant stated that, using this DVD method, it was not possible to move through the scene in real time.

One participant did report being able to successfully present their 360° panoramas.

I was the first to show 360° panoramas along with point cloud data. I had to explain to the court what it was and how it was used prior to the case commencing. We have presented this type of evidence now in live court 3 times and received no criticism. There have been at least another 3 cases where we have produced it but not required to show it. It does require some advanced preparation and several visits to the court room to be used, to make sure it all works.

With the Ministry of Justice driving the adoption of technology and providing significant funding to ensure the uptake of technology by courtrooms, it is inevitable that courtrooms will become “digital by default”. This will provide a more efficient CJS and allow information transfer to become more seamless.

The results of the qualitative phenomenological research in this study identified six key themes from the responses of participants, representing 15 of the current 43 UK police services. The themes covered the "current use of technology in the courtroom", "lack of technology in the courtroom", "difficulties/barriers associated with the integration of technology into the courtroom", "improvements/changes that are required for technology integration", "the future of courtroom digital technology", and "360° photography and laser scanning". The participants reported a general lack of technological integration within court environments. It was clear that a significant change is required to existing courtrooms and their infrastructure to allow existing technology to be utilized effectively, particularly for crime scene documentation such as 360° photography or laser scanning of crime scenes or evidence types. These areas, along with virtual reality, represented aspects which participants believed would characterize future-proofed courtrooms. However, the study group voiced concerns over the contextual influence that immersive technology may exert and questioned the need to expose jurors to such information. Clearly, not only does digital-technological development within the courtroom require consideration; the attendant psychological benefits and ethical aspects also require development in parallel to make the use of digital technology a fully useful and integrated feature of the decision-making process of juries and the UK courts and to provide a digital end-to-end common platform. As part of the ethical concerns to be addressed, including those of "evidence continuity and potential contamination" of data, the opportunity that may exist to manipulate visual images needs to be carefully explored and future-proofed into any systems being developed. The authors firmly believe that there is considerable scope for exploring this area further, although they realize that restricted access for courtroom presentation is likely, which limits the academic study of this area.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CG: Computer generated

CJS: Criminal justice system

CPS: Crown Prosecution Service

VR: Virtual reality

Agosto E, Ajmar A, Boccardo P, Tonolo FG, Lingua A (2008) Crime scene reconstruction using a fully geomatic approach. Sensors 8:6280–6302


American Bar Association (2013) In: Poje J (ed) ABA Legal technology survey report. Volume VI. Mobile Lawyers


Bailenson JN, Blasovich J, Beall AC, Noveck B (2006) Courtroom applications of virtual environments, immersive virtual environments and collaborative virtual environments. Law Policy 28(2):249–270

Burton A, Schofield D, Goodwin L (2005) Gates of global perception: forensic graphics for evidence presentation. In: Proceedings of ACM Symposium on Virtual Reality Software and Technology, ACM Press, Singapore, pp 103–111

Chan A (2005) The use of low cost virtual reality and digital technology to aid forensic scene interpretation and recording. Cranfield University PhD Thesis, Cranfield

Dahir VB (2011) Chapter 3: digital visual evidence. 77-112. In: Henderson C, Epstein EJ (eds) The future of evidence: how science and technology will change the practice of law. American Bar association, Chicago

Dunn MA, Salovey P, Feigenson N (2006) The jury persuaded (and not): computer animation in the courtroom. Law Policy 28(2):228–248

Guest G, MacQueen K, Namey E (2012) Applied thematic analysis. Sage, Thousand Oaks


Howard TLJ, Murta AD, Gibson S (2000) Virtual environments for scene of crime reconstruction and analysis. In: Proceedings of SPIE - International Society for Optical Engineering, p 3960

Kassin S, Dunn MA (1997) Computer-animated displays and the jury: facilitative and prejudicial effects. Law Hum Behav 21(3):269–281

Krieger R (1992) Sophisticated computer graphics come of age—and evidence will never be the same. J Am Bar Assoc:93–95

Kuehn PF (1999) Maximizing your persuasiveness: effective computer generated exhibits. DCBA Brief J DuPage County Bar Assoc, 12:1999-2000

Lederer FI (1994) Technology comes to the courtroom, and ... Emory Law J 43:1095–1122

Lederer FI, Solomon SH (1997) Courtroom technology – an introduction to the onrushing future. In: Faculty Publications. 1653. Conference Proceedings. Part of the Fifth National Court Technology Conference in Detroit, Michigan

Leveson B (2015) Review of efficiency in criminal proceedings by the Rt Hon Sir Brian Leveson. President of the Queen’s Bench Division. Judiciary of England and Wales

Manker C (2015) Factors contributing to the limited use of information technology in state courtrooms. Thesis. Walden University Scholarworks, p 1416

Manlowe B (2005) Use of technology in the courtroom. Speaker presentation, IADC Trial Academy, Stanford University, Palo Alto

Marder NS (2001) Juries and technology: equipping jurors for the twenty-first century. Brook Law Rev 66(4):1257–1299

McCracken K (1999) To-scale crime scene models: a great visual aid for the jury. J Forensic Identification 49:130–133

McDougall R (2016) Designing the courtroom of the future. Paper delivered at the International Conference on Court Excellence, 27–29 January, Singapore

Ministry of Justice (2013) Press release - Damian Green: 'digital courtrooms' to be rolled out nationally. Available at: https://www.gov.uk/government/news/damian-green-digital-courtrooms-to-be-rolled-out-nationally

Ministry of Justice (2016). Transforming our justice system. By the Lord Chancellor, the Lord Chief Justice and the Senior President of Tribunals September 2016. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/553261/joint-vision-statement.pdf

Mullins RA (2016) Virtual views: exploring the utility and impact of terrestrial laser scanners in forensics and law. University of Windsor. Electronic Theses and Dissertation Paper. University of Windsor, 5855

Narayanan A, Hibbin S (2001) Can animations be safely used in court? Artif Intell Law 9(4):271–294

National Audit Office (2016) A report by the Comptroller and Auditor General: Efficiency in the criminal justice system. Ministry of Justice. Available from: https://www.nao.org.uk/wp-content/uploads/2016/03/Efficiency-in-the-criminal-justice-system.pdf

NSW Attorney General's Department (2013) Report of the Trial Efficiency Working Group. Criminal Law Review Division. Available from: http://www.justice.nsw.gov.au/justicepolicy/Documents/tewg_reportmarch2009.pdf

O’Brien JA, Marakas GM (2010) Management information systems, 10th edn. McGraw-Hill, Boston

Pointe LM (2002) The Michigan cyber court: a bold experiment in the development of the first public virtual courthouse. North Carolina J Law Technol 4(1):51–92

Rawson B (2004) The case for the technology-laden courtroom. Courtroom 21 project. Technology White Paper

Reiling D (2010) Technology for justice: how information technology can support judicial reform. Leiden University Press, Leiden

Saris WE, Gallhofer IN (2014) Design, evaluation, and analysis of questionnaires for survey research, Wiley series in Survey Methodology, 2nd edn. Wiley, New Jersey

Schofield D (2007) Using graphical technology to present evidence. In: Mason S (ed) Electronic Evidence: Disclosure, Discovery and Admissibility, vol 1, pp 101–121

Schofield D (2009) Animating evidence: computer game technology in the courtroom. J Inf Law Technol 1:1–21

Schofield D (2011) Playing with evidence: using video games in the courtroom. Entertainment Comput 2(1):47–58

Schofield D (2016) The use of computer generated imagery in legal proceedings. Digit Evid Electron Signature Law Rev 13:3–25

Schofield D, Fowle KG (2013) Technology corner visualising forensic data: evidence (part 1). J Digit Forensic Secur Law 8(1):73–90

Selbak J (1994) Digital Litigation: The Prejudicial Effects of Computer-Generated Animation in the Courtroom. High Technol Law J 9(2):337–367

Wiggins EC (2006) The courtroom of the future is here: introduction to emerging technologies in the legal system. Law Policy 28(2):182–191


Acknowledgements

The authors wish to thank the participants who took part in this study.

This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.

Author information

Authors and affiliations

Liverpool John Moores University, Pharmacy and Biomolecular Sciences, James Parsons Building, Byrom Street, Liverpool, L3 3AF, UK

K. Sheppard

Department of Forensic and Crime Sciences, Faculty of Computing, Engineering and Sciences, Science Centre, Staffordshire University, Leek Road, Stoke-on-Trent, Staffordshire, ST4 2DF, UK

S. J. Fieldhouse & J. P. Cassella


Contributions

KS collected, analysed and interpreted the participant data regarding the use of technology in the criminal justice system, with assistance from SF and JP. All authors contributed to writing the manuscript and read and approved the final manuscript.

Corresponding author

Correspondence to K. Sheppard .

Ethics declarations

Ethics approval and consent to participate

This study was reviewed in accordance with agreed university procedures and was approved by Staffordshire University.

Consent for publication

All data collected from the questionnaires was anonymised, and it is not possible to identify the individuals who took part in the study through their statements or quotes. Participants were asked to sign a consent form stating that they understood that the data collected during the study would be anonymised prior to any publication.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Cite this article

Sheppard, K., Fieldhouse, S.J. & Cassella, J.P. Experiences of evidence presentation in court: an insight into the practice of crime scene examiners in England, Wales and Australia. Egypt J Forensic Sci 10, 8 (2020). https://doi.org/10.1186/s41935-020-00184-5

Download citation

Received : 19 July 2019

Accepted : 17 February 2020

Published : 02 March 2020

DOI : https://doi.org/10.1186/s41935-020-00184-5


  • Courtroom evidence
  • 360° photography
  • Evidence presentation


evidence in chief

  • It's the main set of facts or proof presented by one side to establish their argument or claim
  • The lawyer prepared thoroughly for the presentation of the evidence in chief.
  • The judge reminded the party that any omission in the evidence in chief could be detrimental to their case.
  • It's common for the plurality of a trial to be taken up by the presentation of evidence in chief.

W3C

Verifiable Credentials Data Model v1.1

W3C Recommendation 03 March 2022


Copyright © 2022 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and permissive document license rules apply.

Credentials are a part of our daily lives; driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. This specification provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.

Status of This Document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

Comments regarding this specification are welcome at any time, but readers should be aware that the comment period regarding this specific version of the document has ended and the Working Group will not be making substantive modifications to this version of the specification at this stage. Please file issues directly on GitHub, or send them to [email protected] (subscribe, archives).

The Working Group has received implementation feedback showing that there are at least two implementations for each normative feature in the specification. The group has obtained reports from fourteen (14) implementations. For details, see the test suite and implementation report .

This document was published by the Verifiable Credentials Working Group as a Recommendation using the Recommendation track .

W3C recommends the wide deployment of this specification as a standard for the Web.

A W3C Recommendation is a specification that, after extensive consensus-building, is endorsed by W3C and its Members, and has commitments from Working Group members to royalty-free licensing for implementations.

This document was produced by a group operating under the W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .

This document is governed by the 2 November 2021 W3C Process Document .

1. Introduction

This section is non-normative.

Credentials are a part of our daily lives; driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. These credentials provide benefits to us when used in the physical world, but their use on the Web continues to be elusive.

Currently it is difficult to express education qualifications, healthcare data, financial account details, and other sorts of third-party verified machine-readable personal information on the Web. The difficulty of expressing digital credentials on the Web makes it challenging to receive the same benefits through the Web that physical credentials provide us in the physical world.

This specification provides a standard way to express credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.

For those unfamiliar with the concepts related to verifiable credentials , the following sections provide an overview of:

  • The components that constitute a verifiable credential
  • The components that constitute a verifiable presentation
  • An ecosystem where verifiable credentials and verifiable presentations are expected to be useful
  • The use cases and requirements that informed this specification.

1.1 What is a Verifiable Credential?

In the physical world, a credential might consist of:

  • Information related to identifying the subject of the credential (for example, a photo, name, or identification number)
  • Information related to the issuing authority (for example, a city government, national agency, or certification body)
  • Information related to the type of credential this is (for example, a Dutch passport, an American driving license, or a health insurance card)
  • Information related to specific attributes or properties being asserted by the issuing authority about the subject (for example, nationality, the classes of vehicle entitled to drive, or date of birth)
  • Evidence related to how the credential was derived
  • Information related to constraints on the credential (for example, expiration date, or terms of use).

A verifiable credential can represent all of the same information that a physical credential represents. The addition of technologies, such as digital signatures, makes verifiable credentials more tamper-evident and more trustworthy than their physical counterparts.
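
For illustration only, a credential carrying the components listed above might be modelled informally as the following Python dictionary. The field names loosely mirror the data model and the values are invented examples; this is a sketch, not the specification's normative JSON-LD serialization.

    import json

    # Illustrative only: invented values and informal field names.
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "UniversityDegreeCredential"],
        "issuer": "https://example.edu/issuers/565049",       # issuing authority
        "issuanceDate": "2023-06-01T00:00:00Z",               # constraint-related metadata
        "credentialSubject": {                                # claims about the subject
            "id": "did:example:abc123",
            "degree": {"type": "BachelorDegree", "name": "Bachelor of Science"},
        },
        "proof": {"type": "Ed25519Signature2020"},            # digital signature (placeholder)
    }

    print(json.dumps(credential, indent=2))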

Holders of verifiable credentials can generate verifiable presentations and then share these verifiable presentations with verifiers to prove they possess verifiable credentials with certain characteristics.
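
Continuing the informal sketch, a verifiable presentation can be thought of as a wrapper that bundles one or more credentials together with the holder's own proof before being shared with a verifier. The Python function below is illustrative only; the structure and identifiers are assumptions, not a normative serialization.

    # Illustrative sketch only: bundles credentials with a holder identifier
    # and a placeholder for the holder's own proof.
    def build_presentation(credentials, holder_id):
        """Wrap credentials in a presentation-like structure for a verifier."""
        return {
            "@context": ["https://www.w3.org/2018/credentials/v1"],
            "type": ["VerifiablePresentation"],
            "holder": holder_id,
            "verifiableCredential": list(credentials),
            "proof": {"type": "Ed25519Signature2020"},  # holder's signature (placeholder)
        }

    # Hypothetical usage with a previously issued credential dictionary.
    presentation = build_presentation(
        credentials=[{"type": ["VerifiableCredential"], "issuer": "https://example.edu"}],
        holder_id="did:example:holder456",
    )
    print(presentation["type"])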

Both verifiable credentials and verifiable presentations can be transmitted rapidly, making them more convenient than their physical counterparts when trying to establish trust at a distance.

While this specification attempts to improve the ease of expressing digital credentials , it also attempts to balance this goal with a number of privacy-preserving goals. The persistence of digital information, and the ease with which disparate sources of digital data can be collected and correlated, comprise a privacy concern that the use of verifiable and easily machine-readable credentials threatens to make worse. This document outlines and attempts to address a number of these issues in Section 7. Privacy Considerations . Examples of how to use this data model using privacy-enhancing technologies, such as zero-knowledge proofs, are also provided throughout this document.

The word "verifiable" in the terms verifiable credential and verifiable presentation refers to the characteristic of a credential or presentation as being able to be verified by a verifier , as defined in this document. Verifiability of a credential does not imply that the truth of claims encoded therein can be evaluated; however, the issuer can include values in the evidence property to help the verifier apply their business logic to determine whether the claims have sufficient veracity for their needs.

1.2 Ecosystem Overview

This section describes the roles of the core actors and the relationships between them in an ecosystem where verifiable credentials are expected to be useful. A role is an abstraction that might be implemented in many different ways. The separation of roles suggests likely interfaces and protocols for standardization. The following roles are introduced in this specification: issuer, holder, subject, verifier, and verifiable data registry.

Figure 1 above provides an example ecosystem in which to ground the rest of the concepts in this specification. Other ecosystems exist, such as protected environments or proprietary systems, where verifiable credentials also provide benefit.

1.3 Use Cases and Requirements

The Verifiable Credentials Use Cases document [ VC-USE-CASES ] outlines a number of key topics that readers might find useful, including:

  • A more thorough explanation of the roles introduced above
  • The needs identified in market verticals, such as education, finance, healthcare, retail, professional licensing, and government
  • Common tasks performed by the roles in the ecosystem, as well as their associated requirements
  • Common sequences and flows identified by the Working Group.

As a result of documenting and analyzing the use cases document, the following desirable ecosystem characteristics were identified for this specification:

  • Credentials represent statements made by an issuer .
  • Verifiable credentials represent statements made by an issuer in a tamper-evident and privacy-respecting manner.
  • Holders assemble collections of credentials and/or verifiable credentials from different issuers into a single artifact, a presentation .
  • Holders transform presentations into verifiable presentations to render them tamper-evident.
  • Issuers can issue verifiable credentials about any subject .
  • Acting as issuer , holder , or verifier requires neither registration nor approval by any authority, as the trust involved is bilateral between parties.
  • Verifiable presentations allow any verifier to verify the authenticity of verifiable credentials from any issuer .
  • Holders can receive verifiable credentials from anyone.
  • Holders can interact with any issuer and any verifier through any user agent.
  • Holders can share verifiable presentations , which can then be verified without revealing the identity of the verifier to the issuer .
  • Holders can store verifiable credentials in any location, without affecting their verifiability and without the issuer knowing anything about where they are stored or when they are accessed.
  • Holders can present verifiable presentations to any verifier without affecting authenticity of the claims and without revealing that action to the issuer .
  • A verifier can verify verifiable presentations from any holder , containing proofs of claims from any issuer .
  • Verification should not depend on direct interactions between issuers and verifiers .
  • Verification should not reveal the identity of the verifier to any issuer .
  • The specification must provide a means for issuers to issue verifiable credentials that support selective disclosure, without requiring all conformant software to support that feature.
  • Issuers can issue verifiable credentials that support selective disclosure.
  • If a single verifiable credential supports selective disclosure, then holders can present proofs of claims without revealing the entire verifiable credential .
  • Verifiable presentations can either disclose the attributes of a verifiable credential , or satisfy derived predicates requested by the verifier . Derived predicates are Boolean conditions, such as greater than, less than, equal to, is in set, and so on.
  • Issuers can issue revocable verifiable credentials .
  • The processes of cryptographically protecting credentials and presentations, and of verifying verifiable credentials and verifiable presentations, have to be deterministic, bi-directional, and lossless. Any cryptographically protected verifiable credential or verifiable presentation has to be transformable to the generic data model defined in this document in a deterministic process, such that the resulting credential or presentation is semantically and syntactically equivalent to the original construct, so that it can be processed in an interoperable fashion.
  • Verifiable credentials and verifiable presentations have to be serializable in one or more machine-readable data formats. The process of serialization and/or de-serialization has to be deterministic, bi-directional, and lossless. Any serialization of a verifiable credential or verifiable presentation needs to be transformable to the generic data model defined in this document in a deterministic process such that the resulting verifiable credential can be processed in an interoperable fashion. The serialized form also needs to be able to be generated from the data model without loss of data or content.
  • The data model and serialization must be extendable with minimal coordination.
  • Revocation by the issuer should not reveal any identifying information about the subject , the holder , the specific verifiable credential , or the verifier .
  • Issuers can disclose the revocation reason.
  • Issuers revoking verifiable credentials should distinguish between revocation for cryptographic integrity (for example, the signing key is compromised) versus revocation for a status change (for example, the driver’s license is suspended).
  • Issuers can provide a service for refreshing a verifiable credential .

1.4 Conformance

As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.

The key words MAY , MUST , MUST NOT , RECOMMENDED , and SHOULD in this document are to be interpreted as described in BCP 14 [ RFC2119 ] [ RFC8174 ] when, and only when, they appear in all capitals, as shown here.

A conforming document is any concrete expression of the data model that complies with the normative statements in this specification. Specifically, all relevant normative statements in Sections 4. Basic Concepts , 5. Advanced Concepts , and 6. Syntaxes of this document MUST be enforced. A serialization format for the conforming document MUST be deterministic, bi-directional, and lossless as described in Section 6. Syntaxes . The conforming document MAY be transmitted or stored in any such serialization format.

A conforming processor is any algorithm realized as software and/or hardware that generates or consumes a conforming document . Conforming processors MUST produce errors when non-conforming documents are consumed.

This specification makes no normative statements with regard to the conformance of roles in the ecosystem, such as issuers , holders , or verifiers , because the conformance of ecosystem roles are highly application, use case, and market vertical specific.

Digital proof mechanisms, a subset of which are digital signatures, are required to ensure the protection of a verifiable credential . Having and validating proofs, which may be dependent on the syntax of the proof (for example, using the JSON Web Signature of a JSON Web Token for proofing a key holder), are an essential part of processing a verifiable credential . At the time of publication, Working Group members had implemented verifiable credentials using at least three proof mechanisms:

  • JSON Web Tokens [ RFC7519 ] secured using JSON Web Signatures [ RFC7515 ]
  • Data Integrity Proofs [ DATA-INTEGRITY ]
  • Camenisch-Lysyanskaya Zero-Knowledge Proofs [ CL-SIGNATURES ].

Implementers are advised to note that not all proof mechanisms are standardized as of the publication date of this specification. The group expects some of these mechanisms, as well as new ones, to mature independently and become standardized in time. Given there are multiple valid proof mechanisms, this specification does not standardize on any single digital signature mechanism. One of the goals of this specification is to provide a data model that can be protected by a variety of current and future digital proof mechanisms. Conformance to this specification does not depend on the details of a particular proof mechanism; it requires clearly identifying the mechanism a verifiable credential uses.

This document also contains examples that contain JSON and JSON-LD content. Some of these examples contain characters that are invalid JSON, such as inline comments ( // ) and the use of ellipsis ( ... ) to denote information that adds little value to the example. Implementers are cautioned to remove this content if they desire to use the information as valid JSON or JSON-LD.

2. Terminology

The following terms are used to describe concepts in this specification.

3. Core Data Model

The following sections outline core data model concepts, such as claims , credentials , and presentations , which form the foundation of this specification.

3.1 Claims

A claim is a statement about a subject. A subject is a thing about which claims can be made. Claims are expressed using subject-property-value relationships.

The data model for claims , illustrated in Figure 2 above, is powerful and can be used to express a large variety of statements. For example, whether someone graduated from a particular university can be expressed as shown in Figure 3 below.

Individual claims can be merged together to express a graph of information about a subject . The example shown in Figure 4 below extends the previous claim by adding the claims that Pat knows Sam and that Sam is employed as a professor.

To this point, the concepts of a claim and a graph of information are introduced. To be able to trust claims , more information is expected to be added to the graph.

3.2 Credentials

A credential is a set of one or more claims made by the same entity . Credentials might also include an identifier and metadata to describe properties of the credential , such as the issuer , the expiry date and time, a representative image, a public key to use for verification purposes, the revocation mechanism, and so on. The metadata might be signed by the issuer . A verifiable credential is a set of tamper-evident claims and metadata that cryptographically prove who issued it.

Examples of verifiable credentials include digital employee identification cards, digital birth certificates, and digital educational certificates.

Credential identifiers are often used to identify specific instances of a credential . These identifiers can also be used for correlation. A holder wanting to minimize correlation is advised to use a selective disclosure scheme that does not reveal the credential identifier.

Figure 5 above shows the basic components of a verifiable credential , but abstracts the details about how claims are organized into information graphs , which are then organized into verifiable credentials . Figure 6 below shows a more complete depiction of a verifiable credential , which is normally composed of at least two information graphs . The first graph expresses the verifiable credential itself, which contains credential metadata and claims . The second graph expresses the digital proof, which is usually a digital signature.

It is possible to have a credential , such as a marriage certificate, containing multiple claims about different subjects that are not required to be related.

It is possible to have a credential that does not contain any claims about the entity to which the credential was issued. For example, a credential that only contains claims about a specific dog, but is issued to its owner.

3.3 Presentations

Enhancing privacy is a key design feature of this specification. Therefore, it is important for entities using this technology to be able to express only the portions of their persona that are appropriate for a given situation. The expression of a subset of one's persona is called a verifiable presentation . Examples of different personas include a person's professional persona, their online gaming persona, their family persona, or an incognito persona.

A verifiable presentation expresses data from one or more verifiable credentials , and is packaged in such a way that the authorship of the data is verifiable . If verifiable credentials are presented directly, they become verifiable presentations . Data formats derived from verifiable credentials that are cryptographically verifiable , but do not of themselves contain verifiable credentials , might also be verifiable presentations .

The data in a presentation is often about the same subject , but might have been issued by multiple issuers . The aggregation of this information typically expresses an aspect of a person, organization, or entity .

Figure 7 above shows the components of a verifiable presentation , but abstracts the details about how verifiable credentials are organized into information graphs , which are then organized into verifiable presentations .

Figure 8 below shows a more complete depiction of a verifiable presentation , which is normally composed of at least four information graphs . The first of these information graphs , the Presentation Graph , expresses the verifiable presentation itself, which contains presentation metadata. The verifiableCredential property in the Presentation Graph refers to one or more verifiable credentials , each being one of the second information graphs , i.e., a self-contained Credential Graph , which in turn contains credential metadata and claims. The third information graph , the Credential Proof Graph , expresses the credential graph proof, which is usually a digital signature. The fourth information graph , the Presentation Proof Graph , expresses the presentation graph proof, which is usually a digital signature.

It is possible to have a presentation , such as a business persona, which draws on multiple credentials about different subjects that are often, but not required to be, related.

3.4 Concrete Lifecycle Example

The previous sections introduced the concepts of claims , verifiable credentials , and verifiable presentations using graphical depictions. This section provides a concrete set of simple but complete lifecycle examples of the data model expressed in one of the concrete syntaxes supported by this specification. The lifecycle of credentials and presentations in the Verifiable Credentials Ecosystem often takes a common path:

  • Issuance of one or more verifiable credentials .
  • Storage of verifiable credentials in a credential repository (such as a digital wallet).
  • Composition of verifiable credentials into a verifiable presentation for verifiers .
  • Verification of the verifiable presentation by the verifier .

To illustrate this lifecycle, we will use the example of redeeming an alumni discount from a university. In the example below, Pat receives an alumni verifiable credential from a university, and Pat stores the verifiable credential in a digital wallet.
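
A minimal sketch of what such an alumni verifiable credential could look like, expressed in the JSON-LD syntax described later in this document; the identifiers, the AlumniCredential type, and the alumniOf property follow the example vocabulary and are illustrative rather than normative:

  {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "id": "http://example.edu/credentials/1872",
    "type": ["VerifiableCredential", "AlumniCredential"],
    "issuer": "https://example.edu/issuers/565049",
    "issuanceDate": "2010-01-01T19:23:24Z",
    "credentialSubject": {
      "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
      "alumniOf": {
        "id": "did:example:c276e12ec21ebfeb1f712ebc6f1",
        "name": "Example University"
      }
    },
    "proof": {
      // illustrative embedded proof; see Section 4.7 Proofs (Signatures)
      "type": "RsaSignature2018",
      "created": "2017-06-18T21:19:10Z",
      "proofPurpose": "assertionMethod",
      "verificationMethod": "https://example.edu/issuers/565049#key-1",
      "jws": "eyJhbGciOiJSUzI1NiIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..."
    }
  }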

Pat then attempts to redeem the alumni discount. The verifier , a ticket sales system, states that any alumni of "Example University" receives a discount on season tickets to sporting events. Using a mobile device, Pat starts the process of purchasing a season ticket. A step in this process requests an alumni verifiable credential , and this request is routed to Pat's digital wallet. The digital wallet asks Pat if they would like to provide a previously issued verifiable credential . Pat selects the alumni verifiable credential , which is then composed into a verifiable presentation . The verifiable presentation is sent to the verifier and verified .

Implementers that are interested in understanding more about the proof mechanism used above can learn more in Section 4.7 Proofs (Signatures) and by reading the following specifications: Data Integrity [ DATA-INTEGRITY ], Linked Data Cryptographic Suites Registry [ LDP-REGISTRY ], and JSON Web Signature (JWS) Unencoded Payload Option [ RFC7797 ]. A list of proof mechanisms is available in the Verifiable Credentials Extension Registry [ VC-EXTENSION-REGISTRY ].

4. Basic Concepts

This section introduces some basic concepts for the specification, in preparation for Section 5. Advanced Concepts later in the document.

4.1 Contexts

When two software systems need to exchange data, they need to use terminology that both systems understand. As an analogy, consider how two people communicate. Both people must use the same language and the words they use must mean the same thing to each other. This might be referred to as the context of a conversation .

Verifiable credentials and verifiable presentations have many attributes and values that are identified by URIs [ RFC3986 ]. However, those URIs can be long and not very human-friendly. In such cases, short-form human-friendly aliases can be more helpful. This specification uses the @context property to map such short-form aliases to the URIs required by specific verifiable credentials and verifiable presentations .

In JSON-LD, the @context property can also be used to communicate other details, such as datatype information, language information, transformation rules, and so on, which are beyond the needs of this specification, but might be useful in the future or to related work. For more information, see Section 3.1: The Context of the [ JSON-LD ] specification.

Verifiable credentials and verifiable presentations MUST include a @context property .

Though this specification requires that a @context property be present, it is not required that the value of the @context property be processed using JSON-LD. This is to support processing using plain JSON libraries, such as those that might be used when the verifiable credential is encoded as a JWT. All libraries or processors MUST ensure that the order of the values in the @context property is what is expected for the specific application. Libraries or processors that support JSON-LD can process the @context property using full JSON-LD processing as expected.
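
A minimal sketch of how the @context property might appear at the top of a verifiable credential; the id and type values are illustrative, and the rest of the credential is elided:

  {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "id": "http://example.edu/credentials/3732",
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    ...
  }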

The example above uses the base context URI ( https://www.w3.org/2018/credentials/v1 ) to establish that the conversation is about a verifiable credential . The second URI ( https://www.w3.org/2018/credentials/examples/v1 ) establishes that the conversation is about examples.

This document uses the example context URI ( https://www.w3.org/2018/credentials/examples/v1 ) for the purpose of demonstrating examples. Implementations are expected to not use this URI for any other purpose, such as in pilot or production systems.

The data available at https://www.w3.org/2018/credentials/v1 is a static document that is never updated and SHOULD be downloaded and cached. The associated human-readable vocabulary document for the Verifiable Credentials Data Model is available at https://www.w3.org/2018/credentials/ . This concept is further expanded on in Section 5.3 Extensibility .

4.2 Identifiers

When expressing statements about a specific thing, such as a person, product, or organization, it is often useful to use some kind of identifier so that others can express statements about the same thing. This specification defines the optional id property for such identifiers. The id property is intended to unambiguously refer to an object, such as a person, product, or organization. Using the id property allows for the expression of statements about specific things in the verifiable credential .

If the id property is present:

  • The id property MUST express an identifier that others are expected to use when expressing statements about a specific thing identified by that identifier.
  • The id property MUST NOT have more than one value.
  • The value of the id property MUST be a URI .

Developers should remember that identifiers might be harmful in scenarios where pseudonymity is required. Developers are encouraged to read Section 7.3 Identifier-Based Correlation carefully when considering such scenarios. There are also other types of correlation mechanisms documented in Section 7. Privacy Considerations that create privacy concerns. Where privacy is a strong consideration, the id property MAY be omitted.
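
A sketch of the two kinds of identifier in use; all values are illustrative:

  {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "id": "http://example.edu/credentials/3732",            // identifier for the credential itself
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "https://example.edu/issuers/565049",
    "issuanceDate": "2010-01-01T19:23:24Z",
    "credentialSubject": {
      "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",      // identifier for the subject of the claims
      "degree": {
        "type": "BachelorDegree",
        "name": "Bachelor of Science and Arts"
      }
    },
    "proof": { ... }
  }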

The example above uses two types of identifiers. The first identifier is for the verifiable credential and uses an HTTP-based URL. The second identifier is for the subject of the verifiable credential (the thing the claims are about) and uses a decentralized identifier , also known as a DID .

As of this publication, DIDs are a new type of identifier that are not necessary for verifiable credentials to be useful. Specifically, verifiable credentials do not depend on DIDs and DIDs do not depend on verifiable credentials . However, it is expected that many verifiable credentials will use DIDs and that software libraries implementing this specification will probably need to resolve DIDs . DID -based URLs are used for expressing identifiers associated with subjects , issuers , holders , credential status lists, cryptographic keys, and other machine-readable information associated with a verifiable credential .

4.3 Types

Software systems that process the kinds of objects specified in this document use type information to determine whether or not a provided verifiable credential or verifiable presentation is appropriate. This specification defines a type property for the expression of type information.

Verifiable credentials and verifiable presentations MUST have a type property. That is, any credential or presentation that does not have a type property is not verifiable , so it is neither a verifiable credential nor a verifiable presentation .

With respect to this specification, the following table lists the objects that MUST have a type specified.

Object | Type
verifiable credential object | VerifiableCredential and, optionally, a more specific credential type
verifiable presentation object | VerifiablePresentation and, optionally, a more specific presentation type
proof object | A valid proof type
credentialStatus object | A valid credential status type
termsOfUse object | A valid terms of use type
evidence object | A valid evidence type

The type system for the Verifiable Credentials Data Model is the same as for [ JSON-LD ] and is detailed in Section 5.4: Specifying the Type and Section 8: JSON-LD Grammar . When using a JSON-LD context (see Section 5.3 Extensibility ), this specification aliases the @type keyword to type to make the JSON-LD documents more easily understood. While application developers and document authors do not need to understand the specifics of the JSON-LD type system, implementers of this specification who want to support interoperable extensibility do.

All credentials , presentations , and encapsulated objects MUST specify, or be associated with, additional more narrow types (like UniversityDegreeCredential , for example) so software systems can process this additional information.

When processing encapsulated objects defined in this specification, (for example, objects associated with the credentialSubject object or deeply nested therein), software systems SHOULD use the type information specified in encapsulating objects higher in the hierarchy. Specifically, an encapsulating object, such as a credential , SHOULD convey the associated object types so that verifiers can quickly determine the contents of an associated object based on the encapsulating object type .

For example, a credential object with the type of UniversityDegreeCredential signals to a verifier that the object associated with the credentialSubject property contains the identifier for the:

  • Subject in the id property.
  • Type of degree in the type property.
  • Title of the degree in the name property.
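
A minimal sketch of such a credential fragment, using the example degree vocabulary; the property values are illustrative:

  {
    ...
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "credentialSubject": {
      "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",   // identifier of the subject
      "degree": {
        "type": "BachelorDegree",                        // type of degree
        "name": "Bachelor of Science and Arts"           // title of the degree
      }
    },
    ...
  }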

This enables implementers to rely on values associated with the type property for verification purposes. The expectation of types and their associated properties should be documented in at least a human-readable specification, and preferably, in an additional machine-readable representation.

The type system used in the data model described in this specification allows for multiple ways to associate types with data. Implementers and authors are urged to read the section on typing in the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ].

4.4 Credential Subject

A verifiable credential contains claims about one or more subjects . This specification defines a credentialSubject property for the expression of claims about one or more subjects .

A verifiable credential MUST have a credentialSubject property .

It is possible to express information related to multiple subjects in a verifiable credential . The example below specifies two subjects who are spouses. Note the use of array notation to associate multiple subjects with the credentialSubject property.
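
A sketch of the array notation, with illustrative identifiers and example-vocabulary properties for the names and the spouse relationship:

  "credentialSubject": [{
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "name": "Jayden Doe",
    "spouse": "did:example:c276e12ec21ebfeb1f712ebc6f1"
  }, {
    "id": "did:example:c276e12ec21ebfeb1f712ebc6f1",
    "name": "Morgan Doe",
    "spouse": "did:example:ebfeb1f712ebc6f1c276e12ec21"
  }]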

4.5 Issuer

This specification defines a property for expressing the issuer of a verifiable credential .

A verifiable credential MUST have an issuer property .

It is also possible to express additional information about the issuer by associating an object with the issuer property:
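
A minimal sketch, in which the issuer object carries a human-readable name alongside its identifier (both values are illustrative):

  "issuer": {
    "id": "https://example.edu/issuers/565049",
    "name": "Example University"
  }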

The value of the issuer property can also be a JWK (for example, "https://example.com/keys/foo.jwk" ) or a DID (for example, "did:example:abfe13f712120431c276e12ecab" ).

4.6 Issuance Date

This specification defines the issuanceDate property for expressing the date and time when a credential becomes valid.
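
As a minimal sketch, the property carries a combined date-time string as it might appear inside a credential (the value is illustrative):

  "issuanceDate": "2010-01-01T19:23:24Z"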

It is expected that the next version of this specification will add the validFrom property and will deprecate the issuanceDate property in favor of a new issued property . The range of values for both properties is expected to remain [ XMLSCHEMA11-2 ] combined date-time strings. Implementers are advised that the validFrom and issued properties are reserved and their use for any other purpose is discouraged.

4.7 Proofs (Signatures)

At least one proof mechanism, and the details necessary to evaluate that proof, MUST be expressed for a credential or presentation to be a verifiable credential or verifiable presentation ; that is, to be verifiable .

This specification identifies two classes of proof mechanisms: external proofs and embedded proofs. An external proof is one that wraps an expression of this data model, such as a JSON Web Token, which is elaborated on in Section 6.3.1 JSON Web Token . An embedded proof is a mechanism where the proof is included in the data, such as a Linked Data Signature, which is elaborated upon in Section 6.3.2 Data Integrity Proofs .

When embedding a proof, the proof property MUST be used.

Because the method used for a mathematical proof varies by representation language and the technology used, the set of name-value pairs that is expected as the value of the proof property will vary accordingly. For example, if digital signatures are used for the proof mechanism, the proof property is expected to have name-value pairs that include a signature, a reference to the signing entity, and a representation of the signing date. The example below uses RSA digital signatures.
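
A sketch of what such an embedded proof might look like; the property names follow common Data Integrity usage and all values are illustrative rather than normative:

  "proof": {
    "type": "RsaSignature2018",                                        // signature suite
    "created": "2017-06-18T21:19:10Z",                                 // signing date
    "proofPurpose": "assertionMethod",
    "verificationMethod": "https://example.edu/issuers/565049#key-1",  // reference to the signing entity's key
    "jws": "eyJhbGciOiJSUzI1NiIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..." // signature value
  }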

As discussed in Section 1.4 Conformance , there are multiple viable proof mechanisms, and this specification does not standardize nor recommend any single proof mechanism for use with verifiable credentials . For more information about the proof mechanism, see the following specifications: Data Integrity [ DATA-INTEGRITY ], Linked Data Cryptographic Suites Registries [ LDP-REGISTRY ], and JSON Web Signature (JWS) Unencoded Payload Option [ RFC7797 ]. A list of proof mechanisms is available in the Verifiable Credentials Extension Registry [ VC-EXTENSION-REGISTRY ].

4.8 Expiration

This specification defines the expirationDate property for the expression of credential expiration information.
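
A minimal sketch of the property as it might appear inside a credential (the value is illustrative):

  "expirationDate": "2020-01-01T19:23:24Z"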

It is expected that the next version of this specification will add the validUntil property in a way that deprecates, but preserves backwards compatibility with the expirationDate property . Implementers are advised that the validUntil property is reserved and its use for any other purpose is discouraged.

4.9 Status

This specification defines the following credentialStatus property for the discovery of information about the current status of a verifiable credential , such as whether it is suspended or revoked. If present, the value of the credentialStatus property includes the:

  • id property , which MUST be a URI .
  • type property , which expresses the credential status type (also referred to as the credential status method). It is expected that the value will provide enough information to determine the current status of the credential and that machine readable information needs to be retrievable from the URI. For example, the object could contain a link to an external document noting whether or not the credential is suspended or revoked.
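
A minimal sketch of a credentialStatus entry; the status type name is illustrative of one possible status scheme:

  "credentialStatus": {
    "id": "https://example.edu/status/24",
    "type": "CredentialStatusList2017"
  }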

The precise contents of the credential status information are determined by the specific credentialStatus type definition, and vary depending on factors such as whether it is simple to implement or whether it is privacy-enhancing.

Defining the data model, formats, and protocols for status schemes is out of scope for this specification. A Verifiable Credential Extension Registry [ VC-EXTENSION-REGISTRY ] exists that contains available status schemes for implementers who want to implement verifiable credential status checking.

4.10 Presentations

Presentations MAY be used to combine and present credentials . They can be packaged in such a way that the authorship of the data is verifiable . The data in a presentation is often all about the same subject , but there is no limit to the number of subjects or issuers in the data. The aggregation of information from multiple verifiable credentials is a typical use of verifiable presentations .

A verifiable presentation is typically composed of the following properties:

The example below shows a verifiable presentation that embeds verifiable credentials .
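
A sketch of such a verifiable presentation; the embedded credential and both proofs are abbreviated, and the specific type and proof values are illustrative:

  {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "type": "VerifiablePresentation",
    "verifiableCredential": [{
      "@context": [ ... ],
      "id": "http://example.edu/credentials/1872",
      "type": ["VerifiableCredential", "AlumniCredential"],
      "issuer": "https://example.edu/issuers/565049",
      "issuanceDate": "2010-01-01T19:23:24Z",
      "credentialSubject": { ... },
      "proof": { ... }                 // proof created by the issuer
    }],
    "proof": {
      // proof created by the holder over the presentation
      "type": "RsaSignature2018",
      "created": "2018-09-14T21:19:10Z",
      "proofPurpose": "authentication",
      "verificationMethod": "did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1",
      "challenge": "1f44d55f-f161-4938-a659-f8026467f126",
      "jws": "..."
    }
  }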

The contents of the verifiableCredential property shown above are verifiable credentials , as described by this specification. The contents of the proof property are proofs, as described by the Data Integrity [ DATA-INTEGRITY ] specification. An example of a verifiable presentation using the JWT proof mechanism is given in section 6.3.1 JSON Web Token .

4.10.1 Presentations Using Derived Credentials

Some zero-knowledge cryptography schemes might enable holders to indirectly prove they hold claims from a verifiable credential without revealing the verifiable credential itself. In these schemes, a claim from a verifiable credential might be used to derive a presented value, which is cryptographically asserted such that a verifier can trust the value if they trust the issuer .

For example, a verifiable credential containing the claim date of birth might be used to derive the presented value over the age of 15 in a manner that is cryptographically verifiable . That is, a verifier can still trust the derived value if they trust the issuer .

For an example of a ZKP-style verifiable presentation containing derived data instead of directly embedded verifiable credentials , see Section 5.8 Zero-Knowledge Proofs .

Selective disclosure schemes using zero-knowledge proofs can use claims expressed in this model to prove additional statements about those claims . For example, a claim specifying a subject's date of birth can be used as a predicate to prove the subject's age is within a given range, and therefore prove the subject qualifies for age-related discounts, without actually revealing the subject's birthdate. The holder has the flexibility to use the claim in any way that is applicable to the desired verifiable presentation .

5. Advanced Concepts

Building on the concepts introduced in Section 4. Basic Concepts , this section explores more complex topics about verifiable credentials .

5.1 Lifecycle Details

Section 1.2 Ecosystem Overview provided an overview of the verifiable credential ecosystem. This section provides more detail about how the ecosystem is envisaged to operate.

The roles and information flows in the verifiable credential ecosystem are as follows:

  • An issuer issues a verifiable credential to a holder . Issuance always occurs before any other actions involving a credential .
  • A holder might transfer one or more of its verifiable credentials to another holder .
  • A holder presents one or more of its verifiable credentials to a verifier , optionally inside a verifiable presentation .
  • A verifier verifies the authenticity of the presented verifiable presentation and verifiable credentials . This should include checking the credential status for revocation of the verifiable credentials .
  • An issuer might revoke a verifiable credential .
  • A holder might delete a verifiable credential .

The order of the actions above is not fixed, and some actions might be taken more than once. Such repetition might be immediate or might occur at any later point.

The most common sequence of actions is envisioned to be:

  • An issuer issues to a holder .
  • The holder presents to a verifier .
  • The verifier verifies .

This specification does not define any protocol for transferring verifiable credentials or verifiable presentations , but assuming other specifications do specify how they are transferred between entities, then this Verifiable Credential Data Model is directly applicable.

This specification also does not define an authorization framework nor the decisions that a verifier might make after verifying a verifiable credential or verifiable presentation , taking into account the holder , the issuers of the verifiable credentials , the contents of the verifiable credentials , and its own policies.

In particular, Sections 5.6 Terms of Use and C. Subject-Holder Relationships specify how a verifier can determine:

  • Whether the holder is a subject of a verifiable credential .
  • The relationship between the subject and the holder .
  • Whether the original holder passed a verifiable credential to a subsequent holder .
  • Any restrictions on the use of the verifiable credentials by the holder or verifier .

5.2 Trust Model

The verifiable credentials trust model is as follows:

  • The verifier trusts the issuer to issue the credential that it receives. To establish this trust, a credential is expected to either include a proof establishing that the issuer generated the credential (that is, it is a verifiable credential ), or to have been transmitted in a way clearly establishing that the issuer generated the verifiable credential and that the verifiable credential was not tampered with in transit or storage. This trust could be weakened depending on the risk assessment of the verifier .
  • All entities trust the verifiable data registry to be tamper-evident and to be a correct record of which data is controlled by which entities .
  • The holder and verifier trust the issuer to issue true (that is, not false) credentials about the subject , and to revoke them quickly when appropriate.
  • The holder trusts the repository to store credentials securely, to not release them to anyone other than the holder , and to not corrupt or lose them while they are in its care.

This trust model differentiates itself from other trust models by ensuring the:

  • Issuer and the verifier do not need to trust the repository
  • Issuer does not need to know or trust the verifier .

By decoupling the trust between the identity provider and the relying party, a more flexible and dynamic trust model is created, such that market competition and customer choice are increased.

For more information about how this trust model interacts with various threat models studied by the Working Group, see the Verifiable Credentials Use Cases document [ VC-USE-CASES ].

The data model detailed in this specification does not imply a transitive trust model, such as that provided by more traditional Certificate Authority trust models. In the Verifiable Credentials Data Model, a verifier either directly trusts or does not trust an issuer . While it is possible to build transitive trust models using the Verifiable Credentials Data Model, implementers are urged to learn about the security weaknesses introduced by broadly delegating trust in the manner adopted by Certificate Authority systems.

5.3 Extensibility

One of the goals of the Verifiable Credentials Data Model is to enable permissionless innovation. To achieve this, the data model needs to be extensible in a number of different ways. The data model is required to:

  • Model complex multi-entity relationships through the use of a graph -based data model.
  • Extend the machine-readable vocabularies used to describe information in the data model, without the use of a centralized system for doing so, through the use of [ LINKED-DATA ].
  • Support multiple types of cryptographic proof formats through the use of Data Integrity Proofs [ DATA-INTEGRITY ] and a variety of signature suites listed in the Linked Data Cryptographic Suites Registry [ LDP-REGISTRY ]
  • Provide all of the extensibility mechanisms outlined above in a data format that is popular with software developers and web page authors, and is enabled through the use of [ JSON-LD ].

This approach to data modeling is often called an open world assumption , meaning that any entity can say anything about any other entity. While this approach seems to conflict with building simple and predictable software systems, balancing extensibility with program correctness is always more challenging with an open world assumption than with closed software systems.

The rest of this section describes, through a series of examples, how both extensibility and program correctness are achieved.

Let us assume we start with the verifiable credential shown below.
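
A sketch of such a starting verifiable credential, consistent with the description that follows; the credential id, issuer, and dates are illustrative:

  {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "id": "http://example.com/credentials/4643",
    "type": ["VerifiableCredential"],
    "issuer": "https://example.com/issuers/14",
    "issuanceDate": "2018-02-24T05:28:04Z",
    "credentialSubject": {
      "id": "did:example:abcdef1234567",
      "name": "Jane Doe"
    },
    "proof": { ... }
  }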

This verifiable credential states that the entity associated with did:example:abcdef1234567 has a name with a value of Jane Doe .

Now let us assume a developer wants to extend the verifiable credential to store two additional pieces of information: an internal corporate reference number, and Jane's favorite food.

The first thing to do is to create a JSON-LD context containing two new terms, as shown below.
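
A sketch of such a context; the term names referenceNumber and favoriteFood, the CustomExt12 credential type used in the next step, and the vocabulary URLs are hypothetical choices made for this illustration:

  {
    "@context": {
      "@version": 1.1,
      "@protected": true,
      "CustomExt12": "https://example.com/contexts/mycontext.jsonld#CustomExt12",
      "referenceNumber": "https://example.com/vocab#referenceNumber",
      "favoriteFood": "https://example.com/vocab#favoriteFood"
    }
  }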

After this JSON-LD context is created, the developer publishes it somewhere so it is accessible to verifiers who will be processing the verifiable credential . Assuming the above JSON-LD context is published at https://example.com/contexts/mycontext.jsonld , we can extend this example by including the context and adding the new properties and credential type to the verifiable credential .
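
Continuing the sketch, the extended verifiable credential might then look like the following; the extension context URL is the one mentioned above, while the CustomExt12 type, the property values, and the other identifiers remain hypothetical:

  {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1",
      "https://example.com/contexts/mycontext.jsonld"
    ],
    "id": "http://example.com/credentials/4643",
    "type": ["VerifiableCredential", "CustomExt12"],
    "issuer": "https://example.com/issuers/14",
    "issuanceDate": "2018-02-24T05:28:04Z",
    "referenceNumber": 83294847,
    "credentialSubject": {
      "id": "did:example:abcdef1234567",
      "name": "Jane Doe",
      "favoriteFood": "Papaya"
    },
    "proof": { ... }
  }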

This example demonstrates extending the Verifiable Credentials Data Model in a permissionless and decentralized way. The mechanism shown also ensures that verifiable credentials created in this way provide a mechanism to prevent namespace conflicts and semantic ambiguity.

A dynamic extensibility model such as this does increase the implementation burden. Software written for such a system has to determine whether verifiable credentials with extensions are acceptable based on the risk profile of the application. Some applications might accept only certain extensions while highly secure environments might not accept any extensions. These decisions are up to the developers of these applications and are specifically not the domain of this specification.

Developers are urged to ensure that extension JSON-LD contexts are highly available. Implementations that cannot fetch a context will produce an error. Strategies for ensuring that extension JSON-LD contexts are always available include using content-addressed URLs for contexts, bundling context documents with implementations, or enabling aggressive caching of contexts.

Implementers are advised to pay close attention to the extension points in this specification, such as in Sections 4.7 Proofs (Signatures) , 4.9 Status , 5.4 Data Schemas , 5.5 Refreshing , 5.6 Terms of Use , and 5.7 Evidence . While this specification does not define concrete implementations for those extension points, the Verifiable Credentials Extension Registry [ VC-EXTENSION-REGISTRY ] provides an unofficial, curated list of extensions that developers can use from these extension points.

5.3.1 Semantic Interoperability

This specification ensures that "plain" JSON and JSON-LD syntaxes are semantically compatible without requiring JSON implementations to use a JSON-LD processor. To achieve this, the specification imposes the following additional requirements on both syntaxes:

  • JSON-based processors MUST process the @context key, ensuring the expected values exist in the expected order for the credential type being processed. The order is important because keys used in a credential , which are defined using the values associated with @context , are defined using a "first defined wins" mechanism and changing the order might result in a different key definition "winning".
  • JSON-LD-based processors MUST produce an error when a JSON-LD context redefines any term in the active context . The only way to change the definition of existing terms is to introduce a new term that clears the active context within the scope of that new term. Authors that are interested in this feature should read about the @protected feature in the JSON-LD 1.1 specification.

A human-readable document describing the expected order of values for the @context property is expected to be published by any implementer seeking interoperability. A machine-readable description (that is, a normal JSON-LD Context document) is expected to be published at the URL specified in the @context property by JSON-LD implementers seeking interoperability.

The requirements above guarantee semantic interoperability between JSON and JSON-LD for terms defined by the @context mechanism. While JSON-LD processors will use the specific mechanism provided and can verify that all terms are correctly specified, JSON-based processors implicitly accept the same set of terms without testing that they are correct. In other words, the context in which the data exchange happens is explicitly stated for both JSON and JSON-LD by using the same mechanism. With respect to JSON-based processors, this is achieved in a lightweight manner, without having to use JSON-LD processing libraries.

5.4 Data Schemas

Data schemas are useful when enforcing a specific structure on a given collection of data. There are at least two types of data schemas that this specification considers:

  • Data verification schemas, which are used to verify that the structure and contents of a credential or verifiable credential conform to a published schema.
  • Data encoding schemas, which are used to map the contents of a verifiable credential to an alternative representation format, such as a binary format used in a zero-knowledge proof.

It is important to understand that data schemas serve a different purpose from the @context property, which enforces neither data structure nor data syntax, and does not enable the definition of arbitrary encodings to alternate representation formats.

This specification defines the credentialSchema property for the expression of a data schema, which can be included by an issuer in the verifiable credentials that it issues.

The credentialSchema property provides an opportunity to annotate type definitions or lock them to specific versions of the vocabulary. Authors of verifiable credentials can include a static version of their vocabulary using credentialSchema that is locked to some content integrity protection mechanism. The credentialSchema property also makes it possible to perform syntactic checking on the credential and to use verification mechanisms such as JSON Schema [ JSON-SCHEMA-2018 ] validation.
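
A sketch of how an issuer might reference such a schema; the schema URL and the validator type name are illustrative:

  "credentialSchema": {
    "id": "https://example.org/examples/degree.json",
    "type": "JsonSchemaValidator2018"
  }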

In the example above, the issuer is specifying a credentialSchema , which points to a [ JSON-SCHEMA-2018 ] file that can be used by a verifier to determine if the verifiable credential is well formed.

For information about linkages to JSON Schema [ JSON-SCHEMA-2018 ] or other optional verification mechanisms, see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.

Data schemas can also be used to specify mappings to other binary formats, such as those used to perform zero-knowledge proofs. For more information on using the credentialSchema property with zero-knowledge proofs, see Section 5.8 Zero-Knowledge Proofs .

In the example above, the issuer is specifying a credentialSchema pointing to a zero-knowledge packed binary data format that is capable of transforming the input data into a format, which can then be used by a verifier to determine if the proof provided with the verifiable credential is valid.

5.5 Refreshing

It is useful for systems to enable the manual or automatic refresh of an expired verifiable credential . For more information about expired verifiable credentials , see Section 4.8 Expiration . This specification defines a refreshService property , which enables an issuer to include a link to a refresh service.

The issuer can include the refresh service as an element inside the verifiable credential if it is intended for either the verifier or the holder (or both), or inside the verifiable presentation if it is intended for the holder only. In the latter case, this enables the holder to refresh the verifiable credential before creating a verifiable presentation to share with a verifier . In the former case, including the refresh service inside the verifiable credential enables either the holder or the verifier to perform future updates of the credential .

The refresh service is only expected to be used when either the credential has expired or the issuer does not publish credential status information. Issuers are advised not to put the refreshService property in a verifiable credential that does not contain public information or whose refresh service is not protected in some way.

Placing a refreshService property in a verifiable credential so that it is available to verifiers can remove control and consent from the holder and allow the verifiable credential to be issued directly to the verifier , thereby bypassing the holder .
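
A minimal sketch of such a refresh service entry; the URL matches the one referenced below, while the type name is illustrative of a manual refresh scheme:

  "refreshService": {
    "id": "https://example.edu/refresh/3732",
    "type": "ManualRefreshService2018"
  }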

In the example above, the issuer specifies a manual refreshService that can be used by directing the holder or the verifier to https://example.edu/refresh/3732 .

5.6 Terms of Use

Terms of use can be utilized by an issuer or a holder to communicate the terms under which a verifiable credential or verifiable presentation was issued. The issuer places their terms of use inside the verifiable credential . The holder places their terms of use inside a verifiable presentation . This specification defines a termsOfUse property for expressing terms of use information.

The value of the termsOfUse property tells the verifier what actions it is required to perform (an obligation ), not allowed to perform (a prohibition ), or allowed to perform (a permission ) if it is to accept the verifiable credential or verifiable presentation .

Further study is required to determine how a subject who is not a holder places terms of use on their verifiable credentials . One way could be for the subject to request the issuer to place the terms of use inside the issued verifiable credentials . Another way could be for the subject to delegate a verifiable credential to a holder and place terms of use restrictions on the delegated verifiable credential .
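
A sketch of what an issuer terms-of-use entry might look like, modeled loosely on an ODRL-style policy; the policy type, URLs, and action name are illustrative:

  "termsOfUse": [{
    "type": "IssuerPolicy",
    "id": "http://example.com/policies/credential/4",
    "profile": "http://example.com/profiles/credential",
    "prohibition": [{
      "assigner": "https://example.edu/issuers/14",     // the issuer
      "assignee": "AllVerifiers",                       // any verifier
      "target": "http://example.edu/credentials/3732",  // the credential the policy applies to
      "action": ["Archival"]                            // the prohibited action
    }]
  }]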

In the example above, the issuer (the assigner ) is prohibiting verifiers (the assignee ) from storing the data in an archive.

Warning: The termsOfUse property is improperly defined within the VerifiablePresentation scoped context. This is a bug in the version 1 context and will be fixed in the version 2 context. In the meantime, implementers who wish to use this feature will need to extend the context of their verifiable presentation with an additional term that defines the termsOfUse property, which can then be used alongside the verifiable presentation type property, in order for the term to be semantically recognized by a JSON-LD processor.

In the example above, the holder (the assigner ), who is also the subject , expressed a term of use prohibiting the verifier (the assignee , https://wineonline.example.org ) from using the information provided to correlate the holder or subject using a third-party service. If the verifier were to use a third-party service for correlation, they would violate the terms under which the holder created the presentation .

This feature is also expected to be used by government-issued verifiable credentials to instruct digital wallets to limit their use to similar government organizations in an attempt to protect citizens from unexpected usage of sensitive data. Similarly, some verifiable credentials issued by private industry are expected to limit usage to within departments inside the organization, or during business hours. Implementers are urged to read more about this rapidly evolving feature in the appropriate section of the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.

5.7 Evidence

Evidence can be included by an issuer to provide the verifier with additional supporting information in a verifiable credential . This could be used by the verifier to establish the confidence with which it relies on the claims in the verifiable credential .

For example, an issuer could check physical documentation provided by the subject or perform a set of background checks before issuing the credential . In certain scenarios, this information is useful to the verifier when determining the risk associated with relying on a given credential .

This specification defines the evidence property for expressing evidence information.

For information about how attachments and references to credentials and non-credential data might be supported by the specification, see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
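
A sketch of such an evidence entry; the property names beyond type illustrate one way an issuer might describe a physical document check, and all values are illustrative:

  "evidence": [{
    "id": "https://example.edu/evidence/f2aeec97-fc0d-42bf-8ca7-0548192dc231",
    "type": ["DocumentVerification"],
    "verifier": "https://example.edu/issuers/14",
    "evidenceDocument": "DriversLicense",
    "subjectPresence": "Physical",
    "documentPresence": "Physical",
    "licenseNumber": "123AB4567"
  }]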

In this evidence example, the issuer is asserting that they physically matched the subject of the credential to a physical copy of a driver's license with the stated license number. This driver's license was used in the issuance process to verify that "Example University" verified the subject before issuance of the credential and how they did so (physical verification).

The evidence property provides different and complementary information to the proof property . The evidence property is used to express supporting information, such as documentary evidence, related to the integrity of the verifiable credential . In contrast, the proof property is used to express machine-verifiable mathematical proofs related to the authenticity of the issuer and integrity of the verifiable credential . For more information about the proof property , see Section 4.7 Proofs (Signatures) .

5.8 Zero-Knowledge Proofs

A zero-knowledge proof is a cryptographic method where an entity can prove to another entity that they know a certain value without disclosing the actual value. A real-world example is proving that an accredited university has granted a degree to you without revealing your identity or any other personally identifiable information contained on the degree.

The key capabilities introduced by zero-knowledge proof mechanisms are the ability of a holder to:

  • Combine multiple verifiable credentials from multiple issuers into a single verifiable presentation without revealing verifiable credential or subject identifiers to the verifier . This makes it more difficult for the verifier to collude with any of the issuers regarding the issued verifiable credentials .
  • Selectively disclose the claims in a verifiable credential to a verifier without requiring the issuance of multiple atomic verifiable credentials . This allows a holder to provide a verifier with precisely the information they need and nothing more.
  • Produce a derived verifiable credential that is formatted according to the verifier's data schema instead of the issuer's , without needing to involve the issuer after verifiable credential issuance. This provides a great deal of flexibility for holders to use their issued verifiable credentials .

This specification describes a data model that supports selective disclosure with the use of zero-knowledge proof mechanisms. The examples below highlight how the data model can be used to issue, present, and verify zero-knowledge verifiable credentials .

For a holder to use a zero-knowledge verifiable presentation , they need an issuer to have issued a verifiable credential in a manner that enables the holder to derive a proof from the originally issued verifiable credential , so that the holder can present the information to a verifier in a privacy-enhancing manner. This implies that the holder can prove the validity of the issuer's signature without revealing the values that were signed, or when only revealing certain selected values. The standard practice is to do so by proving knowledge of the signature, without revealing the signature itself. There are two requirements for verifiable credentials when they are to be used in zero-knowledge proof systems.

  • The verifiable credential MUST contain a Proof, using the proof property , so that the holder can derive a verifiable presentation that reveals only the information that the holder intends to reveal.
  • If a credential definition is being used, the credential definition MUST be defined in the credentialSchema property , so that it can be used by all parties to perform various cryptographic operations in zero-knowledge.

The following example shows one method of using verifiable credentials in zero-knowledge. It makes use of a Camenisch-Lysyanskaya Signature [ CL-SIGNATURES ], which allows the presentation of the verifiable credential in a way that supports the privacy of the holder and subject through the use of selective disclosure of the verifiable credential values. Some other cryptographic systems which rely upon zero-knowledge proofs to selectively disclose attributes can be found in the [ LDP-REGISTRY ] as well.

The example above provides the verifiable credential definition by using the credentialSchema property and a specific proof that is usable in the Camenisch-Lysyanskaya Zero-Knowledge Proof system.

The next example utilizes the verifiable credential above to generate a new derived verifiable credential with a privacy-preserving proof. The derived verifiable credential is then placed in a verifiable presentation , so that the verifiable credential discloses only the claims and additional credential metadata that the holder intended. To do this, all of the following requirements are expected to be met:

  • Each derived verifiable credential within a verifiable presentation MUST contain all information necessary to verify the verifiable credential , either by including it directly within the credential, or by referencing the necessary information.
  • A verifiable presentation MUST NOT leak information that would enable the verifier to correlate the holder across multiple verifiable presentations .
  • The verifiable presentation SHOULD contain a proof property to enable the verifier to check that all derived verifiable credentials in the verifiable presentation were issued to the same holder without leaking personally identifiable information that the holder did not intend to share.

Important details regarding the format for the credential definition and of the proofs are omitted on purpose because they are outside of the scope of this document. The purpose of this section is to guide implementers who want to extend verifiable credentials and verifiable presentations to support zero-knowledge proof systems.

5.9 Disputes

There are at least two different cases to consider for an entity wanting to dispute a credential issued by an issuer :

  • A subject disputes a claim made by the issuer . For example, the address property is incorrect or out of date.
  • An entity disputes a potentially false claim made by the issuer about a different subject . For example, an imposter claims the social security number for an entity .

The mechanism for issuing a DisputeCredential is the same as for a regular credential , except that the credentialSubject identifier in the DisputeCredential is the identifier of the disputed credential .

For example, if a credential with an identifier of https://example.org/credentials/245 is disputed, the subject can issue the credential shown below and present it to the verifier along with the disputed credential .
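
A sketch of such a dispute credential; the DisputeCredential type and the currentStatus and statusReason properties illustrate one way to express the dispute, and the issuer and dates are illustrative:

  {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "id": "http://example.com/credentials/123",
    "type": ["VerifiableCredential", "DisputeCredential"],
    "credentialSubject": {
      "id": "https://example.org/credentials/245",   // identifier of the disputed credential
      "currentStatus": "Disputed",
      "statusReason": {
        "value": "Address is out of date.",
        "lang": "en"
      }
    },
    "issuer": "https://example.com/people#me",
    "issuanceDate": "2017-12-05T14:27:42Z",
    "proof": { ... }
  }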

In the above verifiable credential the issuer is claiming that the address in the disputed verifiable credential is wrong.

If a credential does not have an identifier, a content-addressed identifier can be used to identify the disputed credential . Similarly, content-addressed identifiers can be used to uniquely identify individual claims.

This area of study is rapidly evolving and developers that are interested in publishing credentials that dispute the veracity of other credentials are urged to read the section related to disputes in the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.

5.10 Authorization

Verifiable credentials are intended as a means of reliably identifying subjects . While it is recognized that Role Based Access Controls (RBACs) and Attribute Based Access Controls (ABACs) rely on this identification as a means of authorizing subjects to access resources, this specification does not provide a complete solution for RBAC or ABAC. Authorization is not an appropriate use for this specification without an accompanying authorization framework.

The Working Group did consider authorization use cases during the creation of this specification and is pursuing that work as an architectural layer built on top of this specification.

6. Syntaxes

The data model as described in Sections 3. Core Data Model , 4. Basic Concepts , and 5. Advanced Concepts is the canonical structural representation of a verifiable credential or verifiable presentation . All serializations are representations of that data model in a specific format. This section specifies how the data model is realized in JSON-LD and plain JSON. Although syntactic mappings are provided for only these two syntaxes, applications and services can use any other data representation syntax (such as XML, YAML, or CBOR) that is capable of expressing the data model. As the verification and validation requirements are defined in terms of the data model, all serialization syntaxes have to be deterministically translated to the data model for processing, validation , or comparison. This specification makes no requirements for support of any specific serialization format.

The expected arity of the property values in this specification, and the resulting datatype which holds those values, can vary depending on the property. If present, the following properties are represented as a single value:

  • id property
  • issuer property
  • issuanceDate property
  • expirationDate property .

All other properties, if present, are represented as either a single value or an array of values.

6.1 JSON

The data model, as described in Section 3. Core Data Model, can be encoded in JavaScript Object Notation (JSON) [ RFC8259 ] by mapping property values to JSON types as follows:

  • Numeric values representable as IEEE754 SHOULD be represented as a Number type.
  • Boolean values SHOULD be represented as a Boolean type.
  • Sequence values SHOULD be represented as an Array type.
  • Unordered sets of values SHOULD be represented as an Array type.
  • Sets of properties SHOULD be represented as an Object type.
  • Empty values SHOULD be represented as a null value.
  • Other values MUST be represented as a String type.
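For example, a simple credential expressed in plain JSON might look like the following sketch; the identifiers, issuer URL, and alumniOf property are hypothetical values in the style of the examples in this specification.

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.edu/credentials/1872",
  "type": ["VerifiableCredential", "AlumniCredential"],
  "issuer": "https://example.edu/issuers/565049",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "alumniOf": "Example University"
  }
}
```

Here id, issuer, and issuanceDate are single string values, type is an unordered set represented as an Array, and credentialSubject is a set of properties represented as an Object.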

As the transformations listed herein have potentially incompatible interpretations, additional profiling of the JSON format is required to provide a deterministic transformation to the data model.

6.2 JSON-LD

[ JSON-LD ] is a JSON-based format used to serialize Linked Data . The syntax is designed to easily integrate into deployed systems already using JSON, and provides a smooth upgrade path from JSON to [ JSON-LD ]. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.

[ JSON-LD ] is useful when extending the data model described in this specification. Instances of the data model are encoded in [ JSON-LD ] in the same way they are encoded in JSON (Section 6.1 JSON ), with the addition of the @context property . The JSON-LD context is described in detail in the [ JSON-LD ] specification and its use is elaborated on in Section 5.3 Extensibility .

Multiple contexts MAY be used or combined to express any arbitrary information about verifiable credentials in idiomatic JSON. The JSON-LD context , available at https://www.w3.org/2018/credentials/v1 , is a static document that is never updated and can therefore be downloaded and cached client side. The associated vocabulary document for the Verifiable Credentials Data Model is available at https://www.w3.org/2018/credentials .
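For example, a credential that combines the base context with an additional, hypothetical extension context might be expressed as in the sketch below; the extension context URL, the ExampleExtensionCredential type, and exampleProperty are assumptions made purely for illustration.

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://example.org/my-extension/v1"
  ],
  "id": "http://example.edu/credentials/3732",
  "type": ["VerifiableCredential", "ExampleExtensionCredential"],
  "issuer": "https://example.edu/issuers/14",
  "issuanceDate": "2018-02-24T05:28:04Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "exampleProperty": "example value"
  }
}
```

The first entry in the @context array is always the base context; later entries layer in the term definitions needed by the extension.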

6.2.1 Syntactic Sugar

In general, the data model and syntaxes described in this document are designed such that developers can copy and paste examples to incorporate verifiable credentials into their software systems. The design goal of this approach is to provide a low barrier to entry while still ensuring global interoperability between a heterogeneous set of software systems. This section describes some of these approaches, which will likely go unnoticed by most developers, but whose details will be of interest to implementers. The most noteworthy syntactic sugars provided by [ JSON-LD ] are:

  • The @id and @type keywords are aliased to id and type respectively, enabling developers to use this specification as idiomatic JSON.
  • Data types, such as integers, dates, units of measure, and URLs, are automatically typed to provide stronger type guarantees for use cases that require them.
  • The verifiableCredential and proof properties are treated as graph containers, that is, mechanisms used to isolate sets of data asserted by different entities. This ensures, for example, proper cryptographic separation between the data graph provided by each issuer and the one provided by the holder presenting the verifiable credential, so that the provenance of the information in each graph is preserved.
  • The @protected properties feature of [ JSON-LD ] 1.1 is used to ensure that terms defined by this specification cannot be overridden. This means that as long as the same @context declaration is made at the top of a verifiable credential or verifiable presentation , interoperability is guaranteed for all terms understood by users of the data model whether or not they use a [ JSON-LD ] processor.
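The context fragment below is an illustrative sketch of how such aliasing and term protection are typically declared in a JSON-LD 1.1 context; it is not a verbatim excerpt of the published base context, and ExampleCredential is a hypothetical term.

```json
{
  "@context": {
    "@version": 1.1,
    "@protected": true,
    "id": "@id",
    "type": "@type",
    "ExampleCredential": "https://example.org/vocab#ExampleCredential"
  }
}
```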

6.3 Proof Formats

The data model described in this specification is designed to be proof format agnostic. This specification does not normatively require any particular digital proof or signature format. While the data model is the canonical representation of a credential or presentation , the proofing mechanisms for these are often tied to the syntax used in the transmission of the document between parties. As such, each proofing mechanism has to specify whether the verification of the proof is calculated against the state of the document as transmitted, against the possibly transformed data model, or against another form. At the time of publication, at least two proof formats are being actively utilized by implementers and the Working Group felt that documenting what these proof formats are and how they are being used would be beneficial to implementers. The sections detailing the current proof formats being actively utilized to issue verifiable credentials are:

  • Section 6.3.1 JSON Web Token , and
  • Section 6.3.2 Data Integrity Proofs .

6.3.1 JSON Web Token

JSON Web Token (JWT) [ RFC7519 ] is still a widely used means to express claims to be transferred between two parties. Providing a representation of the Verifiable Credentials Data Model for JWT allows existing systems and libraries to participate in the ecosystem described in Section 1.2 Ecosystem Overview. A JWT encodes a set of claims as a JSON object that is contained in a JSON Web Signature (JWS) [ RFC7515 ] or a JSON Web Encryption (JWE) [ RFC7516 ]. For this specification, the use of JWE is out of scope.

Relation to the Verifiable Credentials Data Model

This specification defines encoding rules of the Verifiable Credentials Data Model onto JWT and JWS. It further defines processing rules for how and when to make use of specific JWT-registered claim names and specific JWS-registered header parameter names to allow systems based on JWT to comply with this specification. If these specific claim names and header parameters are present, their respective counterparts in the standard verifiable credential and verifiable presentation MAY be omitted to avoid duplication.

JSON Web Token Extensions

This specification introduces two new registered claim names, which contain those parts of the standard verifiable credentials and verifiable presentations where no explicit encoding rules for JWT exist. These objects are enclosed in the JWT payload as follows:

  • vc : JSON object, which MUST be present in a JWT verifiable credential . The object contains the credential according to this specification.
  • vp : JSON object, which MUST be present in a JWT verifiable presentation . The object contains the presentation according to this specification.

JWT and JWS Considerations

JWT Encoding

To encode a verifiable credential as a JWT, specific properties introduced by this specification MUST be either:

  • Encoded as standard JOSE header parameters, or
  • Encoded as registered JWT claim names, or
  • Contained in the JWS signature part.

If no explicit rule is specified, properties are encoded in the same way as with a standard credential , and are added to the vc claim of the JWT. As with all JWTs, the JWS-based signature of a verifiable credential represented in the JWT syntax is calculated against the literal JWT string value as presented across the wire, before any decoding or transformation rules are applied. The following paragraphs describe these encoding rules.

If a JWS is present, the digital signature refers either to the issuer of the verifiable credential , or in the case of a verifiable presentation , to the holder of the verifiable credential . The JWS proves that the iss of the JWT signed the contained JWT payload and therefore, the proof property can be omitted.

If no JWS is present, a proof property MUST be provided. The proof property can be used to represent a more complex proof, as may be necessary if the creator is different from the issuer , or a proof not based on digital signatures, such as Proof of Work. The issuer MAY include both a JWS and a proof property . For backward compatibility reasons, the issuer MUST use JWS to represent proofs based on a digital signature.

The following rules apply to JOSE headers in the context of this specification:

  • alg MUST be set for digital signatures. If only the proof property is needed for the chosen signature method (that is, if there is no choice of algorithm within that method), the alg header MUST be set to none .
  • kid MAY be used if there are multiple keys associated with the issuer of the JWT. The key discovery is out of the scope of this specification. For example, the kid can refer to a key in a DID document , or can be the identifier of a key inside a JWKS.
  • typ , if present, MUST be set to JWT .

For backward compatibility with JWT processors, the following registered JWT claim names MUST be used, instead of or in addition to, their respective standard verifiable credential counterparts:

  • exp MUST represent the expirationDate property , encoded as a UNIX timestamp ( NumericDate ).
  • iss MUST represent the issuer property of a verifiable credential or the holder property of a verifiable presentation.
  • nbf MUST represent issuanceDate , encoded as a UNIX timestamp ( NumericDate ).
  • jti MUST represent the id property of the verifiable credential or verifiable presentation.
  • sub MUST represent the id property contained in the credentialSubject. In bearer credentials and presentations, sub will not be present.
  • aud MUST represent (i.e., identify) the intended audience of the verifiable presentation (i.e., the verifier intended by the presenting holder to receive and verify the verifiable presentation).

Other JOSE header parameters and JWT claim names not specified herein can be used if their use is not explicitly discouraged. Additional verifiable credential claims MUST be added to the credentialSubject property of the JWT.

For more information about using JOSE header parameters and/or JWT claim names not specified herein, see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.

This version of the specification defines no JWT-specific encoding rules for the concepts outlined in Section 5. Advanced Concepts (for example, refreshService, termsOfUse, and evidence). These concepts can be encoded as they are, without any transformation, and can be added to the vc JWT claim.

Implementers are warned that JWTs are not capable of encoding multiple subjects and are thus not capable of encoding a verifiable credential with more than one subject . JWTs might support multiple subjects in the future and implementers are advised to refer to the JSON Web Token Claim Registry for multi-subject JWT claim names or the Nested JSON Web Token specification.

JWT Decoding

To decode a JWT to a standard credential or presentation , the following transformation MUST be performed:

  • Create a JSON object.
  • Add the content from the vc or vp claim to the new JSON object.
  • Transform the remaining JWT specific headers and claims , and add the results to the new credential or presentation JSON object.

To transform the JWT specific headers and claims , the following MUST be done:

  • If exp is present, the UNIX timestamp MUST be converted to an [ XMLSCHEMA11-2 ] date-time, and MUST be used to set the value of the expirationDate property of the new JSON object.
  • If iss is present, the value MUST be used to set the issuer property of the new credential JSON object or the holder property of the new presentation JSON object.
  • If nbf is present, the UNIX timestamp MUST be converted to an [ XMLSCHEMA11-2 ] date-time , and MUST be used to set the value of the issuanceDate property of the new JSON object.
  • If sub is present, the value MUST be used to set the value of the id property of credentialSubject of the new credential JSON object.
  • If jti is present, the value MUST be used to set the value of the id property of the new JSON object.
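As an illustration of these encoding rules, the JOSE header and JWT payload of a verifiable credential might look like the following sketch. The algorithm, key identifier, timestamps, nonce, and credential content are hypothetical values chosen for illustration only.

```json
{
  "alg": "ES256",
  "typ": "JWT",
  "kid": "did:example:abfe13f712120431c276e12ecab#keys-1"
}
```

```json
{
  "sub": "did:example:ebfeb1f712ebc6f1c276e12ec21",
  "jti": "http://example.edu/credentials/3732",
  "iss": "https://example.edu/issuers/565049",
  "nbf": 1541493724,
  "exp": 1573029723,
  "nonce": "660!6345FSer",
  "vc": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "credentialSubject": {
      "degree": {
        "type": "BachelorDegree",
        "name": "Bachelor of Science and Arts"
      }
    }
  }
}
```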

In the example above, the verifiable credential uses a proof based on JWS digital signatures, and the corresponding verification key can be obtained using the kid header parameter.

In the example above, vc does not contain the id property because the JWT encoding uses the jti attribute to represent a unique identifier. The sub attribute encodes the information represented by the id property of credentialSubject . The nonce has been added to stop a replay attack.
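Similarly, the JOSE header and JWT payload of a verifiable presentation might look like the following sketch; again, the identifiers, audience, timestamps, nonce, and the truncated enveloped credential string are placeholders.

```json
{
  "alg": "ES256",
  "typ": "JWT",
  "kid": "did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1"
}
```

```json
{
  "iss": "did:example:ebfeb1f712ebc6f1c276e12ec21",
  "jti": "urn:uuid:3978344f-8596-4c3a-a978-8fcaba3903c5",
  "aud": "did:example:4a57546973436f6f6c4a4a57573",
  "nbf": 1541493724,
  "exp": 1573029723,
  "nonce": "343s$FSFDa-",
  "vp": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "type": ["VerifiablePresentation"],
    "verifiableCredential": ["eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6..."]
  }
}
```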

In the example above, the verifiable presentation uses a proof based on JWS digital signatures, and the corresponding verification key can be obtained using the kid header parameter.

In the example above, vp does not contain the id property because the JWT encoding uses the jti attribute to represent a unique identifier. verifiableCredential contains a string array of verifiable credentials using JWT compact serialization. The nonce has been added to stop a replay attack.

6.3.2 Data Integrity Proofs

This specification utilizes Linked Data to publish information on the Web using standards, such as URLs and JSON-LD, to identify subjects and their associated properties. When information is presented in this manner, other related information can be easily discovered and new information can be easily merged into the existing graph of knowledge. Linked Data is extensible in a decentralized way, greatly reducing barriers to large scale integration. The data model in this specification works well with Data Integrity and the associated Linked Data Cryptographic Suites which are designed to protect the data model as described by this specification.

Unlike the use of JSON Web Token, no extra pre- or post-processing is necessary. The Data Integrity Proofs format was designed to simply and easily protect verifiable credentials and verifiable presentations . Protecting a verifiable credential or verifiable presentation is as simple as passing a valid example in this specification to a Linked Data Signatures implementation and generating a digital signature.
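For example, attaching a Data Integrity proof to a credential results in a proof property alongside the other credential properties, as in the sketch below. The Ed25519Signature2020 suite is shown only as one possible choice, the verification method and proof value are placeholders, and a deployed credential would also reference the signature suite's JSON-LD context.

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.edu/credentials/3732",
  "type": ["VerifiableCredential", "UniversityDegreeCredential"],
  "issuer": "https://example.edu/issuers/14",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "degree": {
      "type": "BachelorDegree",
      "name": "Bachelor of Science and Arts"
    }
  },
  "proof": {
    "type": "Ed25519Signature2020",
    "created": "2021-11-13T18:19:39Z",
    "verificationMethod": "https://example.edu/issuers/14#key-1",
    "proofPurpose": "assertionMethod",
    "proofValue": "z58DAdFfa9SkqZMVPxAQp..."
  }
}
```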

For more information about the different qualities of the various syntax formats (for example, JSON+JWT, JSON-LD+JWT, or JSON-LD+LD-Proofs), see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.

7. Privacy Considerations

This section details the general privacy considerations and specific privacy implications of deploying the Verifiable Credentials Data Model into production environments.

7.1 Spectrum of Privacy

It is important to recognize there is a spectrum of privacy ranging from pseudonymous to strongly identified. Depending on the use case, people have different comfort levels about what information they are willing to provide and what information can be derived from what is provided.

For example, most people probably want to remain anonymous when purchasing alcohol because the regulatory check required is solely based on whether a person is above a specific age. Alternatively, for medical prescriptions written by a doctor for a patient, the pharmacy fulfilling the prescription is required to more strongly identify the medical professional and the patient. Therefore there is not one approach to privacy that works for all use cases. Privacy solutions are use case specific.

Even for those wanting to remain anonymous when purchasing alcohol, photo identification might still be required to provide appropriate assurance to the merchant. The merchant might not need to know your name or other details (other than that you are over a specific age), but in many cases just proof of age might still be insufficient to meet regulations.

The Verifiable Credentials Data Model strives to support the full privacy spectrum and does not take philosophical positions on the correct level of anonymity for any specific transaction. The following sections provide guidance for implementers who want to avoid specific scenarios that are hostile to privacy.

7.2 Personally Identifiable Information

Data associated with verifiable credentials stored in the credential.credentialSubject field is susceptible to privacy violations when shared with verifiers . Personally identifying data, such as a government-issued identifier, shipping address, and full name, can be easily used to determine, track, and correlate an entity . Even information that does not seem personally identifiable, such as the combination of a birthdate and a postal code, has very powerful correlation and de-anonymizing capabilities.

Implementers are strongly advised to warn holders when they share data with these kinds of characteristics. Issuers are strongly advised to provide privacy-protecting verifiable credentials when possible. For example, an issuer can issue an ageOver verifiable credential instead of a date-of-birth verifiable credential when a verifier wants to determine whether an entity is over the age of 18.

Because a verifiable credential often contains personally identifiable information (PII), implementers are strongly advised to use mechanisms while storing and transporting verifiable credentials that protect the data from those who should not access it. Mechanisms that could be considered include Transport Layer Security (TLS) or other means of encrypting the data while in transit, as well as encryption or data access control mechanisms to protect the data in a verifiable credential while at rest.

7.3 Identifier-Based Correlation

Subjects of verifiable credentials are identified using the credential.credentialSubject.id field. The identifiers used to identify a subject create a greater risk of correlation when the identifiers are long-lived or used across more than one web domain.

Similarly, disclosing the credential identifier ( credential.id ) leads to situations where multiple verifiers , or an issuer and a verifier , can collude to correlate the holder . If holders want to reduce correlation, they should use verifiable credential schemes that allow hiding the identifier during verifiable presentation . Such schemes expect the holder to generate the identifier and might even allow hiding the identifier from the issuer , while still keeping the identifier embedded and signed in the verifiable credential .

If strong anti-correlation properties are a requirement in a verifiable credentials system, it is strongly advised that identifiers are either:

  • Bound to a single origin
  • Not used at all, but instead replaced by short-lived, single-use bearer tokens.

7.4 Signature-Based Correlation

The contents of verifiable credentials are secured using the credential.proof field. The properties in this field create a greater risk of correlation when the same values are used across more than one session or domain and the value does not change. Examples include the verificationMethod , created , proofPurpose , and jws fields.

If strong anti-correlation properties are required, it is advised that signature values and metadata are regenerated each time using technologies like third-party pairwise signatures, zero-knowledge proofs, or group signatures.

Even when using anti-correlation signatures, information might still be contained in a verifiable credential that defeats the anti-correlation properties of the cryptography used.

7.5 Long-Lived Identifier-Based Correlation

Verifiable credentials might contain long-lived identifiers that could be used to correlate individuals. These types of identifiers include subject identifiers, email addresses, government-issued identifiers, organization-issued identifiers, addresses, healthcare vitals, verifiable credential -specific JSON-LD contexts, and many other sorts of long-lived identifiers.

Organizations providing software to holders should strive to identify fields in verifiable credentials containing information that could be used to correlate individuals and warn holders when this information is shared.

7.6 Device Fingerprinting

There are mechanisms external to verifiable credentials that are used to track and correlate individuals on the Internet and the Web. Some of these mechanisms include Internet protocol (IP) address tracking, web browser fingerprinting, evercookies, advertising network trackers, mobile network position information, and in-application Global Positioning System (GPS) APIs. Using verifiable credentials cannot prevent the use of these other tracking technologies. Also, when these technologies are used in conjunction with verifiable credentials , new correlatable information could be discovered. For example, a birthday coupled with a GPS position can be used to strongly correlate an individual across multiple websites.

It is recommended that privacy-respecting systems prevent the use of these other tracking technologies when verifiable credentials are being used. In some cases, tracking technologies might need to be disabled on devices that transmit verifiable credentials on behalf of a holder .

7.7 Favor Abstract Claims

To enable recipients of verifiable credentials to use them in a variety of circumstances without revealing more PII than necessary for transactions, issuers should consider limiting the information published in a credential to a minimal set needed for the expected purposes. One way to avoid placing PII in a credential is to use an abstract property that meets the needs of verifiers without providing specific information about a subject .

For example, this document uses the ageOver property instead of a specific birthdate, which constitutes much stronger PII. If retailers in a specific market commonly require purchasers to be older than a certain age, an issuer trusted in that market might choose to offer a verifiable credential claiming that subjects have met that requirement instead of offering verifiable credentials containing claims about specific birthdates. This enables individual customers to make purchases without revealing specific PII.

7.8 The Principle of Data Minimization

Privacy violations occur when information divulged in one context leaks into another. Accepted best practice for preventing such violations is to limit the information requested, and received, to the absolute minimum necessary. This data minimization approach is required by regulation in multiple jurisdictions, including the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the European Union.

With verifiable credentials , data minimization for issuers means limiting the content of a verifiable credential to the minimum required by potential verifiers for expected use. For verifiers , data minimization means limiting the scope of the information requested or required for accessing services.

For example, a driver's license containing a driver's ID number, height, weight, birthday, and home address is a credential containing more information than is necessary to establish that the person is above a certain age.

It is considered best practice for issuers to atomize information or use a signature scheme that allows for selective disclosure. For example, an issuer of driver's licenses could issue a verifiable credential containing every attribute that appears on a driver's license, as well as a set of verifiable credentials where every verifiable credential contains only a single attribute, such as a person's birthday. It could also issue more abstract verifiable credentials (for example, a verifiable credential containing only an ageOver attribute). One possible adaptation would be for issuers to provide secure HTTP endpoints for retrieving single-use bearer credentials that promote the pseudonymous usage of verifiable credentials. Implementers that find this impractical or unsafe should consider using selective disclosure schemes that eliminate dependence on issuers at proving time and reduce temporal correlation risk from issuers.

Verifiers are urged to only request information that is absolutely necessary for a specific transaction to occur. This is important for at least two reasons. It:

  • Reduces the liability on the verifier for handling highly sensitive information that it does not need.
  • Enhances the privacy of the individual by only asking for information required for a specific transaction.

While it is possible to practice the principle of minimum disclosure, it might be impossible to avoid the strong identification of an individual for specific use cases during a single session or over multiple sessions. The authors of this document cannot overstate how difficult it is to meet this principle in real-world scenarios.

7.9 Bearer Credentials

A bearer credential is a privacy-enhancing piece of information, such as a concert ticket, which entitles the holder of the bearer credential to a specific resource without divulging sensitive information about the holder . Bearer credentials are often used in low-risk use cases where the sharing of the bearer credential is not a concern or would not result in large economic or reputational losses.

Verifiable credentials that are bearer credentials are made possible by not specifying the subject identifier, expressed using the id property , which is nested in the credentialSubject property . For example, the following verifiable credential is a bearer credential :
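The sketch below uses hypothetical values; the key point is that credentialSubject contains no id property.

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.edu/credentials/temporary/28934792387492384",
  "type": ["VerifiableCredential", "UniversityDegreeCredential"],
  "issuer": "https://example.edu/issuers/14",
  "issuanceDate": "2017-10-22T12:23:48Z",
  "credentialSubject": {
    "degree": {
      "type": "BachelorDegree",
      "name": "Bachelor of Science and Arts"
    }
  }
}
```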

While bearer credentials can be privacy-enhancing, they must be carefully crafted so as not to accidentally divulge more information than the holder of the bearer credential expects. For example, repeated use of the same bearer credential across multiple sites enables these sites to potentially collude to unduly track or correlate the holder. Likewise, information that might seem non-identifying, such as a birthdate and postal code, can be used to statistically identify an individual when used together in the same bearer credential or session.

To preserve their privacy-enhancing benefits, issuers of bearer credentials should ensure that the bearer credentials:

  • Are single-use, where possible.
  • Do not contain personally identifying information.
  • Are not unduly correlatable.

Holders should be warned by their software if bearer credentials containing sensitive information are issued or requested, or if there is a correlation risk when combining two or more bearer credentials across one or more sessions. While it might be impossible to detect all correlation risks, some might certainly be detectable.

Verifiers should not request bearer credentials that can be used to unduly correlate the holder .

7.10 Validity Checks

When processing verifiable credentials , verifiers are expected to perform many of the checks listed in Appendix A. Validation as well as a variety of specific business process checks. Validity checks might include checking:

  • The professional licensure status of the holder .
  • A date of license renewal or revocation.
  • The sub-qualifications of an individual.
  • If a relationship exists between the holder and the entity with whom the holder is attempting to interact.
  • The geolocation information associated with the holder .

The process of performing these checks might result in information leakage that leads to a privacy violation of the holder . For example, a simple operation such as checking a revocation list can notify the issuer that a specific business is likely interacting with the holder . This could enable issuers to collude and correlate individuals without their knowledge.

Issuers are urged to not use mechanisms, such as credential revocation lists that are unique per credential , during the verification process that could lead to privacy violations. Organizations providing software to holders should warn when credentials include information that could lead to privacy violations during the verification process. Verifiers should consider rejecting credentials that produce privacy violations or that enable bad privacy practices.

7.11 Storage Providers and Data Mining

When a holder receives a verifiable credential from an issuer , the verifiable credential needs to be stored somewhere (for example, in a credential repository). Holders are warned that the information in a verifiable credential is sensitive in nature and highly individualized, making it a high value target for data mining. Services that advertise free storage of verifiable credentials might in fact be mining personal data and selling it to organizations wanting to build individualized profiles on people and organizations.

Holders need to be aware of the terms of service for their credential repository, specifically the correlation and data mining protections in place for those who store their verifiable credentials with the service provider.

Some effective mitigations for data mining and profiling include using:

  • Service providers that do not sell your information to third parties.
  • Software that encrypts verifiable credentials such that a service provider cannot view the contents of the credential .
  • Software that stores verifiable credentials locally on a device that you control and that does not upload or analyze your information beyond your expectations.

7.12 Aggregation of Credentials

Holding two pieces of information about the same subject almost always reveals more about the subject than just the sum of the two pieces, even when the information is delivered through different channels. The aggregation of verifiable credentials is a privacy risk and all participants in the ecosystem need to be aware of the risks of data aggregation.

For example, if two bearer credentials , one for an email address and then one stating the holder is over the age of 21, are provided across multiple sessions, the verifier of the information now has a unique identifier as well as age-related information for that individual. It is now easy to create and build a profile for the holder such that more and more information is leaked over time. Aggregation of credentials can also be performed across multiple sites in collusion with each other, leading to privacy violations.

From a technological perspective, preventing aggregation of information is a very difficult privacy problem to address. While new cryptographic techniques, such as zero-knowledge proofs, are being proposed as solutions to the problem of aggregation and correlation, the existence of long-lived identifiers and browser tracking techniques defeats even the most modern cryptographic techniques.

The solution to the privacy implications of correlation or aggregation tends not to be technological in nature, but policy driven instead. Therefore, if a holder does not want information about them to be aggregated, they must express this in the verifiable presentations they transmit.

7.13 Usage Patterns

Despite the best efforts to assure privacy, actually using verifiable credentials can potentially lead to de-anonymization and a loss of privacy. This correlation can occur when:

  • The same verifiable credential is presented to the same verifier more than once. The verifier could infer that the holder is the same individual.
  • The same verifiable credential is presented to different verifiers , and either those verifiers collude or a third party has access to transaction records from both verifiers . An observant party could infer that the individual presenting the verifiable credential is the same person at both services. That is, the accounts are controlled by the same person.
  • A subject identifier of a credential refers to the same subject across multiple presentations or verifiers . Even when different credentials are presented, if the subject identifier is the same, verifiers (and those with access to verifier logs) could infer that the holder of the credential is the same person.
  • The underlying information in a credential can be used to identify an individual across services. In this case, using information from other sources (including information provided directly by the holder ), verifiers can use information inside the credential to correlate the individual with an existing profile. For example, if a holder presents credentials that include postal code, age, and gender, a verifier can potentially correlate the subject of that credential with an established profile. For more information, see [ DEMOGRAPHICS ].
  • Passing the identifier of a credential to a centralized revocation server. The centralized server can correlate the credential usage across interactions. For example, if a credential is used for proof of age in this manner, the centralized service could know everywhere that credential was presented (all liquor stores, bars, adult stores, lottery purchases, and so on).

In part, it is possible to mitigate this de-anonymization and loss of privacy by:

  • Using a globally-unique identifier as the subject for any given credential and never re-using that credential.
  • If the credential supports revocation, using a globally-distributed service for revocation.
  • Designing revocation APIs that do not depend on submitting the ID of the credential . For example, use a revocation list instead of a query.
  • Avoiding the association of personally identifiable information with any specific long-lived subject identifier.

It is understood that these mitigation techniques are not always practical or even compatible with necessary usage. Sometimes correlation is a requirement.

For example, in some prescription drug monitoring programs, usage monitoring is a requirement. Enforcement entities need to be able to confirm that individuals are not cheating the system to get multiple prescriptions for controlled substances. This statutory or regulatory need to correlate usage overrides individual privacy concerns.

Verifiable credentials will also be used to intentionally correlate individuals across services, for example, when using a common persona to log in to multiple services, so all activity on each of those services is intentionally linked to the same individual. This is not a privacy issue as long as each of those services uses the correlation in the expected manner.

Privacy risks of credential usage occur when unintended or unexpected correlation arises from the presentation of credentials .

7.14 Sharing Information with the Wrong Party

When a holder chooses to share information with a verifier , it might be the case that the verifier is acting in bad faith and requests information that could be used to harm the holder . For example, a verifier might ask for a bank account number, which could then be used with other information to defraud the holder or the bank.

Issuers should strive to tokenize as much information as possible such that if a holder accidentally transmits credentials to the wrong verifier , the situation is not catastrophic.

For example, instead of including a bank account number for the purpose of checking an individual's bank balance, provide a token that enables the verifier to check if the balance is above a certain amount. In this case, the bank could issue a verifiable credential containing a balance checking token to a holder . The holder would then include the verifiable credential in a verifiable presentation and bind the token to a credit checking agency using a digital signature. The verifier could then wrap the verifiable presentation in their digital signature, and hand it back to the issuer to dynamically check the account balance.

Using this approach, even if a holder shares the account balance token with the wrong party, an attacker cannot discover the bank account number, nor the exact value in the account. Moreover, given the validity period of the counter-signature, the attacker does not gain access to the token for more than a few minutes.

7.15 Frequency of Claim Issuance

As detailed in Section 7.13 Usage Patterns, usage patterns can be correlated into certain types of behavior. Part of this correlation is mitigated when a holder uses a verifiable credential without the knowledge of the issuer. Issuers can defeat this protection, however, by making their verifiable credentials short-lived and their renewal automatic.

For example, an ageOver verifiable credential is useful for gaining access to a bar. If an issuer issues such a verifiable credential with a very short expiration date and an automatic renewal mechanism, then the issuer could possibly correlate the behavior of the holder in a way that negatively impacts the holder .

Organizations providing software to holders should warn them if they repeatedly use credentials with short lifespans, which could result in behavior correlation. Issuers should avoid issuing credentials in a way that enables them to correlate usage patterns.

7.16 Prefer Single-Use Credentials

An ideal privacy-respecting system would require only the information necessary for interaction with the verifier to be disclosed by the holder . The verifier would then record that the disclosure requirement was met and forget any sensitive information that was disclosed. In many cases, competing priorities, such as regulatory burden, prevent this ideal system from being employed. In other cases, long-lived identifiers prevent single use. The design of any verifiable credentials ecosystem, however, should strive to be as privacy-respecting as possible by preferring single-use verifiable credentials whenever possible.

Using single-use verifiable credentials provides several benefits. The first benefit is to verifiers who can be sure that the data in a verifiable credential is fresh. The second benefit is to holders , who know that if there are no long-lived identifiers in the verifiable credential , the verifiable credential itself cannot be used to track or correlate them online. Finally, there is nothing for attackers to steal, making the entire ecosystem safer to operate within.

7.17 Private Browsing

In an ideal private browsing scenario, no PII will be revealed. Because many credentials include PII, organizations providing software to holders should warn them about the possibility of revealing this information if they wish to use credentials and presentations while in private browsing mode. As each browser vendor handles private browsing differently, and some browsers might not have this feature at all, it is important for implementers to be aware of these differences and implement solutions accordingly.

7.18 Issuer Cooperation Impacts on Privacy

It cannot be overstated that verifiable credentials rely on a high degree of trust in issuers . The degree to which a holder might take advantage of possible privacy protections often depends strongly on the support an issuer provides for such features. In many cases, privacy protections which make use of zero-knowledge proofs, data minimization techniques, bearer credentials, abstract claims, and protections against signature-based correlation, require the issuer to actively support such capabilities and incorporate them into the verifiable credentials they issue.

It should also be noted that, in addition to a reliance on issuer participation to provide verifiable credential capabilities that help preserve holder and subject privacy, holders rely on issuers to not deliberately subvert privacy protections. For example, an issuer might sign verifiable credentials using a signature scheme that protects against signature-based correlation. This would protect the holder from being correlated by the signature value as it is shared among verifiers . However, if the issuer creates a unique key for each issued credential , it might be possible for the issuer to track presentations of the credential , regardless of a verifier 's inability to do so.

8. Security Considerations

There are a number of security considerations that issuers , holders , and verifiers should be aware of when processing data described by this specification. Ignoring or not understanding the implications of this section can result in security vulnerabilities.

While this section attempts to highlight a broad set of security considerations, it is not a complete list. Implementers are urged to seek the advice of security and cryptography professionals when implementing mission critical systems using the technology outlined in this specification.

8.1 Cryptography Suites and Libraries

Some aspects of the data model described in this specification can be protected through the use of cryptography. It is important for implementers to understand the cryptography suites and libraries used to create and process credentials and presentations . Implementing and auditing cryptography systems generally requires substantial experience. Effective red teaming can also help remove bias from security reviews.

Cryptography suites and libraries have a shelf life and eventually fall to new attacks and technology advances. Production quality systems need to take this into account and ensure mechanisms exist to easily and proactively upgrade expired or broken cryptography suites and libraries, and to invalidate and replace existing credentials . Regular monitoring is important to ensure the long term viability of systems processing credentials .

8.2 Content Integrity Protection

Verifiable credentials often contain URLs to data that resides outside of the verifiable credential itself. Linked content that exists outside a verifiable credential, such as images, JSON-LD Contexts, and other machine-readable data, is often not protected against tampering because the data resides outside of the protection of the proof on the verifiable credential. For example, the following links are not content-integrity protected but probably should be:
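In the sketch below, the image URL and both context URLs are plain https links with no integrity information; all values are hypothetical and the proof is omitted for brevity.

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.edu/credentials/3732",
  "type": ["VerifiableCredential", "UniversityDegreeCredential"],
  "issuer": "https://example.edu/issuers/14",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "image": "https://example.edu/images/58473",
    "degree": {
      "type": "BachelorDegree",
      "name": "Bachelor of Science and Arts"
    }
  }
}
```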

While this specification does not recommend any specific content integrity protection, document authors who want to ensure links to content are integrity protected are advised to use URL schemes that enforce content integrity. Two such schemes are the [ HASHLINK ] specification and [ IPFS ]. The example below transforms the previous example and adds content integrity protection to the JSON-LD Contexts using the [ HASHLINK ] specification, and content integrity protection to the image by using an [ IPFS ] link.
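A sketch of the same credential with integrity-protected links follows; the hl query values and the IPFS content identifier are placeholders, not real hashes.

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1?hl=zQmExampleBaseContextDigest",
    "https://www.w3.org/2018/credentials/examples/v1?hl=zQmExampleExamplesContextDigest"
  ],
  "id": "http://example.edu/credentials/3732",
  "type": ["VerifiableCredential", "UniversityDegreeCredential"],
  "issuer": "https://example.edu/issuers/14",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "image": "ipfs://QmExampleContentIdentifier000000000000000000/images/58473",
    "degree": {
      "type": "BachelorDegree",
      "name": "Bachelor of Science and Arts"
    }
  }
}
```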

It is debatable whether the JSON-LD Contexts above need protection because production implementations are expected to ship with static copies of important JSON-LD Contexts.

While the example above is one way to achieve content integrity protection, there are other solutions that might be better suited for certain applications. Implementers are urged to understand how links to external machine-readable content that are not content-integrity protected could result in successful attacks against their applications.

8.3 Unsigned Claims

This specification allows credentials to be produced that do not contain signatures or proofs of any kind. These types of credentials are often useful for intermediate storage, or self-asserted information, which is analogous to filling out a form on a web page. Implementers should be aware that these types of credentials are not verifiable because the authorship either is not known or cannot be trusted.

8.4 Token Binding

A verifier might need to ensure it is the intended recipient of a verifiable presentation and not the target of a man-in-the-middle attack . Approaches such as token binding [ RFC8471 ], which ties the request for a verifiable presentation to the response, can secure the protocol. Any unsecured protocol is susceptible to man-in-the-middle attacks.

8.5 Bundling Dependent Claims

It is considered best practice for issuers to atomize information in a credential , or use a signature scheme that allows for selective disclosure. In the case of atomization, if it is not done securely by the issuer , the holder might bundle together different credentials in a way that was not intended by the issuer .

For example, a university might issue two verifiable credentials to a person, each containing two properties, which must be taken together to designate the "role" of that person in a given "department", such as "Staff Member" in the "Department of Computing", or "Post Graduate Student" in the "Department of Economics". If these verifiable credentials are atomized to put only one of these properties into each credential, then the university would issue four credentials to the person, each containing one of the following designations: "Staff Member", "Post Graduate Student", "Department of Computing", and "Department of Economics". The holder might then transfer the "Staff Member" and "Department of Economics" verifiable credentials to a verifier, which together would comprise a false claim.

8.6 Highly Dynamic Information

When verifiable credentials are issued for highly dynamic information, implementers should ensure the expiration times are set appropriately. Expiration periods longer than the timeframe where the verifiable credential is valid might create exploitable security vulnerabilities. Expiration periods shorter than the timeframe where the information expressed by the verifiable credential is valid creates a burden on holders and verifiers . It is therefore important to set validity periods for verifiable credentials that are appropriate to the use case and the expected lifetime for the information contained in the verifiable credential .

8.7 Device Theft and Impersonation

When verifiable credentials are stored on a device and that device is lost or stolen, it might be possible for an attacker to gain access to systems using the victim's verifiable credentials . Ways to mitigate this type of attack include:

  • Enabling password, pin, pattern, or biometric screen unlock protection on the device.
  • Enabling password, biometric, or multi-factor authentication for the credential repository .
  • Enabling password, biometric, or multi-factor authentication when accessing cryptographic keys.
  • Using a separate hardware-based signature device.
  • All or any combination of the above.

9. Accessibility Considerations

There are a number of accessibility considerations implementers should be aware of when processing data described in this specification. As with implementation of any web standard or protocol, ignoring accessibility issues makes this information unusable by a large subset of the population. It is important to follow accessibility guidelines and standards, such as [ WCAG21 ], to ensure that all people, regardless of ability, can make use of this data. This is especially important when establishing systems utilizing cryptography, which have historically created problems for assistive technologies.

This section details the general accessibility considerations to take into account when utilizing this data model.

9.1 Data First Approaches

Many physical credentials in use today, such as government identification cards, have poor accessibility characteristics, including, but not limited to, small print, reliance on small and high-resolution images, and no affordances for people with vision impairments.

When utilizing this data model to create verifiable credentials , it is suggested that data model designers use a data first approach. For example, given the choice of using data or a graphical image to depict a credential , designers should express every element of the image, such as the name of an institution or the professional credential , in a machine-readable way instead of relying on a viewer's interpretation of the image to convey this information. Using a data first approach is preferred because it provides the foundational elements of building different interfaces for people with varying abilities.

10. Internationalization Considerations

Implementers are advised to be aware of a number of internationalization considerations when publishing data described in this specification. As with any web standards or protocols implementation, ignoring internationalization makes it difficult for data to be produced and consumed across a disparate set of languages and societies, which limits the applicability of the specification and significantly diminishes its value as a standard.

Implementers are strongly advised to read the Strings on the Web: Language and Direction Metadata document [ STRING-META ], published by the W3C Internationalization Activity, which elaborates on the need to provide reliable metadata about text to support internationalization. For the latest information on internationalization considerations, implementers are also urged to read the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.

This section outlines general internationalization considerations to take into account when utilizing this data model and is intended to highlight specific parts of the Strings on the Web: Language and Direction Metadata document [ STRING-META ] that implementers might be interested in reading.

10.1 Language and Base Direction

Data publishers are strongly encouraged to read the section on Cross-Syntax Expression in the Strings on the Web: Language and Direction Metadata document [ STRING-META ] to ensure that the expression of language and base direction information is possible across multiple expression syntaxes, such as [ JSON-LD ], [ JSON ], and CBOR [ RFC7049 ].

The general design pattern is to use the following markup template when expressing a text string that is tagged with a language and, optionally, a specific base direction.
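A sketch of such a template follows; myProperty is a placeholder for whatever property carries the string, and dir may be omitted when no base direction is needed.

```json
{
  "myProperty": {
    "value": "The string value",
    "lang": "LANGUAGE_TAG",
    "dir": "BASE_DIRECTION"
  }
}
```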

Using the design pattern above, the following example expresses the title of a book in the English language without specifying a text direction.
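For instance (the book title shown here is illustrative):

```json
{
  "title": {
    "value": "HTML and CSS: Designing and Creating Websites",
    "lang": "en"
  }
}
```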

The next example uses a similar title expressed in the Arabic language with a base direction of right-to-left.
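An illustrative sketch (the Arabic string is a sample value):

```json
{
  "title": {
    "value": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "lang": "ar",
    "dir": "rtl"
  }
}
```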

The text above would most likely be rendered incorrectly as left-to-right without the explicit expression of language and direction because many systems use the first character of a text string to determine text direction.

Implementers utilizing JSON-LD are strongly urged to extend the JSON-LD Context defining the internationalized property and use the Scoped Context feature of JSON-LD to alias the @value , @language , and @direction keywords to value , lang , and dir , respectively. An example of a JSON-LD Context snippet doing this is shown below.
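A sketch of such a context snippet, using a hypothetical vocabulary URL and a hypothetical title property, might look like:

```json
{
  "@context": {
    "@version": 1.1,
    "title": {
      "@id": "https://example.org/vocab#title",
      "@context": {
        "value": "@value",
        "lang": "@language",
        "dir": "@direction"
      }
    }
  }
}
```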

10.2 Complex Language Markup

When multiple languages, base directions, and annotations are used in a single natural language string, more complex mechanisms are typically required. It is possible to use markup languages, such as HTML, to encode text with multiple languages and base directions. It is also possible to use the rdf:HTML datatype to encode such values accurately in JSON-LD.

Despite the ability to encode information as HTML, implementers are strongly discouraged from doing this because it:

  • Requires some version of an HTML processor, which increases the burden of processing language and base direction information.
  • Increases the security attack surface when utilizing this data model because blindly processing HTML could result in executing a script tag that an attacker injected at some point during the data production process.

If implementers feel they must use HTML, or other markup languages capable of containing executable scripts, to address a specific use case, they are advised to analyze how an attacker would use the markup to mount injection attacks against a consumer of the markup and then deploy mitigations against the identified attacks.

A. Validation

While this specification does not provide conformance criteria for the process of the validation of verifiable credentials or verifiable presentations , readers might be curious about how the information in this data model is expected to be utilized by verifiers during the process of validation . This section captures a selection of conversations held by the Working Group related to the expected usage of the data fields in this specification by verifiers .

A.1 Credential Subject

In the verifiable credentials presented by a holder , the value associated with the id property for each credentialSubject is expected to identify a subject to the verifier . If the holder is also the subject , then the verifier could authenticate the holder if they have public key metadata related to the holder . The verifier could then authenticate the holder using a signature generated by the holder contained in the verifiable presentation . The id property is optional. Verifiers could use other properties in a verifiable credential to uniquely identify a subject .

For information on how authentication and WebAuthn might work with verifiable credentials , see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.

A.2 Issuer

The value associated with the issuer property is expected to identify an issuer that is known to and trusted by the verifier.

Relevant metadata about the issuer property is expected to be available to the verifier . For example, an issuer can publish information containing the public keys it uses to digitally sign verifiable credentials that it issued. This metadata is relevant when checking the proofs on the verifiable credentials .

A.3 Issuance Date

The issuanceDate is expected to be within an expected range for the verifier . For example, a verifier can check that the issuance date of a verifiable credential is not in the future.

A.4 Proofs (Signatures)

The cryptographic mechanism used to prove that the information in a verifiable credential or verifiable presentation was not tampered with is called a proof . There are many types of cryptographic proofs including, but not limited to, digital signatures, zero-knowledge proofs, Proofs of Work, and Proofs of Stake. In general, when verifying proofs, implementations are expected to ensure:

  • The proof is available in the form of a known proof suite.
  • All required proof suite properties are present.
  • The proof suite verification algorithm, when applied to the data, results in an acceptable proof.

Some proofs are digital signatures. In general, when verifying digital signatures, implementations are expected to ensure:

  • Acceptably recent metadata regarding the public key associated with the signature is available. For example, the metadata might include properties related to expiration, key owner, or key purpose.
  • The key is not suspended, revoked, or expired.
  • The cryptographic signature is expected to verify.
  • If the cryptographic suite expects a proofPurpose property , it is expected to exist and be a valid value, such as assertionMethod .

The digital signature provides a number of protections, other than tamper resistance, which are not immediately obvious. For example, a Linked Data Signature created property establishes a date and time before which the credential should not be considered verified . The verificationMethod property specifies, for example, the public key that can be used to verify the digital signature. Dereferencing a public key URL reveals information about the controller of the key, which can be checked against the issuer of the credential . The proofPurpose property clearly expresses the purpose for the proof and ensures this information is protected by the signature. A proof is typically attached to a verifiable presentation for authentication purposes and to a verifiable credential as a method of assertion.

A.5 Expiration

The expirationDate is expected to be within an expected range for the verifier . For example, a verifier can check that the expiration date of a verifiable credential is not in the past.

A.6 Status

If the credentialStatus property is available, the status of a verifiable credential is expected to be evaluated by the verifier according to the credentialStatus type definition for the verifiable credential and the verifier's own status evaluation criteria. For example, a verifier can ensure the status of the verifiable credential is not "withdrawn for cause by the issuer".

A.7 Fitness for Purpose

Fitness for purpose is about whether the custom properties in the verifiable credential are appropriate for the verifier's purpose. For example, if a verifier needs to determine whether a subject is older than 21 years of age, they might rely on a specific birthdate property , or on more abstract properties , such as ageOver .

The issuer is trusted by the verifier to make the claims at hand. For example, a franchised fast food restaurant location trusts the discount coupon claims made by the corporate headquarters of the franchise. Policy information expressed by the issuer in the verifiable credential should be respected by holders and verifiers unless they accept the liability of ignoring the policy.

B. Contexts, Types, and Credential Schemas

B.1 Base Context

The base context, located at https://www.w3.org/2018/credentials/v1 with a SHA-256 digest of ab4ddd9a531758807a79a5b450510d61ae8d147eab966cc9a200c07095b0cdcc, can be retrieved once, verified against this digest, and used to implement a local cached copy. For convenience, the base context is also provided in this section.
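One non-normative way to act on this is to fetch the context once, verify it against the digest above, and then serve the bytes from a local cache. The sketch below uses only the Python standard library; whether exactly these bytes are returned depends on the server's content negotiation, so it is an illustration rather than a guaranteed recipe.

```python
import hashlib
import urllib.request

BASE_CONTEXT_URL = "https://www.w3.org/2018/credentials/v1"
EXPECTED_SHA256 = "ab4ddd9a531758807a79a5b450510d61ae8d147eab966cc9a200c07095b0cdcc"

def fetch_and_pin_base_context() -> bytes:
    """Download the base context and verify it against the published digest."""
    request = urllib.request.Request(BASE_CONTEXT_URL, headers={"Accept": "application/ld+json"})
    with urllib.request.urlopen(request) as response:
        body = response.read()
    digest = hashlib.sha256(body).hexdigest()
    if digest != EXPECTED_SHA256:
        raise ValueError(f"Unexpected base context digest: {digest}")
    return body  # safe to write to a local cache and reuse offline
```

Implementations would normally do this once at build or deployment time and bundle the verified copy, rather than fetching the context at runtime.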

B.2 Differences between Contexts, Types, and CredentialSchemas

The verifiable credential and verifiable presentation data models leverage a variety of underlying technologies including [ JSON-LD ] and [ JSON-SCHEMA-2018 ]. This section will provide a comparison of the @context , type , and credentialSchema properties, and cover some of the more specific use cases where it is possible to use these features of the data model.

The type property is used to uniquely identify the type of the verifiable credential in which it appears, i.e., to indicate which set of claims the verifiable credential contains. This property, and the value VerifiableCredential within the set of its values, are mandatory. While it is good practice to include one additional value depicting the unique subtype of the verifiable credential, it is permitted to either omit or include additional type values in the array. However, because many verifiers will request a verifiable credential of a specific subtype, omitting the subtype value could make it more difficult for verifiers to inform the holder which verifiable credential they require. When a verifiable credential has multiple subtypes, listing all of them in the type property is sensible. While the semantics are the same in both a [JSON] and [JSON-LD] representation, the usage of the type property in a [JSON-LD] representation of a verifiable credential is able to enforce the semantics of the verifiable credential better than a [JSON] representation of the same credential, because the machine is able to check the semantics. With [JSON-LD], the technology is not only describing the categorization of the set of claims, but also conveying the structure and semantics of the sub-graph of the properties in the graph. In [JSON-LD], this represents the type of the node in the graph, which is why some [JSON-LD] representations of a verifiable credential will use the type property on many objects in the verifiable credential.

The primary purpose of the @context property, from a [ JSON-LD ] perspective, is to convey the meaning of the data and term definitions of the data in a verifiable credential , in a machine readable way. When encoding a pure [ JSON ] representation, the @context property remains mandatory and provides some basic support for global semantics. The @context property is used to map the globally unique URIs for properties in verifiable credentials and verifiable presentations into short-form alias names, making both the [ JSON ] and [ JSON-LD ] representations more human-friendly to read. From a [ JSON-LD ] perspective, this mapping also allows the data in a credential to be modeled in a network of machine-readable data, by enhancing how the data in the verifiable credential or verifiable presentation relates to a larger machine-readable data graph. This is useful for telling machines how to relate the meaning of data to other data in an ecosystem where parties are unable to coordinate. This property, with the first value in the set being https://www.w3.org/2018/credentials/v1 , is mandatory.
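Putting the two previous paragraphs together, a conforming credential starts from a skeleton like the one below. The first @context value and the VerifiableCredential type are the mandatory parts described above; the second entry in each array is illustrative.

```python
# Non-normative skeleton showing the mandatory pieces of @context and type.
credential_skeleton = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",            # mandatory first value
        "https://www.w3.org/2018/credentials/examples/v1",   # illustrative extension context
    ],
    "type": [
        "VerifiableCredential",         # mandatory value
        "UniversityDegreeCredential",   # recommended, more specific subtype
    ],
}
```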

Since the @context property is used to map data to a graph data model, and the type property in [ JSON-LD ] is used to describe nodes within the graph, the type property becomes even more important when using the two properties in combination. For example, if the type property is not included within the resolved @context resource using [ JSON-LD ], it could lead to claims being dropped and/or their integrity no longer being protected during production and consumption of the verifiable credential . Alternatively, it could lead to errors being raised during production or consumption of a verifiable credential . This will depend on the design choices of the implementation and both paths are used in implementations today, so it's important to pay attention to these properties when using a [ JSON-LD ] representation of a verifiable credential or verifiable presentation .

The primary purpose of the credentialSchema property is to define the structure of the verifiable credential, and the datatypes for the values of each property that appears. A credentialSchema is useful for defining the contents and structure of a set of claims in a verifiable credential, whereas [JSON-LD] and a @context in a verifiable credential are best used for conveying the semantics and term definitions of the data; while they can also shape the structure of the verifiable credential, that is not their primary role.

While it is possible to use some [JSON-LD] features to allude to the contents of the verifiable credential, it is generally not suggested to use @context to constrain the data types of the data model. For example, "@type": "@json" is useful for leaving the semantics open-ended and not strictly defined. This can be dangerous if the implementer is looking to constrain the data type of the claims in the credential, and in that situation the pattern is expected not to be used.

When the credentialSchema and @context properties are used in combination, both producers and consumers can be more confident about the expected contents and data types of the verifiable credential and verifiable presentation .
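A non-normative sketch of that combination is shown below; the schema URL and the schema language identifier are illustrative, and the point is only that credentialSchema names a structural definition while @context continues to carry the term definitions.

```python
# Non-normative: a credential that declares both semantics (@context) and
# structure (credentialSchema); the schema location below is a placeholder.
credential_with_schema = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1",
    ],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "credentialSchema": {
        "id": "https://example.org/examples/degree.json",  # where the structural definition lives
        "type": "JsonSchemaValidator2018",                  # illustrative schema language identifier
    },
}
```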

C. Subject-Holder Relationships

This section describes possible relationships between a subject and a holder and how the Verifiable Credentials Data Model expresses these relationships. The following diagram illustrates these relationships, with the subsequent sections describing how each of these relationships are handled in the data model.

C.1 Subject is the Holder

The most common relationship is when a subject is the holder . In this case, a verifier can easily deduce that a subject is the holder if the verifiable presentation is digitally signed by the holder and all contained verifiable credentials are about a subject that can be identified to be the same as the holder .

If only the credentialSubject should be allowed to insert a verifiable credential into a verifiable presentation, the issuer can add the nonTransferable property to the verifiable credential, as described below.

C.1.1 nonTransferable Property

The nonTransferable property indicates that a verifiable credential must only be encapsulated into a verifiable presentation whose proof was issued by the credentialSubject . A verifiable presentation that contains a verifiable credential containing the nonTransferable property , whose proof creator is not the credentialSubject , is invalid.
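A verifier-side sketch of that rule follows. Extracting the creator of a presentation's proof is suite-specific, so presentation_proof_creator is assumed to have been produced by an earlier, out-of-scope verification step.

```python
def non_transferable_ok(credential: dict, presentation_proof_creator: str) -> bool:
    """Enforce the nonTransferable rule described above (non-normative sketch)."""
    if not credential.get("nonTransferable", False):
        return True  # the issuer declared no restriction
    subject = credential.get("credentialSubject", {})
    subject_id = subject.get("id") if isinstance(subject, dict) else None
    # The presentation is valid only if its proof was created by the credentialSubject.
    return subject_id is not None and subject_id == presentation_proof_creator
```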

C.2 Credential Uniquely Identifies a Subject

In this case, the credentialSubject property might contain multiple properties, each providing an aspect of a description of the subject, which together unambiguously identify the subject. Some use cases might not require the holder to be identified at all, such as checking to see if a doctor (the subject) is board-certified. Other use cases might require the verifier to use out-of-band knowledge to determine the relationship between the subject and the holder.
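The normative example is not reproduced here, but a credentialSubject of the kind this section describes might combine properties such as the following; every value is fictional.

```python
# Fictional, non-normative: a subject identified by a combination of
# properties rather than by a single identifier.
uniquely_identifying_claims = {
    "credentialSubject": {
        "name": "Pat Example",
        "address": "123 Main Street, Anytown",
        "birthDate": "1985-07-04",
    }
}
```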

The example above uniquely identifies the subject using the name, address, and birthdate of the individual.

C.3 Subject Passes the Verifiable Credential to a Holder

Usually verifiable credentials are presented to verifiers by the subject . However, in some cases, the subject might need to pass the whole or part of a verifiable credential to another holder . For example, if a patient (the subject ) is too ill to take a prescription (the verifiable credential ) to the pharmacist (the verifier ), a friend might take the prescription in to pick up the medication.

The data model allows for this by letting the subject issue a new verifiable credential and give it to the new holder , who can then present both verifiable credentials to the verifier . However, the content of this second verifiable credential is likely to be application-specific, so this specification cannot standardize the contents of this second verifiable credential . Nevertheless, a non-normative example is provided in Appendix C.5 Subject Passes a Verifiable Credential to Someone Else .

C.4 Holder Acts on Behalf of the Subject

The Verifiable Credentials Data Model supports the holder acting on behalf of the subject in at least the following ways:

  • The issuer can include the relationship between the holder and the subject in the credentialSubject property.
  • The issuer can express the relationship between the holder and the subject by issuing a new verifiable credential, which the holder utilizes.
  • The subject can express their relationship with the holder by issuing a new verifiable credential, which the holder utilizes.

The mechanisms listed above describe the relationship between the holder and the subject and help the verifier decide whether the relationship is sufficiently expressed for a given use case.

The additional mechanisms the issuer or the verifier uses to verify the relationship between the subject and the holder are outside the scope of this specification.
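The first of the mechanisms listed above might look like the following fictional fragment, in which the issuer records that the holder is the subject's parent; the property names are illustrative and not defined by this specification.

```python
# Fictional, non-normative: the issuer embeds the holder/subject relationship
# directly in credentialSubject (the first mechanism listed above).
child_credential_fragment = {
    "credentialSubject": {
        "id": "did:example:child",          # the child (subject)
        "parent": {
            "id": "did:example:parent",     # the parent (holder)
            "type": "Mother",
        },
    }
}
```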

In the example above, the issuer expresses the relationship between the child and the parent such that a verifier would most likely accept the credential if it is provided by the child or the parent.

In the example above, the issuer expresses the relationship between the child and the parent in a separate credential such that a verifier would most likely accept any of the child's credentials if they are provided by the child or if the credential above is provided with any of the child's credentials .

In the example above, the child expresses the relationship between the child and the parent in a separate credential such that a verifier would most likely accept any of the child's credentials if the credential above is provided.

Similarly, the strategies described in the examples above can be used for many other types of use cases, including power of attorney, pet ownership, and patient prescription pickup.

C.5 Subject Passes a Verifiable Credential to Someone Else

When a subject passes a verifiable credential to another holder, the subject might issue a new verifiable credential to the holder in which:

  • The issuer is the subject.
  • The subject is the holder to whom the verifiable credential is being passed.
  • The claim contains the properties being passed on.

The holder can now create a verifiable presentation containing these two verifiable credentials so that the verifier can verify that the subject gave the original verifiable credential to the holder .
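Concretely, and non-normatively, the second verifiable credential described by this list might look like the fragment below; the identifiers and the prescription payload are placeholders.

```python
# Fictional, non-normative: the patient (original subject) issues a new
# credential to the friend, copying across the prescription claims.
passed_on_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:patient",        # the issuer is the original subject
    "issuanceDate": "2023-02-25T12:00:00Z",
    "credentialSubject": {
        "id": "did:example:friend",         # the subject is the new holder
        "prescription": {"drug": "placeholder-name", "refills": 1},  # claims being passed on
    },
}
```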

In the above example, a patient (the original subject ) passed a prescription (the original verifiable credential ) to a friend, and issued a new verifiable credential to the friend, in which the friend is the subject , the subject of the original verifiable credential is the issuer , and the credential is a copy of the original prescription.

C.6 Issuer Authorizes Holder

When an issuer wants to authorize a holder to possess a credential that describes a subject who is not the holder , and the holder has no known relationship with the subject , then the issuer might insert the relationship of the holder to itself into the subject's credential .

Verifiable credentials are not an authorization framework and therefore delegation is outside the scope of this specification. However, it is understood that verifiable credentials are likely to be used to build authorization and delegation systems. The following is one approach that might be appropriate for some use cases.

C.7 Holder Acts on Behalf of the Verifier, or has no Relationship with the Subject, Issuer, or Verifier

The Verifiable Credentials Data Model currently does not support either of these scenarios. It is for further study how they might be supported.

D. IANA Considerations

This section will be submitted to the Internet Engineering Steering Group (IESG) for review, approval, and registration with IANA in the "JSON Web Token Claims Registry".

  • Claim Name: "vc"
  • Claim Description: Verifiable Credential
  • Change Controller: W3C
  • Specification Document(s): Section 6.3.1.2: JSON Web Token Extensions of Verifiable Credentials Data Model 1.0
  • Claim Name: "vp"
  • Claim Description: Verifiable Presentation
  • Change Controller: W3C
  • Specification Document(s): Section 6.3.1.2: JSON Web Token Extensions of Verifiable Credentials Data Model 1.0
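For orientation, these claims appear inside a JWT payload roughly as sketched below. This follows the JWT encoding rules referenced above, but all values are placeholders and the embedded credential is abbreviated.

```python
# Non-normative sketch of a JWT payload carrying a verifiable credential in
# the "vc" claim; registered JWT claims mirror credential properties.
jwt_payload_with_vc = {
    "iss": "https://example.edu/issuers/565049",          # the credential's issuer
    "sub": "did:example:subject",                         # credentialSubject.id
    "nbf": 1262373804,                                    # issuanceDate as a NumericDate
    "jti": "http://example.edu/credentials/3732",         # the credential's id
    "vc": {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "credentialSubject": {"degree": {"type": "BachelorDegree"}},
    },
}
```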

E. Revision History

This section contains the substantive changes that have been made since the publication of v1.0 of this specification as a W3C Recommendation.

Changes since the Recommendation :

  • Add this revision history section.
  • Update previous normative references that pointed to RFC3339 for datetime details to now normatively reference the datetime details described in XMLSCHEMA11-2 which more accurately reflects the usage in examples and libraries.
  • Loosen the requirement on the id property of the credentialStatus and refreshService sections of the data model from requiring URLs to allowing URIs.
  • Loosen normative statements in the zero-knowledge proofs section to enable compliance of new zero-knowledge proof schemes, such as BBS+, that have been created since the v1.0 specification was published as a Recommendation.
  • Update all references to point to the latest versions of the referenced specifications. Fix broken links to papers that had become unavailable by pointing to updated locations where the papers are available.
  • Increase accessibility of SVG diagrams.
  • Fix editorial bugs in a few examples related to issuer , issuanceDate , credentialStatus , dates, dead links, and minor syntax errors.
  • Move acknowledgements from Status of the Document section into the Acknowledgements appendix.

F. Acknowledgements

The Working Group thanks the following individuals not only for their contributions toward the content of this document, but also for yeoman's work in this standards community that drove changes, discussion, and consensus among a sea of varied opinions: Matt Stone, Gregg Kellogg, Ted Thibodeau Jr, Oliver Terbu, Joe Andrieu, David I. Lehn, Matthew Collier, and Adrian Gropper.

Work on this specification has been supported by the Rebooting the Web of Trust community facilitated by Christopher Allen, Shannon Appelcline, Kiara Robles, Brian Weller, Betty Dhamers, Kaliya Young, Manu Sporny, Drummond Reed, Joe Andrieu, Heather Vescent, Kim Hamilton Duffy, Samantha Chase, and Andrew Hughes. The participants in the Internet Identity Workshop, facilitated by Phil Windley, Kaliya Young, Doc Searls, and Heidi Nobantu Saul, also supported the refinement of this work through numerous working sessions designed to educate about, debate on, and improve this specification.

The Working Group also thanks our Chairs, Dan Burnett, Matt Stone, Brent Zundel, and Wayne Chang, as well as our W3C Staff Contacts, Kazuyuki Ashimura and Ivan Herman, for their expert management and steady guidance of the group through the W3C standardization process.

Portions of the work on this specification have been funded by the United States Department of Homeland Security's Science and Technology Directorate under contract HSHQDC-17-C-00019. The content of this specification does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.

The Working Group would like to thank the following individuals for reviewing and providing feedback on the specification (in alphabetical order):

Christopher Allen, David Ammouial, Joe Andrieu, Bohdan Andriyiv, Ganesh Annan, Kazuyuki Ashimura, Tim Bouma, Pelle Braendgaard, Dan Brickley, Allen Brown, Jeff Burdges, Daniel Burnett, ckennedy422, David Chadwick, Chaoxinhu, Kim (Hamilton) Duffy, Lautaro Dragan, enuoCM, Ken Ebert, Eric Elliott, William Entriken, David Ezell, Nathan George, Reto Gmür, Ryan Grant, glauserr, Adrian Gropper, Joel Gustafson, Amy Guy, Lovesh Harchandani, Daniel Hardman, Dominique Hazael-Massieux, Jonathan Holt, David Hyland-Wood, Iso5786, Renato Iannella, Richard Ishida, Ian Jacobs, Anil John, Tom Jones, Rieks Joosten, Gregg Kellogg, Kevin, Eric Korb, David I. Lehn, Michael Lodder, Dave Longley, Christian Lundkvist, Jim Masloski, Pat McBennett, Adam C. Migus, Liam Missin, Alexander Mühle, Anthony Nadalin, Clare Nelson, Mircea Nistor, Grant Noble, Darrell O'Donnell, Nate Otto, Matt Peterson, Addison Phillips, Eric Prud'hommeaux, Liam Quin, Rajesh Rathnam, Drummond Reed, Yancy Ribbens, Justin Richer, Evstifeev Roman, RorschachRev, Steven Rowat, Pete Rowley, Markus Sabadello, Kristijan Sedlak, Tzviya Seigman, Reza Soltani, Manu Sporny, Orie Steele, Matt Stone, Oliver Terbu, Ted Thibodeau Jr, John Tibbetts, Mike Varley, Richard Varn, Heather Vescent, Christopher Lemmer Webber, Benjamin Young, Kaliya Young, Dmitri Zagidulin, and Brent Zundel.

G. References

G.1 Normative References

G.2 Informative References


Healthy People 2030

Building a healthier future for all

Healthy People 2030 sets data-driven national objectives to improve health and well-being over the next decade.

Healthy People 2030 includes 359 core — or measurable — objectives as well as developmental and research objectives.

Learn more about the types of objectives .

Social Determinants of Health

Social determinants of health have a major impact on people's health and well-being — and they're a key focus of Healthy People 2030.

Leading Health Indicators

Leading Health Indicators (LHIs) are a small subset of high-priority objectives selected to drive action toward improving health and well-being.

Health Disparities Data Feature

Healthy People 2030’s disparities data feature allows you to track changes in disparities to see where we’re improving as a nation — and where we need to increase our efforts.

Evidence-Based Resources

Healthy People 2030 provides hundreds of evidence-based resources to help you address public health priorities.

Healthy People in Action Spotlight

Registration is now open for the next Healthy People 2030 webinar, Air Quality Matters: Improving Health and Lung Function with Healthy People 2030 Objectives.

The Office of Disease Prevention and Health Promotion (ODPHP) is pleased to announce its next Healthy People 2030 webinar: Air Quality Matters: Improving Health and Lung Function with Healthy People 2030 Objectives. This webinar will take place on Wednesday, June 12 from 2:00 to 3:00 pm ET. Continuing Education Credits (CEs) are available.



Attention-Deficit/Hyperactivity Disorder

What is ADHD?

Attention-deficit/hyperactivity disorder (ADHD) is marked by an ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development. People with ADHD experience an ongoing pattern of the following types of symptoms:

  • Inattention means a person may have difficulty staying on task, sustaining focus, and staying organized, and these problems are not due to defiance or lack of comprehension.
  • Hyperactivity means a person may seem to move about constantly, including in situations when it is not appropriate, or excessively fidgets, taps, or talks. In adults, hyperactivity may mean extreme restlessness or talking too much.
  • Impulsivity means a person may act without thinking or have difficulty with self-control. Impulsivity could also include a desire for immediate rewards or the inability to delay gratification. An impulsive person may interrupt others or make important decisions without considering long-term consequences.

What are the signs and symptoms of ADHD?

Some people with ADHD mainly have symptoms of inattention. Others mostly have symptoms of hyperactivity-impulsivity. Some people have both types of symptoms.

Many people experience some inattention, unfocused motor activity, and impulsivity, but for people with ADHD, these behaviors:

  • Are more severe
  • Occur more often
  • Interfere with or reduce the quality of how they function socially, at school, or in a job

Inattention

People with symptoms of inattention may often:

  • Overlook or miss details and make seemingly careless mistakes in schoolwork, at work, or during other activities
  • Have difficulty sustaining attention during play or tasks, such as conversations, lectures, or lengthy reading
  • Not seem to listen when spoken to directly
  • Find it hard to follow through on instructions or finish schoolwork, chores, or duties in the workplace, or may start tasks but lose focus and get easily sidetracked
  • Have difficulty organizing tasks and activities, doing tasks in sequence, keeping materials and belongings in order, managing time, and meeting deadlines
  • Avoid tasks that require sustained mental effort, such as homework, or for teens and older adults, preparing reports, completing forms, or reviewing lengthy papers
  • Lose things necessary for tasks or activities, such as school supplies, pencils, books, tools, wallets, keys, paperwork, eyeglasses, and cell phones
  • Be easily distracted by unrelated thoughts or stimuli
  • Be forgetful in daily activities, such as chores, errands, returning calls, and keeping appointments

Hyperactivity-impulsivity

People with symptoms of hyperactivity-impulsivity may often:

  • Fidget and squirm while seated
  • Leave their seats in situations when staying seated is expected, such as in the classroom or the office
  • Run, dash around, or climb at inappropriate times or, in teens and adults, often feel restless
  • Be unable to play or engage in hobbies quietly
  • Be constantly in motion or on the go, or act as if driven by a motor
  • Talk excessively
  • Answer questions before they are fully asked, finish other people’s sentences, or speak without waiting for a turn in a conversation
  • Have difficulty waiting one’s turn
  • Interrupt or intrude on others, for example in conversations, games, or activities

Primary care providers sometimes diagnose and treat ADHD. They may also refer individuals to a mental health professional, such as a psychiatrist or clinical psychologist, who can do a thorough evaluation and make an ADHD diagnosis.

For a person to receive a diagnosis of ADHD, the symptoms of inattention and/or hyperactivity-impulsivity must be chronic or long-lasting, impair the person’s functioning, and cause the person to fall behind typical development for their age. Stress, sleep disorders, anxiety, depression, and other physical conditions or illnesses can cause similar symptoms to those of ADHD. Therefore, a thorough evaluation is necessary to determine the cause of the symptoms.

Most children with ADHD receive a diagnosis during the elementary school years. For an adolescent or adult to receive a diagnosis of ADHD, the symptoms need to have been present before age 12.

ADHD symptoms can appear as early as between the ages of 3 and 6 and can continue through adolescence and adulthood. Symptoms of ADHD can be mistaken for emotional or disciplinary problems or missed entirely in children who primarily have symptoms of inattention, leading to a delay in diagnosis. Adults with undiagnosed ADHD may have a history of poor academic performance, problems at work, or difficult or failed relationships.

ADHD symptoms can change over time as a person ages. In young children with ADHD, hyperactivity-impulsivity is the most predominant symptom. As a child reaches elementary school, the symptom of inattention may become more prominent and cause the child to struggle academically. In adolescence, hyperactivity seems to lessen and symptoms may more likely include feelings of restlessness or fidgeting, but inattention and impulsivity may remain. Many adolescents with ADHD also struggle with relationships and antisocial behaviors. Inattention, restlessness, and impulsivity tend to persist into adulthood.

What are the risk factors of ADHD?

Researchers are not sure what causes ADHD, although many studies suggest that genes play a large role. Like many other disorders, ADHD probably results from a combination of factors. In addition to genetics, researchers are looking at possible environmental factors that might raise the risk of developing ADHD and are studying how brain injuries, nutrition, and social environments might play a role in ADHD.

ADHD is more common in males than females, and females with ADHD are more likely to primarily have inattention symptoms. People with ADHD often have other conditions, such as learning disabilities, anxiety disorder, conduct disorder, depression, and substance use disorder.

How is ADHD treated?

While there is no cure for ADHD, currently available treatments may reduce symptoms and improve functioning. Treatments include medication, psychotherapy, education or training, or a combination of treatments.

For many people, ADHD medications reduce hyperactivity and impulsivity and improve their ability to focus, work, and learn. Sometimes several different medications or dosages must be tried before finding the right one that works for a particular person. Anyone taking medications must be monitored closely by their prescribing doctor.

Stimulants. The most common type of medication used for treating ADHD is called a “stimulant.” Although it may seem unusual to treat ADHD with a medication that is considered a stimulant, it works by increasing the brain chemicals dopamine and norepinephrine, which play essential roles in thinking and attention.

Under medical supervision, stimulant medications are considered safe. However, like all medications, they can have side effects, especially when misused or taken in excess of the prescribed dose, and require an individual’s health care provider to monitor how they may be reacting to the medication.

Non-stimulants. A few other ADHD medications are non-stimulants. These medications take longer to start working than stimulants, but can also improve focus, attention, and impulsivity in a person with ADHD. Doctors may prescribe a non-stimulant: when a person has bothersome side effects from stimulants, when a stimulant was not effective, or in combination with a stimulant to increase effectiveness.

Although not approved by the U.S. Food and Drug Administration (FDA) specifically for the treatment of ADHD, some antidepressants are used alone or in combination with a stimulant to treat ADHD. Antidepressants may help all of the symptoms of ADHD and can be prescribed if a patient has bothersome side effects from stimulants. Antidepressants can be helpful in combination with stimulants if a patient also has another condition, such as an anxiety disorder, depression, or another mood disorder. Non-stimulant ADHD medications and antidepressants may also have side effects.

Doctors and patients can work together to find the best medication, dose, or medication combination. To find the latest information about medications, talk to a health care provider and visit the FDA website.

Psychotherapy and psychosocial interventions

Several specific psychosocial interventions have been shown to help individuals with ADHD and their families manage symptoms and improve everyday functioning.

For school-age children, frustration, blame, and anger may have built up within a family before a child is diagnosed. Parents and children may need specialized help to overcome negative feelings. Mental health professionals can educate parents about ADHD and how it affects a family. They also will help the child and his or her parents develop new skills, attitudes, and ways of relating to each other.

All types of therapy for children and teens with ADHD require parents to play an active role. Psychotherapy that includes only individual treatment sessions with the child (without parent involvement) is not effective for managing ADHD symptoms and behavior. This type of treatment is more likely to be effective for treating symptoms of anxiety or depression that may occur along with ADHD.

Behavioral therapy is a type of psychotherapy that aims to help a person change their behavior. It might involve practical assistance, such as help organizing tasks or completing schoolwork, or working through emotionally difficult events. Behavioral therapy also teaches a person how to:

  • Monitor their own behavior
  • Give oneself praise or rewards for acting in a desired way, such as controlling anger or thinking before acting

Parents, teachers, and family members also can give feedback on certain behaviors and help establish clear rules, chore lists, and structured routines to help a person control their behavior. Therapists may also teach children social skills, such as how to wait their turn, share toys, ask for help, or respond to teasing. Learning to read facial expressions and the tone of voice in others, and how to respond appropriately can also be part of social skills training.

Cognitive behavioral therapy helps a person learn how to be aware and accepting of one’s own thoughts and feelings to improve focus and concentration. The therapist also encourages the person with ADHD to adjust to the life changes that come with treatment, such as thinking before acting, or resisting the urge to take unnecessary risks.

Family and marital therapy can help family members and spouses find productive ways to handle disruptive behaviors, encourage behavior changes, and improve interactions with the person with ADHD.

Parenting skills training (behavioral parent management training) teaches parents skills for encouraging and rewarding positive behaviors in their children. Parents are taught to use a system of rewards and consequences to change a child’s behavior, to give immediate and positive feedback for behaviors they want to encourage, and to ignore or redirect behaviors they want to discourage.

Specific behavioral classroom management interventions and/or academic accommodations for children and teens have been shown to be effective for managing symptoms and improving functioning at school and with peers. Interventions may include behavior management plans or teaching organizational or study skills. Accommodations may include preferential seating in the classroom, reduced classwork load, or extended time on tests and exams. The school may provide accommodations through what is called a 504 Plan or, for children who qualify for special education services, an Individualized Education Plan (IEP). 

To learn more about the Individuals with Disabilities Education Act (IDEA), visit the U.S. Department of Education’s IDEA website.

Stress management techniques can benefit parents of children with ADHD by increasing their ability to deal with frustration so that they can respond calmly to their child’s behavior.

Support groups can help parents and families connect with others who have similar problems and concerns. Groups often meet regularly to share frustrations and successes, to exchange information about recommended specialists and strategies, and to talk with experts.

The National Resource Center on ADHD, a program of Children and Adults with Attention-Deficit/Hyperactivity Disorder (CHADD®) supported by the Centers for Disease Control and Prevention (CDC), has information and many resources. You can reach this center online or by phone at 1-866-200-8098.

Learn more about psychotherapy .

Tips to help kids and adults with ADHD stay organized

Parents and teachers can help kids with ADHD stay organized and follow directions with tools such as:

  • Keeping a routine and a schedule. Keep the same routine every day, from wake-up time to bedtime. Include times for homework, outdoor play, and indoor activities. Keep the schedule on the refrigerator or a bulletin board. Write changes on the schedule as far in advance as possible.
  • Organizing everyday items. Have a place for everything (such as clothing, backpacks, and toys), and keep everything in its place.
  • Using homework and notebook organizers. Use organizers for school material and supplies. Stress to your child the importance of writing down assignments and bringing home necessary books.
  • Being clear and consistent. Children with ADHD need consistent rules they can understand and follow.
  • Giving praise or rewards when rules are followed. Children with ADHD often receive and expect criticism. Look for good behavior and praise it.

For adults:

A professional counselor or therapist can help an adult with ADHD learn how to organize their life with tools such as:

  • Keeping routines.
  • Making lists for different tasks and activities.
  • Using a calendar for scheduling events.
  • Using reminder notes.
  • Assigning a special place for keys, bills, and paperwork.
  • Breaking down large tasks into more manageable, smaller steps so that completing each part of the task provides a sense of accomplishment.

How can I find a clinical trial for ADHD?

Clinical trials are research studies that look at new ways to prevent, detect, or treat diseases and conditions. The goal of clinical trials is to determine if a new test or treatment works and is safe. Although individuals may benefit from being part of a clinical trial, participants should be aware that the primary purpose of a clinical trial is to gain new scientific knowledge so that others may be better helped in the future.

Researchers at NIMH and around the country conduct many studies with patients and healthy volunteers. We have new and better treatment options today because of what clinical trials uncovered years ago. Be part of tomorrow’s medical breakthroughs. Talk to your health care provider about clinical trials, their benefits and risks, and whether one is right for you.

To learn more or find a study, visit:

  • NIMH’s Clinical Trials webpage: Information about participating in clinical trials
  • Clinicaltrials.gov: Current Studies on ADHD: List of clinical trials funded by the National Institutes of Health (NIH) being conducted across the country
  • Join a Study: Children - ADHD: List of studies being conducted on the NIH Campus in Bethesda, MD

Where can I learn more about ADHD?

Free brochures and shareable resources.

  • Attention-Deficit/Hyperactivity Disorder in Children and Teens: What You Need to Know : This brochure provides information about attention-deficit/hyperactivity disorder (ADHD) in children and teens including symptoms, how it is diagnosed, causes, treatment options, and helpful resources. Also available en español .
  • Attention-Deficit/Hyperactivity Disorder in Adults: What You Need to Know : This brochure provides information about attention-deficit/hyperactivity disorder (ADHD) in adults including symptoms, how ADHD is diagnosed, causes, treatment options, and resources to find help for yourself or someone else. Also available en español .
  • Shareable Resources on ADHD : These digital resources, including graphics and messages, can be used to spread the word about ADHD and help promote awareness and education in your community.
  • Mental Health Minute: ADHD : Take a mental health minute to learn about ADHD.
  • NIMH Expert Discusses Managing ADHD : Learn the signs, symptoms, and treatments of ADHD as well as tips for helping children and adolescents manage ADHD during the pandemic.

Federal resources

  • ADHD: CDC offers fact sheets, infographics, and other resources about the signs, symptoms, and treatment of children with ADHD.
  • ADHD: (MedlinePlus – also available en español.)

Research and statistics

  • Journal Articles: This webpage provides information on references and abstracts from MEDLINE/PubMed (National Library of Medicine).
  • ADHD Statistics: This web page provides statistics about the prevalence and treatment of ADHD among children, adolescents, and adults.

Last Reviewed: September 2023

Unless otherwise specified, the information on our website and in our publications is in the public domain and may be reused or copied without permission. However, you may not reuse or copy images. Please cite the National Institute of Mental Health as the source. Read our copyright policy to learn more about our guidelines for reusing NIMH content.


CRediT author statement

CRediT (Contributor Roles Taxonomy) was introduced with the intention of recognizing individual author contributions, reducing authorship disputes and facilitating collaboration. The idea came about following a 2012 collaborative workshop led by Harvard University and the Wellcome Trust, with input from researchers, the International Committee of Medical Journal Editors (ICMJE) and publishers, including Elsevier, represented by Cell Press.

CRediT offers authors the opportunity to share an accurate and detailed description of their diverse contributions to the published work.

  • The corresponding author is responsible for ensuring that the descriptions are accurate and agreed by all authors.
  • The role(s) of all authors should be listed, using the relevant categories above.
  • Authors may have contributed in multiple roles.
  • CRediT in no way changes the journal’s criteria to qualify for authorship.

CRediT statements should be provided during the submission process and will appear above the acknowledgment section of the published paper as shown further below.

  • Conceptualization: Ideas; formulation or evolution of overarching research goals and aims
  • Methodology: Development or design of methodology; creation of models
  • Software: Programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components
  • Validation: Verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs
  • Formal analysis: Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data
  • Investigation: Conducting a research and investigation process, specifically performing the experiments, or data/evidence collection
  • Resources: Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools
  • Data Curation: Management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later reuse
  • Writing - Original Draft: Preparation, creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation)
  • Writing - Review & Editing: Preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision – including pre- or post-publication stages
  • Visualization: Preparation, creation and/or presentation of the published work, specifically visualization/data presentation
  • Supervision: Oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team
  • Project administration: Management and coordination responsibility for the research activity planning and execution
  • Funding acquisition: Acquisition of the financial support for the project leading to this publication

*Reproduced from Brand et al. (2015), Learned Publishing 28(2), with permission of the authors.

Sample CRediT author statement

Zhang San: Conceptualization, Methodology, Software. Priya Singh: Data curation, Writing - Original draft preparation. Wang Wu: Visualization, Investigation. Jan Jansen: Supervision. Ajay Kumar: Software, Validation. Sun Qi: Writing - Reviewing and Editing.

Read more about CRediT here, or check out this article from Authors' Update: CRediT where credit's due.


COMMENTS

  1. How Courts Work

    Steps in a Trial. Evidence. The heart of the case is the presentation of evidence. There are two types of evidence -- direct and circumstantial . Direct evidence usually is that which speaks for itself: eyewitness accounts, a confession, or a weapon. Circumstantial evidence usually is that which suggests a fact by implication or inference: the ...

  2. Compilation and Presentation of Evidence

    Compilation and Presentation of Evidence. Evidence is how you or the opposing party can prove or refute the facts in your case. When presenting evidence in a trial, it's essential to consider a series of recommendations to avoid problems in the final stages of the case, states our Head of Litigation and Arbitration Department, Rubén Rivas ...

  3. PDF Presentation of Evidence

    Presentation of Evidence The compelling presentation of evidence is a key dimension of a paper's quality. ASQ welcomes submissions from authors who think seriously about how to present their evidence in ways that make a paper easy to understand and compelling for readers. Part of researchers' craft is to draw

  4. PDF 29.5 Presentation of the Evidence

    The order in which a criminal jury trial proceeds is governed by G.S. 15A-1221. After a jury is impaneled and an opportunity for opening statements is given, the State must present evidence of the defendant's guilt, that is, its "case-in-chief.". See G.S. 15A-1221(a)(5). The State goes first because it has the burden of proof.

  5. Evidence

    Evidence, in law, any of the material items or assertions of fact that may be submitted to a competent tribunal as a means of ascertaining the truth of any alleged matter of fact under investigation before it. ... the presentation of documents or physical objects, or the assertion of a foreign law. The many rules of evidence that have evolved ...

  6. PDF 1 Introduction to the law of evidence

    The different categories of evidence that you will come across in your study of the law of evidence are outlined below. It is important to note that there is a degree of overlap between them, so they are not mutually exclusive. 1.4.1 Direct evidence Direct evidence is evidence which directly proves or disproves a fact in issue. An obvious

  7. Evidence

    Books, journals, websites, newspapers, magazines, and documentary films are some of the most common sources of evidence for academic writing. Our handout on evaluating print sources will help you choose your print sources wisely, and the library has a tutorial on evaluating both print sources and websites. A librarian can help you find sources ...

  8. Federal Rules of Evidence: Role of Judges in the Evidentiary Process

    Determine the Presentation of Evidence. If both authentication and admissibility are established, then the court must determine how the evidence will best be presented to the trier of fact, bearing in mind that the court is obligated to exercise control over the presentation of evidence to accomplish an effective, fair, and efficient proceeding.

  9. Evidence

    The presentation of evidence at trial is governed and regulated by the jurisdiction's rules of evidence. Types of Evidence Evidence comes in many forms, as by its very definition, evidence is any thing presented to prove that something is true.

  10. 10 Steps for Presenting Evidence in Court

    10 Steps for Presenting Evidence in Court. When you go to court, you will give information (called "evidence") to a judge who will decide your case. This evidence may include information you or someone else tells to the judge ("testimony") as well as items like email and text messages, documents, photos, and objects ("exhibits").

  11. How Courts Work

    Steps in a Trial. Presentation of Evidence by the Defense. The defense lawyer may choose not to present evidence, in the belief that the plaintiff or government did not prove its case. Usually, however, the defense will offer evidence. In a criminal case, the witnesses presented by the defense may or may not include the defendant.

  12. evidence

    evidence. Evidence is an item or information proffered to make the existence of a fact more or less probable. Evidence can take the form of testimony, documents, photographs, videos, voice recordings, DNA testing, or other tangible objects. Courts cannot admit all evidence, as evidence must be admissible under that jurisdiction's rules of ...

  13. 6.2: Defining Evidence

    Evidence needs to be carefully chosen to serve the needs of the claim and to reach the target audience. An argument is designed to persuade a resistant audience to accept a claim via the presentation of evidence for the contentions being argued. The evidence establishes the amount of accuracy your arguments have.

  14. Presentation of Evidence

    Presentation of Evidence. The power of administrative tribunals to disregard the common-law exclusionary rules of evidence has not resulted, as is often erroneously assumed, in their being utterly ignored in administrative proceedings involving the adjudication of judicial questions. In cases involving the dis-

  15. The Legal Concept of Evidence

    The second definition is contained in the United States' Federal Rule of Evidence 401 which ... 274). A further objection is that the management of parties' conduct relating to evidence preservation and presentation should be left to judges and not to the jury. What a judge may do to optimize evidential weight is to impose a burden of ...

  16. Use of Forensic Evidence in Trial

    This definition underscores the interdisciplinary nature of forensic evidence, emphasizing its reliance on scientific principles to uncover truths that may otherwise remain concealed within the complexities of criminal cases. ... This exploration of expert witnesses and the presentation of forensic evidence underscores the multidimensional ...

  17. Assertion-Evidence Approach: Rethinking Scientific and Technical

    Assertion-evidence talks are more focused, understood better by audiences, and delivered with more confidence. ... Christine Haas, a professional presentations instructor, discusses how to incorporate your own presentation into an assertion-evidence template. Hannah Salas, who is an undergraduate mechanical engineer from University of Nevada at ...

  18. PDF LAW OF EVIDENCE

    Evidence is defined as a means whereby any alleged matter of fact, the truth of which is submitted to investigation, is proved and includes statements by defendants, admission, judicial notices, presumptions of law, and observation by the court in its

  19. Presentation of Evidence

    Presentation of Evidence. Pursuant to 5 USCS § 556, an administrative law judge is authorized to regulate the course of a hearing. An administrative judge has broad discretion to allow or exclude witness testimony. [i] Moreover, the judge has the power to sequestrate witnesses to ensure that witnesses provide testimony without being influenced ...

  20. Presentation of evidence.

    Duplicate presentation of the same evidence should be avoided wherever possible. (d) Authenticity. The authenticity of all documents submitted as proposed exhibits in advance of the hearing shall be deemed admitted unless written objection thereto is filed prior to the hearing, except that a party will be permitted to challenge such ...

  21. Experiences of evidence presentation in court: an insight into the

    The ability to present complex forensic evidence in a courtroom in a manner that is fully comprehensible to all stakeholders remains problematic. Individual subjective interpretations may impede a collective and correct understanding of the complex environments and the evidence therein presented to them. This is not fully facilitated or assisted in any way with current non-technological ...

  22. presentation of evidence collocation

    Examples of presentation of evidence in a sentence, how to use it. 20 examples: Bayesian methods have significant advantages over classical frequentist statistical methods and the…

  23. evidence in chief Definition, Meaning & Usage

    Definition of "evidence in chief". It's the main set of facts or proof presented by one side to establish their argument or claim. How to use "evidence in chief" in a sentence. The lawyer prepared thoroughly for the presentation of the evidence in chief. The judge reminded the party that any omission in the evidence in chief could be ...

  24. Verifiable Credentials Data Model v1.1

    A verifiable presentation is a tamper-evident presentation encoded in such a way that authorship of the data can be trusted after a process of cryptographic verification. ... The precise content of each evidence scheme is determined by the specific evidence type definition (see the sketch after this list).

  25. Using an iPad to Present Electronic Evidence in the Courtroom

    Thanks to the iPad and associated apps, presenting evidence in the courtroom requires a smaller team and much less upheaval than was necessary in the past. There are two basic methods for presenting from an iPad: wired and wireless. You don't need a presentation-specific app to show documents on the iPad.

  26. Healthy People 2030

    Evidence-Based Resources. Healthy People 2030 provides hundreds of evidence-based resources to help you address public health priorities.

  27. Attention-Deficit/Hyperactivity Disorder

    Some people with ADHD mainly have symptoms of inattention. Others mostly have symptoms of hyperactivity-impulsivity. Some people have both types of symptoms.

  28. CRediT author statement

    Conceptualization: Ideas; formulation or evolution of overarching research goals and aims. ... specifically performing the experiments, or data/evidence collection. Resources: Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools ...
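For the verifiable-credentials entry above, the role of the evidence property is easiest to see in a concrete structure. The sketch below is a minimal, non-normative Python rendering of a verifiable presentation wrapping a single credential, assembled from the illustrative examples in the W3C Verifiable Credentials Data Model v1.1: the identifiers, the DocumentVerification evidence type, its property names (verifier, evidenceDocument, subjectPresence, documentPresence), and the proof values are example placeholders, not a normative schema.

# Non-normative sketch: a verifiable presentation wrapping one credential that
# carries an "evidence" block. Shapes follow the illustrative examples in the
# W3C Verifiable Credentials Data Model v1.1; all values are placeholders.
import json

credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1",
    ],
    "id": "http://example.edu/credentials/3732",
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "https://example.edu/issuers/14",
    "issuanceDate": "2023-10-02T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
        "degree": {"type": "BachelorDegree", "name": "Bachelor of Science"},
    },
    # The content of each evidence entry is set by its evidence type; these
    # property names mirror the spec's DocumentVerification example.
    "evidence": [{
        "id": "https://example.edu/evidence/placeholder",
        "type": ["DocumentVerification"],
        "verifier": "https://example.edu/issuers/14",
        "evidenceDocument": "DriversLicense",
        "subjectPresence": "Physical",
        "documentPresence": "Physical",
    }],
    "proof": {"type": "Ed25519Signature2020", "proofValue": "placeholder"},
}

presentation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    "verifiableCredential": [credential],
    # A holder's proof over the whole presentation is what makes it tamper-evident.
    "proof": {"type": "Ed25519Signature2020", "proofValue": "placeholder"},
}

print(json.dumps(presentation, indent=2))

In a real exchange the two proof blocks would be produced by the issuer's and the holder's signing keys respectively; here they are stubs so the structure stays self-contained.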