Egyptian Journal of Forensic Sciences, volume 10, Article number: 8 (2020)
The ability to present complex forensic evidence in a courtroom in a manner that is fully comprehensible to all stakeholders remains problematic. Individual subjective interpretations may impede a collective and correct understanding of the complex environments and the evidence presented. Current non-technological presentation methods, such as poor-resolution black-and-white photocopies or one-dimensional photographs of complex 3D environments, do little to assist this understanding. Given the wide availability of relatively cheap technology, such as tablets, smartphones and laptops, there is evidence to suggest that individuals are already accustomed to receiving visually complex information in the relatively short periods of time available in a court hearing. Courtrooms could learn from this widespread use of technology, and have demonstrated their ability to do so in part through the adoption of tablets for Magistrates. The aim of this current study was to identify the types of digital technology being used in courts and to obtain data from police personnel presenting digital evidence in court.
A questionnaire study was conducted to explore the technology currently used within courtrooms from the perspective of crime scene personnel involved in the presentation of complex crime scene evidence. The study demonstrated that whilst many of the participants currently utilize high-end technological solutions to document their crime scenes, such as 360° photography or laser scanning technologies, their ability to present such evidence in court was hindered or prevented. This was most likely due either to a lack of existing technology installed in the court, or to a lack of interoperability between new and existing technology.
This study has contributed to this academic field by publishing the real-life experiences of crime scene examiners, who have used advanced technology to record and evaluate crime scenes but are limited in their scope for sharing this information with the court due to technological insufficiency. Contemporary recording techniques have provided the opportunity for further review of crime scenes, a valuable advance over previous documentation practice, which relied upon the competency of the investigator to comprehensively capture the scene, often in a single opportunity.
The delivery of evidence in the UK Courts of Law in part involves extensive oral descriptions of events and evidence from an investigation, which can be a time consuming and laborious task (Schofield 2016 ). In terms of evidence relating to a crime scene, verbal statements, printed photographs and sketches of the scene may be used (Lederer 1994 ; McCracken 1999 ).
Conveying evidence from a scene, which both experts and laypersons can fully understand, remains an “ever-difficult task” (Chan 2005 ). This is because individuals may misinterpret or find difficulty in understanding the information being described to them (Schofield and Fowle 2013 ). It is entirely likely that cognitive processes contribute to variance in the interpretation of the evidence amongst listeners, and perhaps unsurprisingly, a survey conducted by the American Bar Association ( 2013 ) has demonstrated that significant volumes of technical information or complex facts can not only overwhelm the jury, but also often confuse them, leaving them feeling bored and frustrated (Kuehn 1999 ; Schofield 2009 ). In turn, this can present difficulties in absorbing and retaining information (Krieger 1992 ). Lederer and Solomon ( 1997 ) noted an increase in people’s attention when moving object displays were used in the courtroom.
Several research studies have investigated and considered the effects and impact that evidence presentation methods may have on jurors’ decisions in the courtroom (Schofield 2016 ; Schofield and Fowle 2013 ; Dahir 2011 ; Kassin and Dunn 1997 ; Dunn et al. 2006 ; Schofield 2011 ). Other research has begun to develop our understanding of the effects that technology may have on jurors and the decisions which they make in the courtroom (Burton et al. 2005 ). Whilst visual presentation methods offer significant advantages in presenting complex evidence in an understandable way, research would suggest that such methods could also mislead, or unfairly persuade, a jury (Schofield 2016 ; Burton et al. 2005 ).
Manlowe ( 2005 ) details the practical considerations which need to be made before introducing visual presentations into the courtroom, such as whether the technology installed permits graphical displays to be presented. Manlowe ( 2005 ) advocates the use of visual evidence in the courtroom in combination with oral presentations, as it has been found that jurors can retain six times as much information when compared with just oral presentations alone. Schofield and Fowle ( 2013 ) also extensively described the advantages and disadvantages associated with different graphical technologies for presenting evidence in the courtroom, and provided guidelines for using such evidence.
Given the availability of technical devices, such as tablets, smartphones and laptops, there is some evidence to suggest that individuals are used to receiving high-impact information in relatively short periods of time (Manlowe 2005 ; Pointe 2002 ). This information is highly visual, and its delivery via technology might suggest that members of the court, including the jury, are equipped for a shift towards an increase in the quantity of visual data and technological advancement. It might also suggest that traditional methods of presenting evidence relating to a crime scene, such as sketches and photographs, lack the flexibility and ability to deliver the intended information in a comprehensive manner. According to Manlowe ( 2005 ), basic demonstrative exhibits in the courtroom were time consuming and expensive and were limited in their ability to be edited. Technological advancements in the presentation of crime scene evidence include scene recording and visualization (Schofield 2016 ). Such technology ultimately aims to facilitate effective and rapid communication of crime scene environments between users within law enforcement agencies and in court (O’Brien and Marakas 2010 ; Manker 2015 ).
The presentation of forensic evidence using reconstructed virtual environments, such as computer-generated (CG) displays and virtual reality (VR), has been developed through the necessity to improve jurors’ understanding of complex evidence without technical, jargon-filled explanations. It is thought that jurors place more credibility on what they can “see and touch” (Schofield 2009 ). Virtual environments present unique opportunities to visually illustrate a scene, with the ability to “walk through” and virtually interact with the environment, and this can be more compelling for juries (Agosto et al. 2008 ; Mullins 2016 ). Howard et al. ( 2000 ) explored the use of virtual reality to create 3D reconstructions of crime scenes and demonstrated that the system they introduced made the evidence presented to jurors easier to comprehend, and substantially shortened the length of trials.
Panoramic photography is another means of technological advancement that has been used to aid the presentation of crime scene evidence. In 2014, a 360° panorama was used to demonstrate material as part of a murder trial. The jury in Birmingham experienced a virtual “walk through” of a scene for a murder trial, created using an iSTAR® panoramic camera (NCTech). Warwickshire Police have used an iSTAR® camera to document serious road traffic collisions (RTCs), which contributed to the evidence revealed during the trial of Scott Melville for the murder of Sydney Pavier. Principal Crown Advocate of the Crown Prosecution Service, Peter Grieves Smith, commended the technology used, stating “It was invaluable footage that greatly assisted the jury in understanding the layout of the property. It will surely become the norm to use this in the future in the prosecution of complex and grave crime”. Judge Burbidge QC also commended Warwickshire Police for their professional pursuit of justice in this case.
Reportedly, the state of courtroom technology integration differs significantly around the world (Manker 2015 ; Reiling 2010 ; Ministry of Justice 2013 ). Basic technology, such as tablets and television screens, is being used within some courtrooms in the USA and Australia (Schofield 2011 ), with a limited number integrating more high-end technological solutions, such as CG presentations in the USA (Chan 2005 ). The integration of technology within the UK courtrooms is still in its infancy and is a significantly slower process than in the USA or Australia (Schofield 2016 ). As part of a strategic new plan introduced in 2014, the UK criminal justice system was due to be transformed through digital technology. The plan sought to make courtrooms “digital by default” with an end to the reliance on paper by 2016, and to provide “swifter justice” through the digital dissemination of information (Ministry of Justice 2013 ). The ultimate aim was to digitize the entire UK criminal justice system by 2020, to simplify processes and improve efficiency. In 2013, Birmingham’s Magistrates court produced the UK’s first digital concept court, a courtroom that trialled technology to aid the speed and efficiency of trials, using laptops to store electronic case files as opposed to large paper folders, and to facilitate the sharing of files with other members of the courtroom.
In 2016, the UK National Audit Office conducted an investigation to determine the current situation of courtrooms in terms of the digital reform. Results demonstrated how some parts of the criminal justice system were still heavily paper based, creating inefficiencies. The report concluded that the time frames originally set were overambitious (National Audit Office 2016 ).
The aim of this study was to explore the current situation regarding technology use in courtrooms from the perspective of persons involved in the presentation of crime scene evidence, and to explore barriers and facilitators to its greater and effective use. In this study, the following objectives were considered: to establish the state of current literature associated with the use of technology in courtrooms; to obtain data regarding the experiences of the UK police service personnel with respect to presenting digital evidence in courtrooms; to identify the types of technology that are currently being utilized in courtrooms in the UK; to seek the opinions of police service personnel with regard to digital technology use in the courtrooms and to use these outcomes to define a fresh starting point to debate the exploitation of digital technology use in the UK courtrooms to facilitate more efficient, better value for money and robust judgements with complex forensic content.
The study has focused on the experiences of crime scene personnel because of the advancements of technology in this particular area, such as the use of 360° photography and laser scanning. The subject area also falls within the remit of the research team. By sharing opinions and experience, the paper hopes to aid both legal professionals and police service personnel to a more comprehensive understanding of the current use of technology in the courtroom, the advantages which technology can provide to their case, and the barriers which have been affecting the adoption of technology.
A qualitative phenomenological research study was conducted to explore the experiences of police service personnel regarding the current use of information technology in courtrooms and their experience of evidence presentation. The sample group included vehicle collision investigators and forensic photographers/imaging technicians. A snowball sample of 21 police service personnel from England and Wales, and Australia, was recruited via email and a UK police forum for participation in this study. It was considered useful to recruit participants from these countries because of the similarities between their respective criminal justice systems (McDougall 2016 ), but also because differences in the rate of technology integration had previously been reported (Schofield 2016 ), which could offer meaningful, experience-based insights into technological advancement.
Participants were required to formally consent to participation in line with the ethical requirements of the host institution. Participants were emailed a semi-structured, open-ended questionnaire and were asked to type or handwrite their responses. The questions asked were as follows:
What is your job title and role within the criminal justice system?
As part of your role, are you required to present evidence in a courtroom?
Can you tell me what, if any, technology has been integrated into the courtroom?
What has your experience been in terms of the introduction of new technology into the courtroom?
Have there been any difficulties with technology being integrated into the courtroom?
With the implementation of technology with existing and current courtroom systems?
And whether there have been barriers, if any, to the adoption of such technology?
If there has not, why do you think this is?
In terms of the current methods with which forensic evidence is presented in court, do you think anything needs to be changed? Please explain.
What has your experience been with the presentation of evidence in court? Please explain.
New technology is becoming available to police services and forensic services for the documentation and presentation of crime scenes. 360° photography or laser scanning is being implemented into police services to speed up the data capture as well as to capture more detail and information from the scene.
Have you had any experience in this area—do you yourself use these methods for documenting crime scenes?
Have you ever had to present this type of evidence in court? Please explain.
What has the response been to this method of presenting evidence
From the judges?
Barristers?
The jury members?
Is the courtroom fully equipped to allow you to present this type of evidence? Please explain.
Do you feel there is anything, which needs improvement? Please explain.
Can you give me your opinion on presenting evidence in this manner? Advantages/disadvantages.
Thematic analysis based on Manker’s ( 2015 ) methodology, originally adapted from Guest et al. ( 2012 ), was used to analyse the data collected from the 21 participants. The data analysis consisted of breaking down and coding the text responses obtained from the participants’ questionnaires, to identify themes and to construct thematic networks. The computer software program NVivo was used to store, organize and code the open-ended data collected from participants. Participant text responses were restructured within an Excel spreadsheet and the data set uploaded into the NVivo software. The data was explored using the NVivo software through word frequency queries to identify the most frequently used words in the participant data. Emerging themes were identified and coded using specific keywords or “nodes”. Nodes were created based on these recurring themes, and responses were coded at the relevant nodes. For example, for question 11, which asked the participants “What has the response been to this method of presenting evidence”, potential responses from participants could suggest a good response, a bad response, little response, no response or not applicable. These identified nodes allowed the researcher to link a node to the relevant response from participants. Within the NVivo software, the researcher could search nodes and easily identify all participants who had given the same response. This was used to analyse the different themes identified within the participant data. As the analysis of the data progressed, new nodes were identified and these were checked against all other participants.
Thematic categories were determined by the researchers to include courtroom technology, ease of use, implementation, limited use, recommendations, advantages and disadvantages. Some of the thematic categories were further broken down into additional related categories. For example, courtroom technology was further broken down into specific categories such as television screens, audio-visual technology, computers, 360° photography and laser scanning.
The nodes were associated with the thematic categories described above. The participant responses were analysed and described, and tables were created documenting the number of respondents who had reported each response relevant to the nodes. The nodal frequency within each theme was used to determine the existence of trends within the data.
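The coding process described above can be illustrated with a minimal sketch. The study used NVivo; the following Python analogue is purely illustrative of the two steps the text describes (a word-frequency query over responses, then coding responses at keyword “nodes” and tallying nodal frequency). The responses and node keywords below are hypothetical, not the study’s actual data or coding frame.

```python
from collections import Counter

# Hypothetical participant responses (not the study's data)
responses = {
    "P1": "The screens were too small and the technology was very limited.",
    "P2": "Good response from the jury; the 360 footage helped them understand.",
    "P3": "Paper files only. No technology had been implemented at all.",
}

# Step 1: word-frequency query across all responses
words = Counter(
    w.strip(".,;").lower()
    for text in responses.values()
    for w in text.split()
)

# Step 2: code each response at any node whose keywords it contains
# (node names and keywords are illustrative assumptions)
nodes = {
    "good response": ["good", "helped"],
    "lack of technology": ["paper", "limited"],
}
coded = {
    node: [pid for pid, text in responses.items()
           if any(k in text.lower() for k in keywords)]
    for node, keywords in nodes.items()
}

# Nodal frequency: number of participants coded at each node,
# used to look for trends within each theme
nodal_frequency = {node: len(pids) for node, pids in coded.items()}
print(nodal_frequency)  # {'good response': 1, 'lack of technology': 2}
```

In NVivo this coding is done interactively through the software’s node and query tools rather than in code, but the underlying logic of linking responses to nodes and counting them is the same.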
The purpose of this qualitative phenomenological research study was to explore and describe the experiences of police service personnel with responsibilities within crime scene examination with regard to the current use of technology within the courtroom. This research covered over one third of the total 43 police services within England and Wales (15 services), as shown in Fig. 1 . Each police service has its own policies and procedures for conducting criminal investigations, and as such, different individuals within the same police service would likely follow the same procedures.
Map to show the 15 police service regions represented by the participants who completed the questionnaire (highlighted in purple). Adapted from original by HMIC
Although the use of questionnaires allowed exploration of the participants’ experiences regarding the use of technology in the courtroom, it restricted the further explanation or prompts for more detail that would have been available in interviews. The authors accept that participant responses to questions are likely to change based on different stimuli, such as the context of the request and their mood, in addition to what information they could recall from memory at that particular time. Consequently, participants may not have recollected a particular experience or event at the time that they completed the questionnaire, and as a result may not have mentioned it. In response to this, the paper presents a thematic analysis of the data, where collective themes are presented based on responses from the entire sample group rather than isolated incidents.
A consideration for the authors throughout the study related to the opportunities for participants to respond to questions in a manner that would be viewed favourably. This is termed “social desirability bias” (Manker 2015 ; Saris and Gallhofer 2014 ). As a result, participants may have been inclined to exaggerate “good behaviour” or under-report “bad behaviour”. Reportedly, the effects of social desirability bias are reduced in situations where an interviewer is not present, which is why, in part, the experimental design included questionnaire data. When the data was analysed, six themes were identified. These were “current technology in the courtroom”, “lack of technology in the courtroom”, “difficulties/barriers associated with the integration of technology into the courtroom”, “improvements/changes that are required”, “the future of courtroom technology” and “360° photography and laser scanning”.
Within the first theme, participants were asked about their experiences of technology within the courtroom, which prompted responses that described the use of television screens, DVD players/CCTV viewing facilities, basic PCs/laptops, paper files, photographs, basic audio-visual systems, live link capability, projectors and specialist software to view 3D data. Four participants described how the current technology within the courtroom was limited to that of traditional paper files and printed albums of photographs. Given the use of the term “technology” within the question, the answers that were given were perceived to describe very basic methods, and some of the participants equally commented that “the courts need to catch up”. Those courtrooms that had introduced technology into trials had implemented what many participants claimed to be “basic and limited audio-visual technology” .
The UK National Audit Office ( 2016 ) identified that courtrooms have been slow to adopt technology and still heavily rely on paper files, which has worked for many years. The experiences described by the participants in this study would support these findings. The reason paper files have worked for many years could be attributed to the fact that people like to have something in their hands that they can see in front of them. Paper files and photographs allow a jury to look closely and examine what they are being shown, compared with distance viewing of a screen. However, printing photographs often leads to a loss in clarity and detail, which could make it more difficult to interpret what they are seeing. Often, it is the case that something may be visible on screen in a digital photograph that is not visible once recreated through print.
According to the data, the type of court and crime was a factor which determined whether any technology was implemented, and the type of technology that was implemented. For one participant, the majority of their cases were produced for the coroner’s courts, who were reportedly “yet to embrace” new evidential technology. It was also noted, however, that although slow to embrace technology, in the majority of cases at the coroner’s court, it was not needed.
According to the results of this study, little technology had reportedly been implemented into the courtrooms. One participant stated that, “there has been little investment by the courts in modern technology” and “generally there hasn’t been any [implementation] and under investment seems to have been the greatest problem”.
Some of the participants described how limited technology had negatively impacted upon their ability to appropriately present evidence in court. In one instance the following scenario was described:
I was presenting evidence on blood spatter in court. The jury were looking at photocopies taken from the album of blood spatter on a door. So I had to ask the jury to accept that there were better quality images where the spatter could be seen and I was able to interpret the pattern. Not only does this allow a barrister to claim I was making it up but, it is much easier to explain something if people can see it.
A similar experience was reported by another participant, who took personal measures to aid their presentation of evidence:
I had to show each individual juror an original printed photograph from the report I had brought with me as those provided in their bundle were of such poor quality that the subject of my oral evidence was not clearly visible to them.
Primarily evidence is verbal, [and that the] presentation of photographs are by way of rather dodgy photocopied versions lovingly prepared by the Crown Prosecution Service (CPS).
The significance of these statements relates to the potential for the evidence under presentation to be misunderstood or unfairly dismissed, which has implications for the case. These experiences would suggest that the most basic opportunities to provide equivalent quality photographs to the jury were missed. Forensic evidence is often highly visual, and even with an articulate speaker and extensive descriptive dialogue, the ability to effectively communicate the appearance and location of evidence such as blood spatter is likely to be strengthened by effective visual aids. Aside from high quality photographs, alternative digital presentation methods, such as portable screening devices may have provided an appropriate and just communication of the evidence.
Burton et al. ( 2005 ) and Schofield ( 2016 ) each made reference to the effects of visual presentation methods on jurors’ interpretation of evidence. In this research, reference has been made to actual evidence and not reconstructed scenarios; therefore, in our opinion, visual presentation opportunities to illustrate complex evidence such as blood spatter are only likely to improve jurors’ understanding of the evidence being presented to them. They may also improve jurors’ retention of information, as demonstrated by Manlowe ( 2005 ).
Paper files in the courtroom are still heavily relied upon, with the UK’s Crown Prosecution Service (CPS) producing roughly 160 million sheets of paper every year (Ministry of Justice 2013 ). In addition to the limited presentation quality of photocopied images, printed copies of two-dimensional presentations were also criticized for their inability to engage jury members, as follows:
Tend to be clumsy and fill the witness box with paper that is pointed to in front of the witness and this is never conveyed to the jury.
If, maybe through the use of tablets, or some form of interactive media, this could be displayed on screen, then the witnesses’ thoughts and explanations may be better conveyed to the jury.
For other participants, the use of printed paper was seemingly appropriate:
For most cases, a simple 2D plan and photographs is more than sufficient. There is the ability to produce flashy reconstruction DVD’s, but I think there is a huge danger of a reconstruction showing things that did not happen, putting images to the court and jury that may only be a representation of a possible scenario rather than what is definite. This is particularly true for collision investigation where there are often unknowns and using a computer model cannot be certain that is what happened. Videos shown are talked through as they are run.
In this instance, the opposite explanation appears to be true. Here, the participant is suggesting that technology could facilitate the presentation of inappropriate and misrepresenting evidence, equally impacting negatively on the case. This would reasonably support the idea that the use of technology should be considered in the context of the evidence under presentation, and/or used in instances where facts are being communicated. The experiences described by this participant implied that the photographs that they had used had adequately supported the presentation of their evidence.
In cases where multiple types of evidence were being presented, the need for technology reportedly varied, but its availability was also restricted for some participants.
One participant described,
to date, I haven’t used any visual aids/props. Generally, I will have compiled a report, which contains photographs and a scale plan, but as part of the wider investigation there may be digital data such as CCTV footage, 3D laser scans and animated reconstructions. My evidence is given orally and the relevant sections of the jury bundle referred to for context. I have presented a case involving CCTV footage which was played on too small a screen for the jurors to see properly, therefore making it difficult for them to understand the intricacies of what it showed. The footage itself had to be provided in a format that could be played in a DVD player present in the courtroom, leading to an overall reduction in quality.
The restrictive nature of this environment for the presentation of CCTV evidence is surprising in a society that thrives on visual media. In this example, the presentation of evidence has been compromised for the cost of a larger screen, or the distribution of visual display devices, such as tablets. In terms of operation, these devices simply need to facilitate functions such as “play”, “stop” and “pause”. If there is a concern that jury members may be unable to comply, there are options to screen mirror devices, thus giving control to a single competent user. It was reported by an Australian participant that some courtrooms already had individual screens for each jury member. Many courtrooms in the USA had also installed multiple computer screens or individual tablets for the jury so that evidence was more easily viewed (Schofield 2016 ; Wiggins 2006 ).
One of the UK participants claimed that,
until the improvement of the visual aids for the jury i.e. much larger or closer/individual monitors are implemented even the products we provide at the moment are of limited use in the courtroom.
Any concern over difficulties with the operation of technology by jury members should be considered alongside the fact that, according to the Office of Communications (Ofcom), in 2017, 76% of adults living in the UK had a smartphone; the authors therefore question whether courtroom technological advancement should account for this cultural shift in technology. This was supported by the data, where a participant, referring to the introduction of technology into the courtroom, stated how it can
depend very much on the attitudes of the judge, prosecutors and investigators. Some are technologically averse whilst others are happy to accommodate new technology.
In the USA, the Courtroom 21 Project (founded in 1993) has sought to address issues with the integration of technology into courtrooms through active research, demonstrating software and hardware to users and discussing ideas for use in court. This could be a useful learning opportunity for other justice systems moving forward, given that an evaluation of US courts in Rawson (2004) revealed some similarity between current US and UK practice. There is some evidence to suggest that evidence presentation in the USA is similarly restricted by the available technology.
The use of live links or videoconferencing, which allows expert witnesses to present their testimony off site, was reported by two participants. This type of technology is widely used within courtrooms by police officers, who can continue working until required to present evidence, to interview vulnerable witnesses, and to arrange suitable dates for a defendant’s trial. This is believed to save the time and money spent transporting defendants to the courtroom for hearings.
This study highlighted some of the difficulties participants had experienced with the integration of technology into the courtroom and problems arising with the already installed basic courtroom equipment. One participant described,
people always seem to be finding their feet when trying to play with digital evidence, making things connect and work. Also, the actual devices are not always reliable
A lack of training and knowledge regarding existing technology was identified by several participants. One participant described the frustrations of the situations when technology was not operated correctly, describing,
the court clerk always seems to have difficulty getting the existing system to work correctly, albeit a DVD player. It is a great source of frustration for all involved.
we occasionally use video footage, which has to be converted to DVD format to play at court –assuming the usher knows how to work it.
This raises a training issue within courtrooms, which was supported by the Rt Hon Sir Brian Leveson in his review of efficiency in criminal proceedings (Leveson 2015). In this document, he highlighted the requirement for judges, court staff and those individuals who have regular access to courtroom technology to be sufficiently trained. In addition, he highlighted the need for technical assistance to prevent the underutilisation of technology due to technological failures or defective equipment, which often delay proceedings (Leveson 2015). In 2014, 13 cases in the Crown Court and 275 in the Magistrates’ Courts were postponed because of problems with technology. The National Audit Office (2016) reported that the police had so little faith in the courts’ equipment that they hired their own at a cost of £500 a day.
Issues regarding the compatibility of technology in the courtroom and a lack of staff training are not restricted to the UK. A report generated by the Attorney General of New South Wales, Australia, identified the same issues arising from technology in the courtroom (Leveson 2015; NSW Attorney General’s Department 2013).
Participants reported a lack of investment/funding as the most commonly occurring “barrier”. According to one participant,
Under investment seems to have been the greatest problem; we have the opportunity to bring 3D interactive virtual scenes to the courtroom for example, however the limited computing power available means that this is impossible and there is little or no will on the part of the Ministry of Justice (MoJ) to invest in this technology.
CPS protocol is resistant to change and it also requires funding.
This supports the work of Manker (2015), who found that participants considered the cost of equipment to be the main reason for the limited use of technology. Although technology may be expensive to purchase in the first instance, the returns should outweigh the initial expenditure. For example, technology-aided trials may help juries to understand evidence and reach a verdict, thus bringing cases to a close more quickly, reducing case costs and allowing more trials to be conducted concurrently (Marder 2001). In addition, there are benefits that cannot be quantified, such as juror satisfaction and engagement through the use of technology rather than laborious descriptions.
Barriers can also include a resistance to change or a lack of acceptance. One participant commented on the reluctance of individuals to accept new technology;
barriers include reluctance of some judges, investigators and lawyers to consider or implement newer technologies into their investigation or courtroom presentation … these challenges are reducing as time progresses and the technologies are increasingly established and the general paradigm is altered.
In some circumstances it may be necessary to integrate newer systems effectively alongside, or in conjunction with, existing equipment. In many cases, the technologies may not be compatible, as one participant described:
the current systems seem incapable of keeping up with the advance on modern technologies or simply do not work more often than not.
Leveson ( 2015 ) found that many judges were in favour of exploiting technology in order to aid in the efficiency of the criminal justice system but had doubts regarding the ability to adapt current technology and its capacity to undertake its current duties.
This is seemingly inconsistent with some participants’ experiences of technology outside of the courtroom, within their investigative roles. Fear of technology and of change also presents a barrier to the adoption of technology, particularly because of the risks associated with such technological change. Some changes may be successful and others may not, but until these changes are made, it is impossible to know the outcomes of the technology use and what it can provide to the courtroom (Marder 2001).
There is some suggestion that technological change within courtrooms will be adopted. A report by the Ministry of Justice (2016) explains how the entire UK criminal justice system is being digitised to modernise courts, using £700 million of government funding. The funding aims to create a new online system that will link courts together. The digitisation of the UK criminal justice system is due to be completed in 2019, and this influx of funding should enable the more rapid adoption of technology into courtrooms.
Seven participants commented that no change in the courtroom was necessary with regards to technology. For example,
I think current methods are sufficient and like I said anything more complicated we provide our own laptop for.
As discussed, the technological requirements for evidence presentation are case-specific, and such requirements are likely to be more prevalent in areas that utilise technology such as 360° photography and laser scanning.
Eight participants commented that a significant technological upgrade was required within courtrooms to cope with the ever-increasing demand of technology. This was emphasized in the following quotes:
The majority of courtrooms need a radical update. I’d hope that those being built now incorporate the required technology; however, I wouldn’t count on it,
the courts need full modernising,
the basic court infrastructure needs upgrading to allow it to handle the significant increase in demand that comes with the use of 3D animations software,
the court process has changed very little in the 12 years I have been a collision investigator whilst the equipment we use and evidence we produce has changed exponentially.
The adoption of technology by police services to aid the documentation and recovery of evidence from crime scenes can only support effective evidence presentation if matching technological advancements are made in the courtroom. Failure to align technology could mean that such evidence is unlikely to be presented in its most effective format. This problem could be alleviated by the standardisation of file formats. According to one participant,
standardisation of digital formats used in the courtrooms would help in the preparation of evidence knowing which format to use when supplying evidence, to police and the courts. The most common remark we get from police and the courts regarding digital file formats is “can you supply or convert this or these files to a usable format, we just need it to be playable in court”.
Participants were asked about their thoughts on the future of evidence presentation. Virtual reality (VR) featured within several responses, with the idea being that courtroom users could be transported to a scene, allowing them to view and navigate themselves through it in 3D. Research has been conducted to investigate the use of VR courtrooms, whereby jurors wear VR headsets and are transported to the crime scene, allowing them to explore the scene (Bailenson et al. 2006 ; Schofield 2007 ).
In this study, one participant commented that,
When presenting evidence in an innovative way it generally means in a way that is better for the jury to understand, and that means clarity.
This will provide the ability for jurors, judges and the coroner to revisit a scene without leaving the courtroom and see things from the perspective of various people involved (victim, accused, witnesses).
In terms of its overall aim, one participant commented,
The aim is surely to assist the jury with understanding the complexities of the crime scene and to do that they need to be able to visualise the location and the evidence identified within it so I believe the future of a courtroom will be to provide this as realistically as possible.
This participant does not state what technology will be used to provide this experience to the jury, only that the visual evidence will need to be as realistic as possible.
The effectiveness of VR technology for evidence presentation is likely to encourage debate, given the clarity with which crime scenes can be presented, but also given the contextual information such technology conveys and its effects on juror response.
There will, however, be a fine line between giving a jury enough information with which to make an informed decision and traumatising them in vivid technicolour. Technology should not be adopted for its own sake, as this could have profound effects on the trial’s outcome. Any evidence presented in a courtroom needs to describe the incident that occurred in a manner that is easily understandable.
Although the perceived benefits of the technology were discussed by some, other participants commented on how VR was “still a long way off from being used for evidence”. Issues regarding the persuasive impact of demonstrative evidence have already been explicitly expressed with regard to 360° photography and laser scanning (Narayanan and Hibbin 2001 ). Other researchers claim that such evidence can lead a jury to blindly believe and accept the evidence, as shown in the work of Schofield and Fowle ( 2013 ) and Selbak ( 1994 ). Consequently, the use of visual presentation using CG could have profound implications on the case outcome if the jurors instantly believe what they are seeing. Evidence presented in such a way must remain scientifically accurate and truthfully reflect the scientific data and augment witness testimony (Manker 2015 ). This was supported by participant comments regarding the probative value of the evidence. Here,
the probity value is yet to be determined, in addition to juries not being allowed on many occasions to witness certain graphic images for fear of being overly influenced. Virtual reality would compound this.
Another participant commented that,
it may be perceived as entertainment rather than a judicial process.
Given the considerable amount of technology available with respect to crime scene documentation, such as 360° photography and laser scanning, and the expertise of the participant group, participants were asked to describe their experiences of such technological advancements.
Most participants (18 out of 21) described how their respective police services currently utilise 360° photography or laser scanning methods to document their crime scenes but, due to the limitations of court facilities, were unable to present such evidence to the courts. In such situations, 3D laser scan data was used to create 2D plans, which were then printed for the court. This was criticised by one participant, who described having to print 2D plans as,
a travesty really when you consider what capability this data offers.
Often, such technology requires access to a data cloud, which raised an evidence presentation issue for two participants.
One participant stated that it is,
unfortunate as the benefits of the data cloud as a contextual visual aid are unrivalled.

In situations where the 3D data was allowed, it was only accepted into the court as a 3D animated “fly-through” played directly from a DVD. This participant stated that, using this DVD method, it was not possible to move through the scene in real time.
One participant did report being able to successfully present their 360° panoramas.
I was the first to show 360° panoramas along with point cloud data. I had to explain to the court what it was and how it was used prior to the case commencing. We have presented this type of evidence now in live court 3 times and received no criticism. There have been at least another 3 cases where we have produced it but not required to show it. It does require some advanced preparation and several visits to the court room to be used, to make sure it all works.
With the Ministry of Justice driving the adoption of technology and providing significant funding to ensure the uptake of technology by courtrooms, it is inevitable that courtrooms will become “digital by default”. This will provide a more efficient CJS and allow information transfer to become more seamless.
The results of the qualitative phenomenological research in this study identified six key themes from the responses of participants, representing 15 of the current 43 UK police services. The themes covered the “current use of technology in the courtroom”, “lack of technology in the courtroom”, “difficulties/barriers associated with the integration of technology into the courtroom”, “improvements/changes that are required for technology integration”, “the future of courtroom digital technology”, and “360° photography and laser scanning”. The participants reported a general lack of technological integration within court environments. It was clear that a significant change is required to existing courtrooms and their infrastructure to allow existing technology to be utilised effectively, particularly for crime scene documentation, such as 360° photography or laser scanning of crime scenes or of evidence types. These areas, along with virtual reality, represented aspects which participants believed would characterise future-proofed courtrooms. However, the study group voiced concerns over the contextual influence that immersive technology may potentially exert and questioned the need to expose jurors to such information. Clearly, not only does digital-technological development within the courtroom require consideration; the attendant psychological and ethical aspects also require developing in parallel to make the use of digital technology a fully useful and integrated feature in the decision-making processes of juries and the UK courts, and to provide a digital end-to-end common platform. As part of the ethical concerns to be addressed, including those of “evidence continuity and potential contamination” of data, the opportunity that may exist to manipulate visual images needs to be carefully explored and future-proofed into any systems being developed.
The authors firmly believe that there is considerable scope for exploring this area further, although they recognise that restricted access to courtroom presentations is likely, which limits the academic study of this area.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
CG: Computer generated
CJS: Criminal justice system
CPS: Crown Prosecution Service
VR: Virtual reality
Agosto E, Ajmar A, Boccardo P, Tonolo FG, Lingua A (2008) Crime scene reconstruction using a fully geomatic approach. Sensors 8:6280–6302
American Bar Association (2013) In: Poje J (ed) ABA Legal technology survey report. Volume VI. Mobile Lawyers
Bailenson JN, Blasovich J, Beall AC, Noveck B (2006) Courtroom applications of virtual environments, immersive virtual environments and collaborative virtual environments. Law Policy 28(2):249–270
Burton A, Schofield D, Goodwin L (2005) Gates of global perception: forensic graphics for evidence presentation. In: Proceedings of ACM Symposium on Virtual Reality Software and Technology, ACM Press, Singapore, pp 103–111
Chan A (2005) The use of low cost virtual reality and digital technology to aid forensic scene interpretation and recording. Cranfield University PhD Thesis, Cranfield
Dahir VB (2011) Chapter 3: digital visual evidence. 77-112. In: Henderson C, Epstein EJ (eds) The future of evidence: how science and technology will change the practice of law. American Bar association, Chicago
Dunn MA, Salovey P, Feigenson N (2006) The jury persuaded (and not): computer animation in the courtroom. Law Policy 28(2):228–248
Guest G, MacQueen K, Namey E (2012) Applied thematic analysis. Sage, Thousand Oaks
Howard TLJ, Murta AD, Gibson S (2000) Virtual environments for scene of crime reconstruction and analysis. In: Proceedings of SPIE - International Society for Optical Engineering, p 3960
Kassin S, Dunn MA (1997) Computer-animated displays and the jury: facilitative and prejudicial effects. Law Hum Behav 21(3):269–281
Krieger R (1992) Sophisticated computer graphics come of age—and evidence will never be the same. J Am Bar Assoc:93–95
Kuehn PF (1999) Maximizing your persuasiveness: effective computer generated exhibits. DCBA Brief J DuPage County Bar Assoc, 12:1999-2000
Lederer FI (1994) Technology comes to the courtroom, and.... . Faculty Publications. Emory Law J 43:1095–1122
Lederer FI, Solomon SH (1997) Courtroom technology – an introduction to the onrushing future. In: Faculty Publications. 1653. Conference Proceedings. Part of the Fifth National Court Technology Conference in Detroit, Michigan
Leveson B (2015) Review of efficiency in criminal proceedings by the Rt Hon Sir Brian Leveson. President of the Queen’s Bench Division. Judiciary of England and Wales
Manker C (2015) Factors contributing to the limited use of information technology in state courtrooms. Thesis. Walden University Scholarworks, p 1416
Manlowe B (2005) Speaker, “use of technology in the courtroom,”. IADC Trial Academy, Stanford University, Palo Alto
Marder NS (2001) Juries and technology: equipping jurors for the twenty-first century. Brook Law Rev 66(4). Article 9):1257–1299
McCracken K (1999) To-scale crime scene models: a great visual aid for the jury. J Forensic Identification 49:130–133
McDougall R (2016) Designing the courtroom of the future. Paper delivered at the International Conference on Court Excellence, 27–29 January, Singapore
Ministry of Justice (2013) Press release - Damian Green: ‘digital courtrooms’ to be rolled out nationally. Available at: https://www.gov.uk/government/news/damian-green-digital-courtrooms-to-be-rolled-out-nationally
Ministry of Justice (2016). Transforming our justice system. By the Lord Chancellor, the Lord Chief Justice and the Senior President of Tribunals September 2016. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/553261/joint-vision-statement.pdf
Mullins RA (2016) Virtual views: exploring the utility and impact of terrestrial laser scanners in forensics and law. University of Windsor. Electronic Theses and Dissertation Paper. University of Windsor, 5855
Narayanan A, Hibbin S (2001) Can animations be safely used in court? Artif Intell Law 9(4):271–294
National Audit Office (2016) A report by the Comptroller and Auditor General: Efficiency in the Criminal Justice System. Ministry of Justice. Available from: https://www.nao.org.uk/wp-content/uploads/2016/03/Efficiency-in-the-criminal-justice-system.pdf
NSW Attorney General’s Department (2013) Report of the Trial Efficiency Working Group. Crim Law Rev Division Available from: http://www.justice.nsw.gov.au/justicepolicy/Documents/tewg_reportmarch2009.pdf
O’Brien JA, Marakas GM (2010) Management information systems, 10th edn. McGraw-Hill, Boston
Pointe LM (2002) The Michigan cyber court: a bold experiment in the development of the first public virtual courthouse. North Carolina J Law Technol 4(1). Article 5):51–92
Rawson B (2004) The case for the technology-laden courtroom. Courtroom 21 project. Technology White Paper
Reiling D (2010) Technology for justice: how information technology can support judicial reform. Leiden University Press, Leiden
Saris WE, Gallhofer IN (2014) Design, evaluation, and analysis of questionnaires for survey research, Wiley series in Survey Methodology, 2nd edn. Wiley, New Jersey
Schofield D (2007) Using graphical technology to present evidence. In: Mason S (ed) Electronic Evidence: Disclosure, Discovery and Admissibility, vol 1, pp 101–121
Schofield D (2009) Animating evidence: computer game technology in the courtroom. J Inf Law Technol 1:1–21
Schofield D (2011) Playing with evidence: using video games in the courtroom. Entertainment Comput 2(1):47–58
Schofield D (2016) The use of computer generated imagery in legal proceedings. Digit Evid Electron Signature Law Rev 13:3–25
Schofield D, Fowle KG (2013) Technology corner visualising forensic data: evidence (part 1). J Digit Forensic Secur Law 8(1):73–90
Selbak J (1994) Digital Litigation: The Prejudicial Effects of Computer-Generated Animation in the Courtroom. High Technol Law J 9(2):337–367
Wiggins EC (2006) The courtroom of the future is here: introduction to emerging technologies in the legal system. Law Policy 28(2):182–191
The authors wish to thank the participants who took part in this study.
This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.
Authors and affiliations.
Liverpool John Moores University, Pharmacy and Biomolecular Sciences, James Parsons Building, Byrom Street, Liverpool, L3 3AF, UK
K. Sheppard
Department of Forensic and Crime Sciences, Faculty of Computing, Engineering and Sciences, Science Centre, Staffordshire University, Leek Road, Stoke-on-Trent, Staffordshire, ST4 2DF, UK
S. J. Fieldhouse & J. P. Cassella
KS collected, analysed and interpreted the participant data regarding the use of technology in the criminal justice system with assistance from SF and JP. All authors were contributors in writing the manuscript and reading and approving the final manuscript.
Correspondence to K. Sheppard .
Ethics approval and consent to participate.
This study was reviewed under agreed university procedures and was approved by Staffordshire University.
All data collected from the questionnaires was anonymised and it is not possible to identify the individuals who took part in the study through their statements or quotes. Participants were asked to sign a consent form stating that they understand that the data collected during the study would be anonymised prior to any publication.
The authors declare that they have no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article.
Sheppard, K., Fieldhouse, S.J. & Cassella, J.P. Experiences of evidence presentation in court: an insight into the practice of crime scene examiners in England, Wales and Australia. Egypt J Forensic Sci 10, 8 (2020). https://doi.org/10.1186/s41935-020-00184-5
Received: 19 July 2019
Accepted: 17 February 2020
Published: 02 March 2020
W3C Recommendation 03 March 2022
Copyright © 2022 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and permissive document license rules apply.
Credentials are a part of our daily lives; driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. This specification provides a mechanism to express these sorts of credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
Comments regarding this specification are welcome at any time, but readers should be aware that the comment period regarding this specific version of the document has ended and the Working Group will not be making substantive modifications to this version of the specification at this stage. Please file issues directly on GitHub, or send them to [email protected] (subscribe, archives).
The Working Group has received implementation feedback showing that there are at least two implementations for each normative feature in the specification. The group has obtained reports from fourteen (14) implementations. For details, see the test suite and implementation report .
This document was published by the Verifiable Credentials Working Group as a Recommendation using the Recommendation track .
W3C recommends the wide deployment of this specification as a standard for the Web.
A W3C Recommendation is a specification that, after extensive consensus-building, is endorsed by W3C and its Members, and has commitments from Working Group members to royalty-free licensing for implementations.
This document was produced by a group operating under the W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .
This document is governed by the 2 November 2021 W3C Process Document .
This section is non-normative.
Credentials are a part of our daily lives; driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries. These credentials provide benefits to us when used in the physical world, but their use on the Web continues to be elusive.
Currently it is difficult to express education qualifications, healthcare data, financial account details, and other sorts of third-party verified machine-readable personal information on the Web. The difficulty of expressing digital credentials on the Web makes it challenging to receive the same benefits through the Web that physical credentials provide us in the physical world.
This specification provides a standard way to express credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.
For those unfamiliar with the concepts related to verifiable credentials , the following sections provide an overview of:
In the physical world, a credential might consist of:
A verifiable credential can represent all of the same information that a physical credential represents. The addition of technologies, such as digital signatures, makes verifiable credentials more tamper-evident and more trustworthy than their physical counterparts.
Holders of verifiable credentials can generate verifiable presentations and then share these verifiable presentations with verifiers to prove they possess verifiable credentials with certain characteristics.
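As a rough illustration of the data shapes involved, the following Python sketch builds a minimal credential-like structure and wraps it in a presentation-like envelope. The field names follow the examples in this specification, but the issuer URL, subject identifier, and degree values are illustrative placeholders, and no cryptographic proof is attached:

```python
# A minimal, proof-less sketch of the credential and presentation shapes.
# All identifiers below are illustrative placeholders, not real entities.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "https://example.edu/issuers/565049",
    "issuanceDate": "2010-01-01T19:23:24Z",
    "credentialSubject": {
        "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
        "degree": {"type": "BachelorDegree",
                   "name": "Bachelor of Science and Arts"},
    },
}

# A holder bundles one or more credentials into a presentation
# before sharing them with a verifier.
presentation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    "verifiableCredential": [credential],
}

# A verifier would, at minimum, check that the expected types are declared.
assert "VerifiableCredential" in credential["type"]
assert "VerifiablePresentation" in presentation["type"]
```

In a real deployment both structures would additionally carry `proof` material, as discussed later in the specification.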
Both verifiable credentials and verifiable presentations can be transmitted rapidly, making them more convenient than their physical counterparts when trying to establish trust at a distance.
While this specification attempts to improve the ease of expressing digital credentials , it also attempts to balance this goal with a number of privacy-preserving goals. The persistence of digital information, and the ease with which disparate sources of digital data can be collected and correlated, comprise a privacy concern that the use of verifiable and easily machine-readable credentials threatens to make worse. This document outlines and attempts to address a number of these issues in Section 7. Privacy Considerations . Examples of how to use this data model using privacy-enhancing technologies, such as zero-knowledge proofs, are also provided throughout this document.
The word "verifiable" in the terms verifiable credential and verifiable presentation refers to the characteristic of a credential or presentation as being able to be verified by a verifier , as defined in this document. Verifiability of a credential does not imply that the truth of claims encoded therein can be evaluated; however, the issuer can include values in the evidence property to help the verifier apply their business logic to determine whether the claims have sufficient veracity for their needs.
This section describes the roles of the core actors and the relationships between them in an ecosystem where verifiable credentials are expected to be useful. A role is an abstraction that might be implemented in many different ways. The separation of roles suggests likely interfaces and protocols for standardization. The following roles are introduced in this specification:
Figure 1 above provides an example ecosystem in which to ground the rest of the concepts in this specification. Other ecosystems exist, such as protected environments or proprietary systems, where verifiable credentials also provide benefit.
The Verifiable Credentials Use Cases document [ VC-USE-CASES ] outlines a number of key topics that readers might find useful, including:
From documenting and analyzing those use cases, the following desirable ecosystem characteristics were identified for this specification:
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, MUST NOT, RECOMMENDED, and SHOULD in this document are to be interpreted as described in BCP 14 [ RFC2119 ] [ RFC8174 ] when, and only when, they appear in all capitals, as shown here.
A conforming document is any concrete expression of the data model that complies with the normative statements in this specification. Specifically, all relevant normative statements in Sections 4. Basic Concepts, 5. Advanced Concepts, and 6. Syntaxes of this document MUST be enforced. A serialization format for the conforming document MUST be deterministic, bi-directional, and lossless, as described in Section 6. Syntaxes. The conforming document MAY be transmitted or stored in any such serialization format.
A conforming processor is any algorithm realized as software and/or hardware that generates or consumes a conforming document . Conforming processors MUST produce errors when non-conforming documents are consumed.
This specification makes no normative statements with regard to the conformance of roles in the ecosystem, such as issuers , holders , or verifiers , because the conformance of ecosystem roles is highly specific to the application, use case, and market vertical.
Digital proof mechanisms, a subset of which are digital signatures, are required to ensure the protection of a verifiable credential. Having and validating proofs, which may depend on the syntax of the proof (for example, using the JSON Web Signature of a JSON Web Token to prove possession of a key), is an essential part of processing a verifiable credential. At the time of publication, Working Group members had implemented verifiable credentials using at least three proof mechanisms:
Implementers are advised to note that not all proof mechanisms are standardized as of the publication date of this specification. The group expects some of these mechanisms, as well as new ones, to mature independently and become standardized in time. Given there are multiple valid proof mechanisms, this specification does not standardize on any single digital signature mechanism. One of the goals of this specification is to provide a data model that can be protected by a variety of current and future digital proof mechanisms. Conformance to this specification does not depend on the details of a particular proof mechanism; it requires clearly identifying the mechanism a verifiable credential uses.
This document also contains examples of JSON and JSON-LD content. Some of these examples include characters that are invalid JSON, such as inline comments ( // ) and ellipses ( ... ) that denote information adding little value to the example. Implementers are cautioned to remove this content if they desire to use the information as valid JSON or JSON-LD.
The following terms are used to describe concepts in this specification.
The following sections outline core data model concepts, such as claims , credentials , and presentations , which form the foundation of this specification.
A claim is a statement about a subject . A subject is a thing about which claims can be made. Claims are expressed using subject-property-value relationships.
The data model for claims , illustrated in Figure 2 above, is powerful and can be used to express a large variety of statements. For example, whether someone graduated from a particular university can be expressed as shown in Figure 3 below.
Individual claims can be merged together to express a graph of information about a subject . The example shown in Figure 4 below extends the previous claim by adding the claims that Pat knows Sam and that Sam is employed as a professor.
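The subject-property-value structure described above can be modeled as a set of triples, and individual claims merged into a graph of information keyed by subject. The sketch below is illustrative only; the property names stand in for the vocabulary URIs a real credential would reference.

```python
# Illustrative sketch: claims as subject-property-value triples.
# The subjects and property names below are hypothetical stand-ins.
claims = [
    ("Pat", "alumniOf", "Example University"),  # Pat graduated from Example University
    ("Pat", "knows", "Sam"),                    # Pat knows Sam
    ("Sam", "jobTitle", "Professor"),           # Sam is employed as a professor
]

# Merge the individual claims into one graph of information per subject.
graph = {}
for subject, prop, value in claims:
    graph.setdefault(subject, {})[prop] = value
```

Merging claims this way is what allows the two later claims (Pat knows Sam; Sam is a professor) to extend the original graduation claim into a single connected graph.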
To this point, the concepts of a claim and a graph of information have been introduced. For claims to be trustworthy, more information is expected to be added to the graph.
A credential is a set of one or more claims made by the same entity . Credentials might also include an identifier and metadata to describe properties of the credential, such as the issuer , the expiry date and time, a representative image, a public key to use for verification purposes, the revocation mechanism, and so on. The metadata might be signed by the issuer . A verifiable credential is a set of tamper-evident claims and metadata that cryptographically prove who issued it.
Examples of verifiable credentials include digital employee identification cards, digital birth certificates, and digital educational certificates.
Credential identifiers are often used to identify specific instances of a credential . These identifiers can also be used for correlation. A holder wanting to minimize correlation is advised to use a selective disclosure scheme that does not reveal the credential identifier.
Figure 5 above shows the basic components of a verifiable credential , but abstracts the details about how claims are organized into information graphs , which are then organized into verifiable credentials . Figure 6 below shows a more complete depiction of a verifiable credential , which is normally composed of at least two information graphs . The first graph expresses the verifiable credential itself, which contains credential metadata and claims . The second graph expresses the digital proof, which is usually a digital signature.
It is possible to have a credential , such as a marriage certificate, containing multiple claims about different subjects that are not required to be related.
It is possible to have a credential that does not contain any claims about the entity to which the credential was issued. For example, a credential that only contains claims about a specific dog, but is issued to its owner.
Enhancing privacy is a key design feature of this specification. Therefore, it is important for entities using this technology to be able to express only the portions of their persona that are appropriate for a given situation. The expression of a subset of one's persona is called a verifiable presentation . Examples of different personas include a person's professional persona, their online gaming persona, their family persona, or an incognito persona.
A verifiable presentation expresses data from one or more verifiable credentials , and is packaged in such a way that the authorship of the data is verifiable . If verifiable credentials are presented directly, they become verifiable presentations . Data formats derived from verifiable credentials that are cryptographically verifiable , but do not of themselves contain verifiable credentials , might also be verifiable presentations .
The data in a presentation is often about the same subject , but might have been issued by multiple issuers . The aggregation of this information typically expresses an aspect of a person, organization, or entity .
Figure 7 above shows the components of a verifiable presentation , but abstracts the details about how verifiable credentials are organized into information graphs , which are then organized into verifiable presentations .
Figure 8 below shows a more complete depiction of a verifiable presentation , which is normally composed of at least four information graphs . The first of these information graphs , the Presentation Graph , expresses the verifiable presentation itself, which contains presentation metadata. The verifiableCredential property in the Presentation Graph refers to one or more verifiable credentials , each being one of the second information graphs , i.e., a self-contained Credential Graph , which in turn contains credential metadata and claims. The third information graph , the Credential Proof Graph , expresses the credential graph proof, which is usually a digital signature. The fourth information graph , the Presentation Proof Graph , expresses the presentation graph proof, which is usually a digital signature.
It is possible to have a presentation , such as a business persona, which draws on multiple credentials about different subjects that are often, but not required to be, related.
The previous sections introduced the concepts of claims , verifiable credentials , and verifiable presentations using graphical depictions. This section provides a concrete set of simple but complete lifecycle examples of the data model expressed in one of the concrete syntaxes supported by this specification. The lifecycle of credentials and presentations in the Verifiable Credentials Ecosystem often takes a common path:
To illustrate this lifecycle, we will use the example of redeeming an alumni discount from a university. In the example below, Pat receives an alumni verifiable credential from a university, and Pat stores the verifiable credential in a digital wallet.
Pat then attempts to redeem the alumni discount. The verifier , a ticket sales system, states that any alumni of "Example University" receives a discount on season tickets to sporting events. Using a mobile device, Pat starts the process of purchasing a season ticket. A step in this process requests an alumni verifiable credential , and this request is routed to Pat's digital wallet. The digital wallet asks Pat if they would like to provide a previously issued verifiable credential . Pat selects the alumni verifiable credential , which is then composed into a verifiable presentation . The verifiable presentation is sent to the verifier and verified .
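The lifecycle described above can be sketched as plain data structures. Everything in this sketch is illustrative: the identifiers, the wallet, and the omitted proofs stand in for a real issuance protocol and signature suite.

```python
# Hypothetical alumni credential as issued by the university (proofs elided).
alumni_credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1",
    ],
    "type": ["VerifiableCredential", "AlumniCredential"],
    "issuer": "https://example.edu/issuers/565049",
    "credentialSubject": {
        "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
        "alumniOf": "Example University",
    },
}

# Pat stores the credential in a digital wallet.
wallet = [alumni_credential]

# When the ticket sales system requests an alumni credential,
# the selected credential is composed into a verifiable presentation.
presentation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    "verifiableCredential": [wallet[0]],
}
```

The presentation, once a proof is attached by the holder, is what gets sent to the verifier.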
Implementers that are interested in understanding more about the proof mechanism used above can learn more in Section 4.7 Proofs (Signatures) and by reading the following specifications: Data Integrity [ DATA-INTEGRITY ], Linked Data Cryptographic Suites Registry [ LDP-REGISTRY ], and JSON Web Signature (JWS) Unencoded Payload Option [ RFC7797 ]. A list of proof mechanisms is available in the Verifiable Credentials Extension Registry [ VC-EXTENSION-REGISTRY ].
This section introduces some basic concepts for the specification, in preparation for Section 5. Advanced Concepts later in the document.
When two software systems need to exchange data, they need to use terminology that both systems understand. As an analogy, consider how two people communicate. Both people must use the same language and the words they use must mean the same thing to each other. This might be referred to as the context of a conversation .
Verifiable credentials and verifiable presentations have many attributes and values that are identified by URIs [ RFC3986 ]. However, those URIs can be long and not very human-friendly. In such cases, short-form human-friendly aliases can be more helpful. This specification uses the @context property to map such short-form aliases to the URIs required by specific verifiable credentials and verifiable presentations .
In JSON-LD, the @context property can also be used to communicate other details, such as datatype information, language information, transformation rules, and so on, which are beyond the needs of this specification, but might be useful in the future or to related work. For more information, see Section 3.1: The Context of the [ JSON-LD ] specification.
Verifiable credentials and verifiable presentations MUST include a @context property .
Though this specification requires that a @context property be present, it is not required that the value of the @context property be processed using JSON-LD. This is to support processing using plain JSON libraries, such as those that might be used when the verifiable credential is encoded as a JWT. All libraries or processors MUST ensure that the order of the values in the @context property is what is expected for the specific application. Libraries or processors that support JSON-LD can process the @context property using full JSON-LD processing as expected.
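A plain-JSON processor can enforce the expected ordering without any JSON-LD machinery. The sketch below uses an illustrative credential fragment; only the @context values matter here.

```python
# Illustrative credential fragment with the base context first,
# followed by this document's example context.
credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1",
    ],
    "type": ["VerifiableCredential"],
}

def context_order_ok(doc):
    """Plain-JSON check: the base context URI must appear as the first value."""
    ctx = doc.get("@context", [])
    return isinstance(ctx, list) and len(ctx) > 0 and \
        ctx[0] == "https://www.w3.org/2018/credentials/v1"
```

A JSON-LD implementation would instead process the full context; this check is the lightweight alternative available to plain-JSON libraries.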
The example above uses the base context URI ( https://www.w3.org/2018/credentials/v1 ) to establish that the conversation is about a verifiable credential . The second URI ( https://www.w3.org/2018/credentials/examples/v1 ) establishes that the conversation is about examples.
This document uses the example context URI ( https://www.w3.org/2018/credentials/examples/v1 ) for the purpose of demonstrating examples. Implementations are expected to not use this URI for any other purpose, such as in pilot or production systems.
The data available at https://www.w3.org/2018/credentials/v1 is a static document that is never updated and SHOULD be downloaded and cached. The associated human-readable vocabulary document for the Verifiable Credentials Data Model is available at https://www.w3.org/2018/credentials/ . This concept is further expanded on in Section 5.3 Extensibility .
When expressing statements about a specific thing, such as a person, product, or organization, it is often useful to use some kind of identifier so that others can express statements about the same thing. This specification defines the optional id property for such identifiers. The id property is intended to unambiguously refer to an object, such as a person, product, or organization. Using the id property allows for the expression of statements about specific things in the verifiable credential .
If the id property is present:
Developers should remember that identifiers might be harmful in scenarios where pseudonymity is required. Developers are encouraged to read Section 7.3 Identifier-Based Correlation carefully when considering such scenarios. There are also other types of correlation mechanisms documented in Section 7. Privacy Considerations that create privacy concerns. Where privacy is a strong consideration, the id property MAY be omitted.
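The two kinds of identifiers discussed here can be sketched as follows; the specific URL and DID values are illustrative.

```python
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    # Identifier for the verifiable credential itself: an HTTP-based URL.
    "id": "http://example.edu/credentials/3732",
    "type": ["VerifiableCredential"],
    # Identifier for the subject of the claims:
    # a decentralized identifier, also known as a DID.
    "credentialSubject": {"id": "did:example:ebfeb1f712ebc6f1c276e12ec21"},
}
```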
The example above uses two types of identifiers. The first identifier is for the verifiable credential and uses an HTTP-based URL. The second identifier is for the subject of the verifiable credential (the thing the claims are about) and uses a decentralized identifier , also known as a DID .
As of this publication, DIDs are a new type of identifier that are not necessary for verifiable credentials to be useful. Specifically, verifiable credentials do not depend on DIDs and DIDs do not depend on verifiable credentials . However, it is expected that many verifiable credentials will use DIDs and that software libraries implementing this specification will probably need to resolve DIDs . DID -based URLs are used for expressing identifiers associated with subjects , issuers , holders , credential status lists, cryptographic keys, and other machine-readable information associated with a verifiable credential .
Software systems that process the kinds of objects specified in this document use type information to determine whether or not a provided verifiable credential or verifiable presentation is appropriate. This specification defines a type property for the expression of type information.
Verifiable credentials and verifiable presentations MUST have a type property . That is, any credential or presentation that does not have a type property is not verifiable , and so is neither a verifiable credential nor a verifiable presentation .
With respect to this specification, the following table lists the objects that MUST have a type specified.
Object | Type |
---|---|
Verifiable credential object (a subclass of a credential object) | VerifiableCredential and, optionally, a more specific verifiable credential type. For example, "type": ["VerifiableCredential", "UniversityDegreeCredential"] |
Verifiable presentation object (a subclass of a presentation object) | VerifiablePresentation and, optionally, a more specific verifiable presentation type. For example, "type": ["VerifiablePresentation", "CredentialManagerPresentation"] |
proof object | A valid proof type. For example, "type": "RsaSignature2018" |
credentialStatus object | A valid credential status type. For example, "type": "CredentialStatusList2017" |
termsOfUse object | A valid terms of use type. For example, "type": "OdrlPolicy2017" |
evidence object | A valid evidence type. For example, "type": ["DocumentVerification"] |
The type system for the Verifiable Credentials Data Model is the same as for [ JSON-LD ] and is detailed in Section 5.4: Specifying the Type and Section 8: JSON-LD Grammar. When using a JSON-LD context (see Section 5.3 Extensibility ), this specification aliases the @type keyword to type to make the JSON-LD documents more easily understood. While application developers and document authors do not need to understand the specifics of the JSON-LD type system, implementers of this specification who want to support interoperable extensibility do.
All credentials , presentations , and encapsulated objects MUST specify, or be associated with, additional more narrow types (like UniversityDegreeCredential , for example) so software systems can process this additional information.
When processing encapsulated objects defined in this specification, (for example, objects associated with the credentialSubject object or deeply nested therein), software systems SHOULD use the type information specified in encapsulating objects higher in the hierarchy. Specifically, an encapsulating object, such as a credential , SHOULD convey the associated object types so that verifiers can quickly determine the contents of an associated object based on the encapsulating object type .
For example, a credential object with the type of UniversityDegreeCredential signals to a verifier that the object associated with the credentialSubject property contains the identifier for the:
This enables implementers to rely on values associated with the type property for verification purposes. The expectation of types and their associated properties should be documented in at least a human-readable specification, and preferably, in an additional machine-readable representation.
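A verifier following the guidance above can branch on the type array before inspecting credentialSubject. This is a hedged sketch: the mapping function and the set of expected properties are hypothetical, standing in for whatever a deployment's human- or machine-readable type specification documents.

```python
def expected_subject_properties(credential):
    """Hypothetical mapping from credential type to the claim properties
    a verifier expects to find in the credentialSubject object."""
    types = credential.get("type", [])
    if "UniversityDegreeCredential" in types:
        return {"degree"}
    return set()

credential = {
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "credentialSubject": {
        "id": "did:example:abcdef1234567",
        "degree": {"type": "BachelorDegree"},
    },
}

# Properties the type promises but the subject object lacks (empty if well formed).
missing = expected_subject_properties(credential) - credential["credentialSubject"].keys()
```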
The type system used in the data model described in this specification allows for multiple ways to associate types with data. Implementers and authors are urged to read the section on typing in the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ].
A verifiable credential contains claims about one or more subjects . This specification defines a credentialSubject property for the expression of claims about one or more subjects .
A verifiable credential MUST have a credentialSubject property .
It is possible to express information related to multiple subjects in a verifiable credential . The example below specifies two subjects who are spouses. Note the use of array notation to associate multiple subjects with the credentialSubject property.
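A multi-subject credential along the lines described above might be sketched as follows; the names, DIDs, and the RelationshipCredential type are illustrative.

```python
credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1",
    ],
    "type": ["VerifiableCredential", "RelationshipCredential"],
    # Array notation: one entry per subject, each subject cross-referencing
    # the other via the spouse property.
    "credentialSubject": [
        {
            "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
            "name": "Jayden Doe",
            "spouse": "did:example:c276e12ec21ebfeb1f712ebc6f1",
        },
        {
            "id": "did:example:c276e12ec21ebfeb1f712ebc6f1",
            "name": "Morgan Doe",
            "spouse": "did:example:ebfeb1f712ebc6f1c276e12ec21",
        },
    ],
}
```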
This specification defines a property for expressing the issuer of a verifiable credential .
A verifiable credential MUST have an issuer property .
It is also possible to express additional information about the issuer by associating an object with the issuer property:
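A sketch of the issuer expressed as an object rather than a bare URI; the issuer URL and name value are illustrative.

```python
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    # The issuer as an object: the id carries the issuer URI, and
    # additional information (here a human-readable name) can sit alongside it.
    "issuer": {
        "id": "https://example.edu/issuers/14",
        "name": "Example University",
    },
}
```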
The value of the issuer property can also be a JWK (for example, "https://example.com/keys/foo.jwk" ) or a DID (for example, "did:example:abfe13f712120431c276e12ecab" ).
This specification defines the issuanceDate property for expressing the date and time when a credential becomes valid.
It is expected that the next version of this specification will add the validFrom property and will deprecate the issuanceDate property in favor of a new issued property . The range of values for both properties is expected to remain [ XMLSCHEMA11-2 ] combined date-time strings. Implementers are advised that the validFrom and issued properties are reserved and their use for any other purpose is discouraged.
At least one proof mechanism, and the details necessary to evaluate that proof, MUST be expressed for a credential or presentation to be a verifiable credential or verifiable presentation ; that is, to be verifiable .
This specification identifies two classes of proof mechanisms: external proofs and embedded proofs. An external proof is one that wraps an expression of this data model, such as a JSON Web Token, which is elaborated on in Section 6.3.1 JSON Web Token . An embedded proof is a mechanism where the proof is included in the data, such as a Linked Data Signature, which is elaborated upon in Section 6.3.2 Data Integrity Proofs .
When embedding a proof, the proof property MUST be used.
Because the method used for a mathematical proof varies by representation language and the technology used, the set of name-value pairs that is expected as the value of the proof property will vary accordingly. For example, if digital signatures are used for the proof mechanism, the proof property is expected to have name-value pairs that include a signature, a reference to the signing entity, and a representation of the signing date. The example below uses RSA digital signatures.
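An embedded proof of the kind just described might carry name-value pairs like the sketch below. The structure follows the RSA signature suite mentioned above, but the key reference is illustrative and the jws value is a truncated placeholder, not a real signature.

```python
proof = {
    "type": "RsaSignature2018",            # the proof mechanism in use
    "created": "2017-06-18T21:19:10Z",     # representation of the signing date
    "proofPurpose": "assertionMethod",
    # Reference to the signing entity's verification key (illustrative URL).
    "verificationMethod": "https://example.edu/issuers/565049#key-1",
    # Placeholder signature value, truncated for the example.
    "jws": "eyJhbGciOiJSUzI1NiIs...",
}
```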
As discussed in Section 1.4 Conformance , there are multiple viable proof mechanisms, and this specification does not standardize nor recommend any single proof mechanism for use with verifiable credentials . For more information about the proof mechanism, see the following specifications: Data Integrity [ DATA-INTEGRITY ], Linked Data Cryptographic Suites Registries [ LDP-REGISTRY ], and JSON Web Signature (JWS) Unencoded Payload Option [ RFC7797 ]. A list of proof mechanisms is available in the Verifiable Credentials Extension Registry [ VC-EXTENSION-REGISTRY ].
This specification defines the expirationDate property for the expression of credential expiration information.
It is expected that the next version of this specification will add the validUntil property in a way that deprecates, but preserves backwards compatibility with the expirationDate property . Implementers are advised that the validUntil property is reserved and its use for any other purpose is discouraged.
This specification defines the following credentialStatus property for the discovery of information about the current status of a verifiable credential , such as whether it is suspended or revoked.
The precise contents of the credential status information is determined by the specific credentialStatus type definition, and varies depending on factors such as whether it is simple to implement or if it is privacy-enhancing.
Defining the data model, formats, and protocols for status schemes is out of scope for this specification. A Verifiable Credential Extension Registry [ VC-EXTENSION-REGISTRY ] exists that contains available status schemes for implementers who want to implement verifiable credential status checking.
Presentations MAY be used to combine and present credentials . They can be packaged in such a way that the authorship of the data is verifiable . The data in a presentation is often all about the same subject , but there is no limit to the number of subjects or issuers in the data. The aggregation of information from multiple verifiable credentials is a typical use of verifiable presentations .
A verifiable presentation is typically composed of the following properties:
The example below shows a verifiable presentation that embeds verifiable credentials .
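A minimal sketch of such a presentation; the embedded credential is abbreviated and both proof objects are placeholders for real Data Integrity proofs.

```python
presentation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    # One or more verifiable credentials embedded directly in the presentation.
    "verifiableCredential": [
        {
            "@context": ["https://www.w3.org/2018/credentials/v1"],
            "type": ["VerifiableCredential"],
            "issuer": "https://example.edu/issuers/565049",
            "credentialSubject": {"id": "did:example:ebfeb1f712ebc6f1c276e12ec21"},
            "proof": {"type": "RsaSignature2018"},  # credential proof (placeholder)
        }
    ],
    "proof": {"type": "RsaSignature2018"},          # presentation proof (placeholder)
}
```

Note that the credential and the presentation each carry their own proof, matching the multi-graph structure described earlier.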
The contents of the verifiableCredential property shown above are verifiable credentials , as described by this specification. The contents of the proof property are proofs, as described by the Data Integrity [ DATA-INTEGRITY ] specification. An example of a verifiable presentation using the JWT proof mechanism is given in section 6.3.1 JSON Web Token .
Some zero-knowledge cryptography schemes might enable holders to indirectly prove they hold claims from a verifiable credential without revealing the verifiable credential itself. In these schemes, a claim from a verifiable credential might be used to derive a presented value, which is cryptographically asserted such that a verifier can trust the value if they trust the issuer .
For example, a verifiable credential containing the claim date of birth might be used to derive the presented value over the age of 15 in a manner that is cryptographically verifiable . That is, a verifier can still trust the derived value if they trust the issuer .
For an example of a ZKP-style verifiable presentation containing derived data instead of directly embedded verifiable credentials , see Section 5.8 Zero-Knowledge Proofs .
Selective disclosure schemes using zero-knowledge proofs can use claims expressed in this model to prove additional statements about those claims . For example, a claim specifying a subject's date of birth can be used as a predicate to prove the subject's age is within a given range, and therefore prove the subject qualifies for age-related discounts, without actually revealing the subject's birthdate. The holder has the flexibility to use the claim in any way that is applicable to the desired verifiable presentation .
Building on the concepts introduced in Section 4. Basic Concepts , this section explores more complex topics about verifiable credentials .
Section 1.2 Ecosystem Overview provided an overview of the verifiable credential ecosystem. This section provides more detail about how the ecosystem is envisaged to operate.
The roles and information flows in the verifiable credential ecosystem are as follows:
The order of the actions above is not fixed, and some actions might be taken more than once. Such recurrence might occur immediately or at any later point.
The most common sequence of actions is envisioned to be:
This specification does not define any protocol for transferring verifiable credentials or verifiable presentations , but assuming other specifications define how they are transferred between entities, this Verifiable Credentials Data Model is directly applicable.
This specification also does not define an authorization framework nor the decisions that a verifier might make after verifying a verifiable credential or verifiable presentation , taking into account the holder , the issuers of the verifiable credentials , the contents of the verifiable credentials , and its own policies.
In particular, Sections 5.6 Terms of Use and C. Subject-Holder Relationships specify how a verifier can determine:
The verifiable credentials trust model is as follows:
This trust model differentiates itself from other trust models by ensuring the:
By decoupling trust between the identity provider and the relying party, a more flexible and dynamic trust model is created, increasing market competition and customer choice.
For more information about how this trust model interacts with various threat models studied by the Working Group, see the Verifiable Credentials Use Cases document [ VC-USE-CASES ].
The data model detailed in this specification does not imply a transitive trust model, such as that provided by more traditional Certificate Authority trust models. In the Verifiable Credentials Data Model, a verifier either directly trusts or does not trust an issuer . While it is possible to build transitive trust models using the Verifiable Credentials Data Model, implementers are urged to learn about the security weaknesses introduced by broadly delegating trust in the manner adopted by Certificate Authority systems.
One of the goals of the Verifiable Credentials Data Model is to enable permissionless innovation. To achieve this, the data model needs to be extensible in a number of different ways. The data model is required to:
This approach to data modeling is often called an open world assumption , meaning that any entity can say anything about any other entity. While this approach seems to conflict with building simple and predictable software systems, balancing extensibility with program correctness is always more challenging with an open world assumption than with closed software systems.
The rest of this section describes, through a series of examples, how both extensibility and program correctness are achieved.
Let us assume we start with the verifiable credential shown below.
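The starting credential might look like the following sketch, with illustrative fields and the proof elided.

```python
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "credentialSubject": {
        "id": "did:example:abcdef1234567",
        "name": "Jane Doe",
    },
}
```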
This verifiable credential states that the entity associated with did:example:abcdef1234567 has a name with a value of Jane Doe .
Now let us assume a developer wants to extend the verifiable credential to store two additional pieces of information: an internal corporate reference number, and Jane's favorite food.
The first thing to do is to create a JSON-LD context containing two new terms, as shown below.
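Such a context might be sketched as below. The term names referenceNumber and favoriteFood match the two new pieces of information, but the vocabulary URIs they map to are hypothetical; a real deployment would use a namespace the developer controls.

```python
# Hypothetical JSON-LD context mapping the two new terms to URIs.
my_context = {
    "@context": {
        "referenceNumber": "https://example.com/vocab#referenceNumber",
        "favoriteFood": "https://example.com/vocab#favoriteFood",
    }
}
```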
After this JSON-LD context is created, the developer publishes it somewhere so it is accessible to verifiers who will be processing the verifiable credential . Assuming the above JSON-LD context is published at https://example.com/contexts/mycontext.jsonld , we can extend this example by including the context and adding the new properties and credential type to the verifiable credential .
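The extended credential might then look like the following sketch; the more specific CustomCredential type and the property values are illustrative.

```python
extended_credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        # The newly published extension context.
        "https://example.com/contexts/mycontext.jsonld",
    ],
    # Hypothetical more specific credential type added by the extension.
    "type": ["VerifiableCredential", "CustomCredential"],
    # New property defined by the extension context: corporate reference number.
    "referenceNumber": 83294847,
    "credentialSubject": {
        "id": "did:example:abcdef1234567",
        "name": "Jane Doe",
        # New property defined by the extension context: Jane's favorite food.
        "favoriteFood": "Papaya",
    },
}
```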
This example demonstrates extending the Verifiable Credentials Data Model in a permissionless and decentralized way. The approach shown also prevents namespace conflicts and semantic ambiguity in verifiable credentials created this way.
A dynamic extensibility model such as this does increase the implementation burden. Software written for such a system has to determine whether verifiable credentials with extensions are acceptable based on the risk profile of the application. Some applications might accept only certain extensions while highly secure environments might not accept any extensions. These decisions are up to the developers of these applications and are specifically not the domain of this specification.
Developers are urged to ensure that extension JSON-LD contexts are highly available. Implementations that cannot fetch a context will produce an error. Strategies for ensuring that extension JSON-LD contexts are always available include using content-addressed URLs for contexts, bundling context documents with implementations, or enabling aggressive caching of contexts.
Implementers are advised to pay close attention to the extension points in this specification, such as in Sections 4.7 Proofs (Signatures) , 4.9 Status , 5.4 Data Schemas , 5.5 Refreshing , 5.6 Terms of Use , and 5.7 Evidence . While this specification does not define concrete implementations for those extension points, the Verifiable Credentials Extension Registry [ VC-EXTENSION-REGISTRY ] provides an unofficial, curated list of extensions that developers can use from these extension points.
This specification ensures that "plain" JSON and JSON-LD syntaxes are semantically compatible without requiring JSON implementations to use a JSON-LD processor. To achieve this, the specification imposes the following additional requirements on both syntaxes:
A human-readable document describing the expected order of values for the @context property is expected to be published by any implementer seeking interoperability. A machine-readable description (that is, a normal JSON-LD Context document) is expected to be published at the URL specified in the @context property by JSON-LD implementers seeking interoperability.
The requirements above guarantee semantic interoperability between JSON and JSON-LD for terms defined by the @context mechanism. While JSON-LD processors will use the specific mechanism provided and can verify that all terms are correctly specified, JSON-based processors implicitly accept the same set of terms without testing that they are correct. In other words, the context in which the data exchange happens is explicitly stated for both JSON and JSON-LD by using the same mechanism. With respect to JSON-based processors, this is achieved in a lightweight manner, without having to use JSON-LD processing libraries.
Data schemas are useful when enforcing a specific structure on a given collection of data. There are at least two types of data schemas that this specification considers:
It is important to understand that data schemas serve a different purpose from the @context property, which neither enforces data structure nor data syntax, nor enables the definition of arbitrary encodings to alternate representation formats.
This specification defines the following property for the expression of a data schema, which can be included by an issuer in the verifiable credentials that it issues:
The credentialSchema property provides an opportunity to annotate type definitions or lock them to specific versions of the vocabulary. Authors of verifiable credentials can include a static version of their vocabulary using credentialSchema that is locked to some content integrity protection mechanism. The credentialSchema property also makes it possible to perform syntactic checking on the credential and to use verification mechanisms such as JSON Schema [ JSON-SCHEMA-2018 ] validation.
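For instance, an issuer might reference a JSON Schema from a verifiable credential as in the following sketch (the identifiers, schema URL, and claim values are illustrative, and the proof is omitted for brevity):

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.edu/credentials/3732",
  "type": ["VerifiableCredential", "UniversityDegreeCredential"],
  "issuer": "https://example.edu/issuers/14",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "degree": {
      "type": "BachelorDegree",
      "name": "Bachelor of Science and Arts"
    }
  },
  "credentialSchema": {
    "id": "https://example.org/examples/degree.json",
    "type": "JsonSchemaValidator2018"
  }
}
```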
In the example above, the issuer is specifying a credentialSchema , which points to a [ JSON-SCHEMA-2018 ] file that can be used by a verifier to determine if the verifiable credential is well formed.
For information about linkages to JSON Schema [ JSON-SCHEMA-2018 ] or other optional verification mechanisms, see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
Data schemas can also be used to specify mappings to other binary formats, such as those used to perform zero-knowledge proofs. For more information on using the credentialSchema property with zero-knowledge proofs, see Section 5.8 Zero-Knowledge Proofs .
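Such a mapping might be expressed with a credentialSchema fragment along the following lines (the type name and DID-based identifier are illustrative):

```json
"credentialSchema": {
  "id": "did:example:cdf:35LB7w9ueWbagPL94T9bMLtyXDj9pX5o",
  "type": "ZkpExampleSchema2018"
}
```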
In the example above, the issuer is specifying a credentialSchema pointing to a zero-knowledge packed binary data format that is capable of transforming the input data into a form that a verifier can use to determine whether the proof provided with the verifiable credential is valid.
It is useful for systems to enable the manual or automatic refresh of an expired verifiable credential . For more information about expired verifiable credentials , see Section 4.8 Expiration . This specification defines a refreshService property , which enables an issuer to include a link to a refresh service.
The issuer can include the refresh service as an element inside the verifiable credential if it is intended for either the verifier or the holder (or both), or inside the verifiable presentation if it is intended for the holder only. In the latter case, this enables the holder to refresh the verifiable credential before creating a verifiable presentation to share with a verifier . In the former case, including the refresh service inside the verifiable credential enables either the holder or the verifier to perform future updates of the credential .
The refresh service is only expected to be used when either the credential has expired or the issuer does not publish credential status information. Issuers are advised not to put the refreshService property in a verifiable credential that does not contain public information or whose refresh service is not protected in some way.
Placing a refreshService property in a verifiable credential so that it is available to verifiers can remove control and consent from the holder and allow the verifiable credential to be issued directly to the verifier , thereby bypassing the holder .
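A refresh service entry inside a verifiable credential might be expressed with a fragment such as the following (the service type is illustrative):

```json
"refreshService": {
  "id": "https://example.edu/refresh/3732",
  "type": "ManualRefreshService2018"
}
```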
In the example above, the issuer specifies a manual refreshService that can be used by directing the holder or the verifier to https://example.edu/refresh/3732 .
Terms of use can be utilized by an issuer or a holder to communicate the terms under which a verifiable credential or verifiable presentation was issued. The issuer places their terms of use inside the verifiable credential . The holder places their terms of use inside a verifiable presentation . This specification defines a termsOfUse property for expressing terms of use information.
The value of the termsOfUse property tells the verifier what actions it is required to perform (an obligation ), not allowed to perform (a prohibition ), or allowed to perform (a permission ) if it is to accept the verifiable credential or verifiable presentation .
Further study is required to determine how a subject who is not a holder places terms of use on their verifiable credentials . One way could be for the subject to request the issuer to place the terms of use inside the issued verifiable credentials . Another way could be for the subject to delegate a verifiable credential to a holder and place terms of use restrictions on the delegated verifiable credential .
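An issuer's terms of use might be expressed with an ODRL-style policy fragment such as the following sketch (the policy type, profile URL, and action names are illustrative):

```json
"termsOfUse": [{
  "type": "IssuerPolicy",
  "id": "http://example.com/policies/credential/4",
  "profile": "http://example.com/profiles/credential",
  "prohibition": [{
    "assigner": "https://example.edu/issuers/14",
    "assignee": "AllVerifiers",
    "target": "http://example.edu/credentials/3732",
    "action": ["Archival"]
  }]
}]
```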
In the example above, the issuer (the assigner ) is prohibiting verifiers (the assignee ) from storing the data in an archive.
Warning: The termsOfUse property is improperly defined within the VerifiablePresentation scoped context. This is a bug in the version 1 context and will be fixed in the version 2 context. In the meantime, implementers who wish to use this feature will be required to extend the context of their verifiable presentation with an additional term that defines the termsOfUse property, which can then be used alongside the verifiable presentation type property, in order for the term to be semantically recognized by a JSON-LD processor.
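A holder's terms of use inside a verifiable presentation might be expressed with a fragment such as the following sketch (the policy type, profile URL, and action names are illustrative):

```json
"termsOfUse": [{
  "type": "HolderPolicy",
  "id": "http://example.com/policies/credential/6",
  "profile": "http://example.com/profiles/credential",
  "prohibition": [{
    "assigner": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "assignee": "https://wineonline.example.org",
    "target": "http://example.edu/credentials/3732",
    "action": ["3rdPartyCorrelation"]
  }]
}]
```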
In the example above, the holder (the assigner ), who is also the subject , expressed a term of use prohibiting the verifier (the assignee , https://wineonline.example.org ) from using the information provided to correlate the holder or subject using a third-party service. If the verifier were to use a third-party service for correlation, they would violate the terms under which the holder created the presentation .
This feature is also expected to be used by government-issued verifiable credentials to instruct digital wallets to limit their use to similar government organizations in an attempt to protect citizens from unexpected usage of sensitive data. Similarly, some verifiable credentials issued by private industry are expected to limit usage to within departments inside the organization, or during business hours. Implementers are urged to read more about this rapidly evolving feature in the appropriate section of the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
Evidence can be included by an issuer to provide the verifier with additional supporting information in a verifiable credential . This could be used by the verifier to establish the confidence with which it relies on the claims in the verifiable credential .
For example, an issuer could check physical documentation provided by the subject or perform a set of background checks before issuing the credential . In certain scenarios, this information is useful to the verifier when determining the risk associated with relying on a given credential .
This specification defines the evidence property for expressing evidence information.
For information about how attachments and references to credentials and non-credential data might be supported by the specification, see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
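An evidence entry recording a document check might look like the following fragment (the evidence type, field names, and values are illustrative):

```json
"evidence": [{
  "id": "https://example.edu/evidence/f2aeec97-fc0d-42bf-8ca7-0548192d4231",
  "type": ["DocumentVerification"],
  "verifier": "https://example.edu/issuers/14",
  "evidenceDocument": "DriversLicense",
  "subjectPresence": "Physical",
  "documentPresence": "Physical",
  "licenseNumber": "123AB4567"
}]
```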
In this evidence example, the issuer is asserting that they physically matched the subject of the credential to a physical copy of a driver's license with the stated license number. The evidence thereby records that "Example University" verified the subject before issuance of the credential and how it did so (physical verification).
The evidence property provides different and complementary information to the proof property . The evidence property is used to express supporting information, such as documentary evidence, related to the integrity of the verifiable credential . In contrast, the proof property is used to express machine-verifiable mathematical proofs related to the authenticity of the issuer and integrity of the verifiable credential . For more information about the proof property , see Section 4.7 Proofs (Signatures) .
A zero-knowledge proof is a cryptographic method where an entity can prove to another entity that they know a certain value without disclosing the actual value. A real-world example is proving that an accredited university has granted a degree to you without revealing your identity or any other personally identifiable information contained on the degree.
The key capabilities introduced by zero-knowledge proof mechanisms are the ability of a holder to:
This specification describes a data model that supports selective disclosure with the use of zero-knowledge proof mechanisms. The examples below highlight how the data model can be used to issue, present, and verify zero-knowledge verifiable credentials .
For a holder to use a zero-knowledge verifiable presentation , they need an issuer to have issued a verifiable credential in a manner that enables the holder to derive a proof from the originally issued verifiable credential , so that the holder can present the information to a verifier in a privacy-enhancing manner. This implies that the holder can prove the validity of the issuer's signature without revealing the values that were signed, or when only revealing certain selected values. The standard practice is to do so by proving knowledge of the signature, without revealing the signature itself. There are two requirements for verifiable credentials when they are to be used in zero-knowledge proof systems.
The following example shows one method of using verifiable credentials in zero-knowledge. It makes use of a Camenisch-Lysyanskaya Signature [ CL-SIGNATURES ], which allows the presentation of the verifiable credential in a way that supports the privacy of the holder and subject through the use of selective disclosure of the verifiable credential values. Some other cryptographic systems which rely upon zero-knowledge proofs to selectively disclose attributes can be found in the [ LDP-REGISTRY ] as well.
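A credential prepared for such a system might carry a credential definition reference in credentialSchema together with a CL-style proof, as in the following sketch (all identifiers are illustrative and the proof values are truncated placeholders, not real cryptographic material):

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "type": ["VerifiableCredential", "UniversityDegreeCredential"],
  "issuer": "https://example.edu/issuers/14",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "degree": {
      "type": "BachelorDegree",
      "name": "Bachelor of Science and Arts"
    }
  },
  "credentialSchema": {
    "id": "did:example:cdf:35LB7w9ueWbagPL94T9bMLtyXDj9pX5o",
    "type": "did:example:schema:22KpkXgecryx9k7N6XN1QoN3gXwBkSU8"
  },
  "proof": {
    "type": "CamenischLysyanskayaSignature2019",
    "issuerData": "5NQ4TgzNfSQxoLzf2d5AV3JNiCdMaTgm",
    "attributes": "pPYmqDvwwWBDPNykXVrBtKdsJDeZUGFA",
    "signature": "8eGWSiTiWtEA8WnBwX4T259STpxpRKuk",
    "signatureCorrectnessProof": "SNQbW3u1QV5q89qhxA1xyVqFa6jCrKwv"
  }
}
```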
The example above provides the verifiable credential definition by using the credentialSchema property and a specific proof that is usable in the Camenisch-Lysyanskaya Zero-Knowledge Proof system.
The next example utilizes the verifiable credential above to generate a new derived verifiable credential with a privacy-preserving proof. The derived verifiable credential is then placed in a verifiable presentation , so that the verifiable credential discloses only the claims and additional credential metadata that the holder intended. To do this, all of the following requirements are expected to be met:
Important details regarding the format for the credential definition and of the proofs are omitted on purpose because they are outside of the scope of this document. The purpose of this section is to guide implementers who want to extend verifiable credentials and verifiable presentations to support zero-knowledge proof systems.
There are at least two different cases to consider for an entity wanting to dispute a credential issued by an issuer :
The mechanism for issuing a DisputeCredential is the same as for a regular credential, except that the credentialSubject identifier in the DisputeCredential is the identifier of the disputed credential .
For example, if a credential with an identifier of https://example.org/credentials/245 is disputed, the subject can issue the credential shown below and present it to the verifier along with the disputed credential .
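Such a DisputeCredential might look like the following sketch (the issuer identifier and status vocabulary are illustrative, and the proof is omitted for brevity):

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.com/credentials/123",
  "type": ["VerifiableCredential", "DisputeCredential"],
  "credentialSubject": {
    "id": "http://example.org/credentials/245",
    "currentStatus": "Disputed",
    "statusReason": {
      "value": "Address is out of date.",
      "lang": "en"
    }
  },
  "issuer": "https://example.org/people#me",
  "issuanceDate": "2017-12-05T14:27:42Z"
}
```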
In the above verifiable credential the issuer is claiming that the address in the disputed verifiable credential is wrong.
If a credential does not have an identifier, a content-addressed identifier can be used to identify the disputed credential . Similarly, content-addressed identifiers can be used to uniquely identify individual claims.
This area of study is rapidly evolving and developers that are interested in publishing credentials that dispute the veracity of other credentials are urged to read the section related to disputes in the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
Verifiable credentials are intended as a means of reliably identifying subjects . While it is recognized that Role Based Access Controls (RBACs) and Attribute Based Access Controls (ABACs) rely on this identification as a means of authorizing subjects to access resources, this specification does not provide a complete solution for RBAC or ABAC. Authorization is not an appropriate use for this specification without an accompanying authorization framework.
The Working Group did consider authorization use cases during the creation of this specification and is pursuing that work as an architectural layer built on top of this specification.
The data model as described in Sections 3. Core Data Model , 4. Basic Concepts , and 5. Advanced Concepts is the canonical structural representation of a verifiable credential or verifiable presentation . All serializations are representations of that data model in a specific format. This section specifies how the data model is realized in JSON-LD and plain JSON. Although syntactic mappings are provided for only these two syntaxes, applications and services can use any other data representation syntax (such as XML, YAML, or CBOR) that is capable of expressing the data model. As the verification and validation requirements are defined in terms of the data model, all serialization syntaxes have to be deterministically translated to the data model for processing, validation , or comparison. This specification makes no requirements for support of any specific serialization format.
The expected arity of the property values in this specification, and the resulting datatype which holds those values, can vary depending on the property. If present, the following properties are represented as a single value:
All other properties, if present, are represented as either a single value or an array of values.
The data model, as described in Section 3. Core Data Model , can be encoded in JavaScript Object Notation (JSON) [ RFC8259 ] by mapping property values to JSON types as follows:
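For instance, a minimal credential serialized as plain JSON expresses URIs and date-times as JSON strings, associated resources as JSON objects, and multiple values as JSON arrays (all values illustrative):

```json
{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.edu/credentials/1872",
  "type": ["VerifiableCredential", "AlumniCredential"],
  "issuer": "https://example.edu/issuers/565049",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "alumniOf": "Example University"
  }
}
```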
As the transformations listed herein have potentially incompatible interpretations, additional profiling of the JSON format is required to provide a deterministic transformation to the data model.
[ JSON-LD ] is a JSON-based format used to serialize Linked Data . The syntax is designed to easily integrate into deployed systems already using JSON, and provides a smooth upgrade path from JSON to [ JSON-LD ]. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.
[ JSON-LD ] is useful when extending the data model described in this specification. Instances of the data model are encoded in [ JSON-LD ] in the same way they are encoded in JSON (Section 6.1 JSON ), with the addition of the @context property . The JSON-LD context is described in detail in the [ JSON-LD ] specification and its use is elaborated on in Section 5.3 Extensibility .
Multiple contexts MAY be used or combined to express any arbitrary information about verifiable credentials in idiomatic JSON. The JSON-LD context , available at https://www.w3.org/2018/credentials/v1 , is a static document that is never updated and can therefore be downloaded and cached client side. The associated vocabulary document for the Verifiable Credentials Data Model is available at https://www.w3.org/2018/credentials .
In general, the data model and syntaxes described in this document are designed such that developers can copy and paste examples to incorporate verifiable credentials into their software systems. The design goal of this approach is to provide a low barrier to entry while still ensuring global interoperability between a heterogeneous set of software systems. This section describes some of these approaches, which will likely go unnoticed by most developers, but whose details will be of interest to implementers. The most noteworthy syntactic sugars provided by [ JSON-LD ] are:
The data model described in this specification is designed to be proof format agnostic. This specification does not normatively require any particular digital proof or signature format. While the data model is the canonical representation of a credential or presentation , the proofing mechanisms for these are often tied to the syntax used in the transmission of the document between parties. As such, each proofing mechanism has to specify whether the verification of the proof is calculated against the state of the document as transmitted, against the possibly transformed data model, or against another form. At the time of publication, at least two proof formats are being actively utilized by implementers and the Working Group felt that documenting what these proof formats are and how they are being used would be beneficial to implementers. The sections detailing the current proof formats being actively utilized to issue verifiable credentials are:
JSON Web Token (JWT) [ RFC7519 ] is still a widely used means to express claims to be transferred between two parties. Providing a representation of the Verifiable Credentials Data Model for JWT allows existing systems and libraries to participate in the ecosystem described in Section 1.2 Ecosystem Overview . A JWT encodes a set of claims as a JSON object that is contained in a JSON Web Signature (JWS) [ RFC7515 ] or JWE [ RFC7516 ]. For this specification, the use of JWE is out of scope.
This specification defines encoding rules of the Verifiable Credentials Data Model onto JWT and JWS. It further defines processing rules for how and when to make use of specific JWT-registered claim names and specific JWS-registered header parameter names so that systems based on JWT can comply with this specification. If these specific claim names and header parameters are present, their respective counterparts in the standard verifiable credential and verifiable presentation MAY be omitted to avoid duplication.
This specification introduces two new registered claim names, which contain those parts of the standard verifiable credentials and verifiable presentations where no explicit encoding rules for JWT exist. These objects are enclosed in the JWT payload as follows:
JWT encoding.
To encode a verifiable credential as a JWT, specific properties introduced by this specification MUST be either:
If no explicit rule is specified, properties are encoded in the same way as with a standard credential , and are added to the vc claim of the JWT. As with all JWTs, the JWS-based signature of a verifiable credential represented in the JWT syntax is calculated against the literal JWT string value as presented across the wire, before any decoding or transformation rules are applied. The following paragraphs describe these encoding rules.
If a JWS is present, the digital signature refers either to the issuer of the verifiable credential , or in the case of a verifiable presentation , to the holder of the verifiable credential . The JWS proves that the iss of the JWT signed the contained JWT payload and therefore, the proof property can be omitted.
If no JWS is present, a proof property MUST be provided. The proof property can be used to represent a more complex proof, as may be necessary if the creator is different from the issuer , or a proof not based on digital signatures, such as Proof of Work. The issuer MAY include both a JWS and a proof property . For backward compatibility reasons, the issuer MUST use JWS to represent proofs based on a digital signature.
The following rules apply to JOSE headers in the context of this specification:
For backward compatibility with JWT processors, the following registered JWT claim names MUST be used, instead of or in addition to, their respective standard verifiable credential counterparts:
In bearer credentials and presentations , sub will not be present.
Other JOSE header parameters and JWT claim names not specified herein can be used if their use is not explicitly discouraged. Additional verifiable credential claims MUST be added to the credentialSubject property of the JWT.
For more information about using JOSE header parameters and/or JWT claim names not specified herein, see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
This version of the specification defines no JWT-specific encoding rules for the concepts outlined in Section Advanced Concepts (for example, refreshService , termsOfUse , and evidence ). These concepts can be encoded as they are without any transformation, and can be added to the vc JWT claim .
Implementers are warned that JWTs are not capable of encoding multiple subjects and are thus not capable of encoding a verifiable credential with more than one subject . JWTs might support multiple subjects in the future and implementers are advised to refer to the JSON Web Token Claim Registry for multi-subject JWT claim names or the Nested JSON Web Token specification.
To decode a JWT to a standard credential or presentation , the following transformation MUST be performed:
To transform the JWT specific headers and claims , the following MUST be done:
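A decoded JWT payload for a verifiable credential might look like the following sketch; the JOSE header, not shown, would carry alg and a kid identifying the verification key. All identifiers and values are illustrative:

```json
{
  "sub": "did:example:ebfeb1f712ebc6f1c276e12ec21",
  "jti": "http://example.edu/credentials/3732",
  "iss": "https://example.edu/issuers/14",
  "nbf": 1541493724,
  "iat": 1541493724,
  "exp": 1573029723,
  "nonce": "660!6345FSer",
  "vc": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "credentialSubject": {
      "degree": {
        "type": "BachelorDegree",
        "name": "Bachelor of Science and Arts"
      }
    }
  }
}
```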
In the example above, the verifiable credential uses a proof based on JWS digital signatures, and the corresponding verification key can be obtained using the kid header parameter.
In the example above, vc does not contain the id property because the JWT encoding uses the jti attribute to represent a unique identifier. The sub attribute encodes the information represented by the id property of credentialSubject . The nonce has been added to stop a replay attack.
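A decoded JWT payload for a verifiable presentation might look like the following sketch; again the JOSE header, not shown, would carry alg and kid. The enclosed credential string is a truncated placeholder for a JWT in compact serialization, and all other values are illustrative:

```json
{
  "iss": "did:example:ebfeb1f712ebc6f1c276e12ec21",
  "jti": "urn:uuid:3978344f-8596-4c3a-a978-8fcaba3903c5",
  "aud": "did:example:4a57546973436f6f6c4a4a57573",
  "nbf": 1541493724,
  "iat": 1541493724,
  "exp": 1573029723,
  "nonce": "343s$FSFDa-",
  "vp": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "type": ["VerifiablePresentation"],
    "verifiableCredential": ["eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9..."]
  }
}
```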
In the example above, the verifiable presentation uses a proof based on JWS digital signatures, and the corresponding verification key can be obtained using the kid header parameter.
In the example above, vp does not contain the id property because the JWT encoding uses the jti attribute to represent a unique identifier. verifiableCredential contains a string array of verifiable credentials using JWT compact serialization. The nonce has been added to stop a replay attack.
This specification utilizes Linked Data to publish information on the Web using standards, such as URLs and JSON-LD, to identify subjects and their associated properties. When information is presented in this manner, other related information can be easily discovered and new information can be easily merged into the existing graph of knowledge. Linked Data is extensible in a decentralized way, greatly reducing barriers to large scale integration. The data model in this specification works well with Data Integrity and the associated Linked Data Cryptographic Suites which are designed to protect the data model as described by this specification.
Unlike the use of JSON Web Token, no extra pre- or post-processing is necessary. The Data Integrity Proofs format was designed to simply and easily protect verifiable credentials and verifiable presentations . Protecting a verifiable credential or verifiable presentation is as simple as passing a valid example in this specification to a Linked Data Signatures implementation and generating a digital signature.
For more information about the different qualities of the various syntax formats (for example, JSON+JWT, JSON-LD+JWT, or JSON-LD+LD-Proofs), see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
This section details the general privacy considerations and specific privacy implications of deploying the Verifiable Credentials Data Model into production environments.
It is important to recognize there is a spectrum of privacy ranging from pseudonymous to strongly identified. Depending on the use case, people have different comfort levels about what information they are willing to provide and what information can be derived from what is provided.
For example, most people probably want to remain anonymous when purchasing alcohol because the regulatory check required is solely based on whether a person is above a specific age. Alternatively, for medical prescriptions written by a doctor for a patient, the pharmacy fulfilling the prescription is required to more strongly identify the medical professional and the patient. Therefore there is not one approach to privacy that works for all use cases. Privacy solutions are use case specific.
Even for those wanting to remain anonymous when purchasing alcohol, photo identification might still be required to provide appropriate assurance to the merchant. The merchant might not need to know your name or other details (other than that you are over a specific age), but in many cases just proof of age might still be insufficient to meet regulations.
The Verifiable Credentials Data Model strives to support the full privacy spectrum and does not take philosophical positions on the correct level of anonymity for any specific transaction. The following sections provide guidance for implementers who want to avoid specific scenarios that are hostile to privacy.
Data associated with verifiable credentials stored in the credential.credentialSubject field is susceptible to privacy violations when shared with verifiers . Personally identifying data, such as a government-issued identifier, shipping address, and full name, can be easily used to determine, track, and correlate an entity . Even information that does not seem personally identifiable, such as the combination of a birthdate and a postal code, has very powerful correlation and de-anonymizing capabilities.
Implementers are strongly advised to warn holders when they share data with these kinds of characteristics. Issuers are strongly advised to provide privacy-protecting verifiable credentials when possible. For example, an issuer can provide ageOver verifiable credentials instead of date-of-birth verifiable credentials when a verifier only needs to determine whether an entity is over the age of 18.
Because a verifiable credential often contains personally identifiable information (PII), implementers are strongly advised to use mechanisms while storing and transporting verifiable credentials that protect the data from those who should not access it. Mechanisms that could be considered include Transport Layer Security (TLS) or other means of encrypting the data while in transit, as well as encryption or data access control mechanisms to protect the data in a verifiable credential while at rest.
Subjects of verifiable credentials are identified using the credential.credentialSubject.id field. The identifiers used to identify a subject create a greater risk of correlation when the identifiers are long-lived or used across more than one web domain.
Similarly, disclosing the credential identifier ( credential.id ) leads to situations where multiple verifiers , or an issuer and a verifier , can collude to correlate the holder . If holders want to reduce correlation, they should use verifiable credential schemes that allow hiding the identifier during verifiable presentation . Such schemes expect the holder to generate the identifier and might even allow hiding the identifier from the issuer , while still keeping the identifier embedded and signed in the verifiable credential .
If strong anti-correlation properties are a requirement in a verifiable credentials system, it is strongly advised that identifiers are either:
The contents of verifiable credentials are secured using the credential.proof field. The properties in this field create a greater risk of correlation when the same, unchanging values are reused across more than one session or domain. Examples include the verificationMethod , created , proofPurpose , and jws fields.
If strong anti-correlation properties are required, it is advised that signature values and metadata are regenerated each time using technologies like third-party pairwise signatures, zero-knowledge proofs, or group signatures.
Even when using anti-correlation signatures, information might still be contained in a verifiable credential that defeats the anti-correlation properties of the cryptography used.
Verifiable credentials might contain long-lived identifiers that could be used to correlate individuals. These types of identifiers include subject identifiers, email addresses, government-issued identifiers, organization-issued identifiers, addresses, healthcare vitals, verifiable credential -specific JSON-LD contexts, and many other sorts of long-lived identifiers.
Organizations providing software to holders should strive to identify fields in verifiable credentials containing information that could be used to correlate individuals and warn holders when this information is shared.
There are mechanisms external to verifiable credentials that are used to track and correlate individuals on the Internet and the Web. Some of these mechanisms include Internet protocol (IP) address tracking, web browser fingerprinting, evercookies, advertising network trackers, mobile network position information, and in-application Global Positioning System (GPS) APIs. Using verifiable credentials cannot prevent the use of these other tracking technologies. Also, when these technologies are used in conjunction with verifiable credentials , new correlatable information could be discovered. For example, a birthday coupled with a GPS position can be used to strongly correlate an individual across multiple websites.
It is recommended that privacy-respecting systems prevent the use of these other tracking technologies when verifiable credentials are being used. In some cases, tracking technologies might need to be disabled on devices that transmit verifiable credentials on behalf of a holder .
To enable recipients of verifiable credentials to use them in a variety of circumstances without revealing more PII than necessary for transactions, issuers should consider limiting the information published in a credential to a minimal set needed for the expected purposes. One way to avoid placing PII in a credential is to use an abstract property that meets the needs of verifiers without providing specific information about a subject .
For example, this document uses the ageOver property instead of a specific birthdate, which constitutes much stronger PII. If retailers in a specific market commonly require purchasers to be older than a certain age, an issuer trusted in that market might choose to offer a verifiable credential claiming that subjects have met that requirement instead of offering verifiable credentials containing claims about specific birthdates. This enables individual customers to make purchases without revealing specific PII.
Privacy violations occur when information divulged in one context leaks into another. Accepted best practice for preventing such violations is to limit the information requested, and received, to the absolute minimum necessary. This data minimization approach is required by regulation in multiple jurisdictions, including the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the European Union.
With verifiable credentials , data minimization for issuers means limiting the content of a verifiable credential to the minimum required by potential verifiers for expected use. For verifiers , data minimization means limiting the scope of the information requested or required for accessing services.
For example, a driver's license containing a driver's ID number, height, weight, birthday, and home address is a credential containing more information than is necessary to establish that the person is above a certain age.
It is considered best practice for issuers to atomize information or use a signature scheme that allows for selective disclosure . For example, an issuer of driver's licenses could issue a verifiable credential containing every attribute that appears on a driver's license, as well as a set of verifiable credentials where every verifiable credential contains only a single attribute, such as a person's birthday. It could also issue more abstract verifiable credentials (for example, a verifiable credential containing only an ageOver attribute). One possible adaptation would be for issuers to provide secure HTTP endpoints for retrieving single-use bearer credentials that promote the pseudonymous usage of verifiable credentials . Implementers that find this impractical or unsafe should consider using selective disclosure schemes that eliminate dependence on issuers at proving time and reduce temporal correlation risk from issuers .
Verifiers are urged to only request information that is absolutely necessary for a specific transaction to occur. This is important for at least two reasons. It:
While it is possible to practice the principle of minimum disclosure, it might be impossible to avoid the strong identification of an individual for specific use cases during a single session or over multiple sessions. The authors of this document cannot stress strongly enough how difficult it is to meet this principle in real-world scenarios.
A bearer credential is a privacy-enhancing piece of information, such as a concert ticket, which entitles the holder of the bearer credential to a specific resource without divulging sensitive information about the holder . Bearer credentials are often used in low-risk use cases where the sharing of the bearer credential is not a concern or would not result in large economic or reputational losses.
Verifiable credentials that are bearer credentials are made possible by not specifying the subject identifier, expressed using the id property , which is nested in the credentialSubject property . For example, the following verifiable credential is a bearer credential :
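The example referenced here can be sketched as follows, using a Python dict to stand in for the JSON. The field values are illustrative; the essential feature is that the credentialSubject contains no id property.

```python
# Sketch of a bearer credential: the credentialSubject omits the "id"
# property, so the credential is not bound to a specific subject identifier
# and can be presented by whoever holds it. Values are illustrative.
bearer_credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://www.w3.org/2018/credentials/examples/v1",
    ],
    "id": "http://example.edu/credentials/temporary/28934792387492384",
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "https://example.edu/issuers/14",
    "issuanceDate": "2017-10-22T12:23:48Z",
    "credentialSubject": {
        # No "id" property here: this is what makes it a bearer credential.
        "degree": {
            "type": "BachelorDegree",
            "name": "Bachelor of Science and Arts",
        }
    },
}
```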
While bearer credentials can be privacy-enhancing, they must be carefully crafted so as not to accidentally divulge more information than the holder of the bearer credential expects. For example, repeated use of the same bearer credential across multiple sites enables these sites to potentially collude to unduly track or correlate the holder. Likewise, information that might seem non-identifying, such as a birthdate and postal code, can be used to statistically identify an individual when used together in the same bearer credential or session.
Issuers of bearer credentials should ensure that the bearer credentials provide privacy-enhancing benefits that:
Holders should be warned by their software if bearer credentials containing sensitive information are issued or requested, or if there is a correlation risk when combining two or more bearer credentials across one or more sessions. While it might be impossible to detect all correlation risks, some might certainly be detectable.
Verifiers should not request bearer credentials that can be used to unduly correlate the holder .
When processing verifiable credentials, verifiers are expected to perform many of the checks listed in Appendix A (Validation), as well as a variety of specific business process checks. Validity checks might include checking:
The process of performing these checks might result in information leakage that leads to a privacy violation of the holder . For example, a simple operation such as checking a revocation list can notify the issuer that a specific business is likely interacting with the holder . This could enable issuers to collude and correlate individuals without their knowledge.
Issuers are urged not to use mechanisms during the verification process that could lead to privacy violations, such as credential revocation lists that are unique per credential. Organizations providing software to holders should warn when credentials include information that could lead to privacy violations during the verification process. Verifiers should consider rejecting credentials that produce privacy violations or that enable bad privacy practices.
When a holder receives a verifiable credential from an issuer , the verifiable credential needs to be stored somewhere (for example, in a credential repository). Holders are warned that the information in a verifiable credential is sensitive in nature and highly individualized, making it a high value target for data mining. Services that advertise free storage of verifiable credentials might in fact be mining personal data and selling it to organizations wanting to build individualized profiles on people and organizations.
Holders need to be aware of the terms of service for their credential repository, specifically the correlation and data mining protections in place for those who store their verifiable credentials with the service provider.
Some effective mitigations for data mining and profiling include using:
Holding two pieces of information about the same subject almost always reveals more about the subject than just the sum of the two pieces, even when the information is delivered through different channels. The aggregation of verifiable credentials is a privacy risk and all participants in the ecosystem need to be aware of the risks of data aggregation.
For example, if two bearer credentials , one for an email address and then one stating the holder is over the age of 21, are provided across multiple sessions, the verifier of the information now has a unique identifier as well as age-related information for that individual. It is now easy to create and build a profile for the holder such that more and more information is leaked over time. Aggregation of credentials can also be performed across multiple sites in collusion with each other, leading to privacy violations.
From a technological perspective, preventing aggregation of information is a very difficult privacy problem to address. While new cryptographic techniques, such as zero-knowledge proofs, are being proposed as solutions to the problem of aggregation and correlation, the existence of long-lived identifiers and browser tracking techniques defeats even the most modern cryptographic techniques.
The solution to the privacy implications of correlation or aggregation tends not to be technological in nature, but policy driven instead. Therefore, if a holder does not want information about them to be aggregated, they must express this in the verifiable presentations they transmit.
Despite the best efforts to assure privacy, actually using verifiable credentials can potentially lead to de-anonymization and a loss of privacy. This correlation can occur when:
In part, it is possible to mitigate this de-anonymization and loss of privacy by:
It is understood that these mitigation techniques are not always practical or even compatible with necessary usage. Sometimes correlation is a requirement.
For example, in some prescription drug monitoring programs, usage monitoring is a requirement. Enforcement entities need to be able to confirm that individuals are not cheating the system to get multiple prescriptions for controlled substances. This statutory or regulatory need to correlate usage overrides individual privacy concerns.
Verifiable credentials will also be used to intentionally correlate individuals across services, for example, when using a common persona to log in to multiple services, so all activity on each of those services is intentionally linked to the same individual. This is not a privacy issue as long as each of those services uses the correlation in the expected manner.
Privacy risks of credential usage occur when unintended or unexpected correlation arises from the presentation of credentials .
When a holder chooses to share information with a verifier , it might be the case that the verifier is acting in bad faith and requests information that could be used to harm the holder . For example, a verifier might ask for a bank account number, which could then be used with other information to defraud the holder or the bank.
Issuers should strive to tokenize as much information as possible such that if a holder accidentally transmits credentials to the wrong verifier , the situation is not catastrophic.
For example, instead of including a bank account number for the purpose of checking an individual's bank balance, provide a token that enables the verifier to check if the balance is above a certain amount. In this case, the bank could issue a verifiable credential containing a balance checking token to a holder . The holder would then include the verifiable credential in a verifiable presentation and bind the token to a credit checking agency using a digital signature. The verifier could then wrap the verifiable presentation in their digital signature, and hand it back to the issuer to dynamically check the account balance.
Using this approach, even if a holder shares the account balance token with the wrong party, an attacker cannot discover the bank account number or the exact value in the account, and, given the validity period of the counter-signature, cannot use the token for more than a few minutes.
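The time-bounded counter-signature idea described above can be sketched as follows. This is a simplified illustration, not part of the data model: an HMAC stands in for the digital signatures, and the key, field names, and validity window are assumptions made for the example.

```python
import hashlib
import hmac
import json

VALIDITY_SECONDS = 300  # "a few minutes"

def counter_sign(presentation, verifier_key, now):
    """Verifier wraps the presentation with a timestamped counter-signature."""
    envelope = {"presentation": presentation, "timestamp": now}
    payload = json.dumps(envelope, sort_keys=True).encode()
    envelope["signature"] = hmac.new(verifier_key, payload, hashlib.sha256).hexdigest()
    return envelope

def issuer_accepts(envelope, verifier_key, now):
    """Issuer honors the wrapped token only within the validity window."""
    sig = envelope.pop("signature")
    payload = json.dumps(envelope, sort_keys=True).encode()
    expected = hmac.new(verifier_key, payload, hashlib.sha256).hexdigest()
    envelope["signature"] = sig  # restore the envelope
    fresh = (now - envelope["timestamp"]) <= VALIDITY_SECONDS
    return hmac.compare_digest(sig, expected) and fresh

key = b"shared-verifier-key"  # stand-in for real key material
env = counter_sign({"balanceToken": "token-123"}, key, now=1000)
accepted_fresh = issuer_accepts(env, key, now=1100)         # within the window
accepted_stale = issuer_accepts(env, key, now=1000 + 3600)  # an hour later
```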
As detailed in Section 7.13 Usage Patterns, usage patterns can be correlated into certain types of behavior. Part of this correlation is mitigated when a holder uses a verifiable credential without the knowledge of the issuer. Issuers can, however, defeat this protection by making their verifiable credentials short-lived and renewal automatic.
For example, an ageOver verifiable credential is useful for gaining access to a bar. If an issuer issues such a verifiable credential with a very short expiration date and an automatic renewal mechanism, then the issuer could possibly correlate the behavior of the holder in a way that negatively impacts the holder .
Organizations providing software to holders should warn them if they repeatedly use credentials with short lifespans, which could result in behavior correlation. Issuers should avoid issuing credentials in a way that enables them to correlate usage patterns.
An ideal privacy-respecting system would require only the information necessary for interaction with the verifier to be disclosed by the holder . The verifier would then record that the disclosure requirement was met and forget any sensitive information that was disclosed. In many cases, competing priorities, such as regulatory burden, prevent this ideal system from being employed. In other cases, long-lived identifiers prevent single use. The design of any verifiable credentials ecosystem, however, should strive to be as privacy-respecting as possible by preferring single-use verifiable credentials whenever possible.
Using single-use verifiable credentials provides several benefits. The first benefit is to verifiers who can be sure that the data in a verifiable credential is fresh. The second benefit is to holders , who know that if there are no long-lived identifiers in the verifiable credential , the verifiable credential itself cannot be used to track or correlate them online. Finally, there is nothing for attackers to steal, making the entire ecosystem safer to operate within.
In an ideal private browsing scenario, no PII will be revealed. Because many credentials include PII, organizations providing software to holders should warn them about the possibility of revealing this information if they wish to use credentials and presentations while in private browsing mode. As each browser vendor handles private browsing differently, and some browsers might not have this feature at all, it is important for implementers to be aware of these differences and implement solutions accordingly.
It cannot be overstated that verifiable credentials rely on a high degree of trust in issuers . The degree to which a holder might take advantage of possible privacy protections often depends strongly on the support an issuer provides for such features. In many cases, privacy protections which make use of zero-knowledge proofs, data minimization techniques, bearer credentials, abstract claims, and protections against signature-based correlation, require the issuer to actively support such capabilities and incorporate them into the verifiable credentials they issue.
It should also be noted that, in addition to a reliance on issuer participation to provide verifiable credential capabilities that help preserve holder and subject privacy, holders rely on issuers to not deliberately subvert privacy protections. For example, an issuer might sign verifiable credentials using a signature scheme that protects against signature-based correlation. This would protect the holder from being correlated by the signature value as it is shared among verifiers . However, if the issuer creates a unique key for each issued credential , it might be possible for the issuer to track presentations of the credential , regardless of a verifier 's inability to do so.
There are a number of security considerations that issuers , holders , and verifiers should be aware of when processing data described by this specification. Ignoring or not understanding the implications of this section can result in security vulnerabilities.
While this section attempts to highlight a broad set of security considerations, it is not a complete list. Implementers are urged to seek the advice of security and cryptography professionals when implementing mission critical systems using the technology outlined in this specification.
Some aspects of the data model described in this specification can be protected through the use of cryptography. It is important for implementers to understand the cryptography suites and libraries used to create and process credentials and presentations . Implementing and auditing cryptography systems generally requires substantial experience. Effective red teaming can also help remove bias from security reviews.
Cryptography suites and libraries have a shelf life and eventually fall to new attacks and technology advances. Production quality systems need to take this into account and ensure mechanisms exist to easily and proactively upgrade expired or broken cryptography suites and libraries, and to invalidate and replace existing credentials . Regular monitoring is important to ensure the long term viability of systems processing credentials .
Verifiable credentials often contain URLs to data that resides outside of the verifiable credential itself. Linked content that exists outside a verifiable credential , such as images, JSON-LD Contexts, and other machine-readable data, are often not protected against tampering because the data resides outside of the protection of the proof on the verifiable credential . For example, the following highlighted links are not content-integrity protected but probably should be:
While this specification does not recommend any specific content integrity protection, document authors who want to ensure links to content are integrity protected are advised to use URL schemes that enforce content integrity. Two such schemes are the [ HASHLINK ] specification and [ IPFS ]. The example below transforms the previous example and adds content integrity protection to the JSON-LD Contexts using the [ HASHLINK ] specification, and content integrity protection to the image by using an [ IPFS ] link.
It is debatable whether the JSON-LD Contexts above need protection because production implementations are expected to ship with static copies of important JSON-LD Contexts.
While the example above is one way to achieve content integrity protection, there are other solutions that might be better suited for certain applications. Implementers are urged to understand how links to external machine-readable content that are not content-integrity protected could result in successful attacks against their applications.
This specification allows credentials to be produced that do not contain signatures or proofs of any kind. These types of credentials are often useful for intermediate storage, or self-asserted information, which is analogous to filling out a form on a web page. Implementers should be aware that these types of credentials are not verifiable because the authorship either is not known or cannot be trusted.
A verifier might need to ensure it is the intended recipient of a verifiable presentation and not the target of a man-in-the-middle attack . Approaches such as token binding [ RFC8471 ], which ties the request for a verifiable presentation to the response, can secure the protocol. Any unsecured protocol is susceptible to man-in-the-middle attacks.
It is considered best practice for issuers to atomize information in a credential , or use a signature scheme that allows for selective disclosure. In the case of atomization, if it is not done securely by the issuer , the holder might bundle together different credentials in a way that was not intended by the issuer .
For example, a university might issue two verifiable credentials to a person, each containing two properties , which must be taken together to designate the "role" of that person in a given "department", such as "Staff Member" in the "Department of Computing", or "Post Graduate Student" in the "Department of Economics". If these verifiable credentials are atomized to put only one of these properties into each credential , then the university would issue four credentials to the person, each containing one of the following designations: "Staff Member", "Post Graduate Student", "Department of Computing", and "Department of Economics". The holder might then transfer the "Staff Member" and "Department of Economics" verifiable credentials to a verifier , which together would comprise a false claim .
When verifiable credentials are issued for highly dynamic information, implementers should ensure the expiration times are set appropriately. Expiration periods longer than the timeframe where the verifiable credential is valid might create exploitable security vulnerabilities. Expiration periods shorter than the timeframe where the information expressed by the verifiable credential is valid creates a burden on holders and verifiers . It is therefore important to set validity periods for verifiable credentials that are appropriate to the use case and the expected lifetime for the information contained in the verifiable credential .
When verifiable credentials are stored on a device and that device is lost or stolen, it might be possible for an attacker to gain access to systems using the victim's verifiable credentials . Ways to mitigate this type of attack include:
There are a number of accessibility considerations implementers should be aware of when processing data described in this specification. As with implementation of any web standard or protocol, ignoring accessibility issues makes this information unusable by a large subset of the population. It is important to follow accessibility guidelines and standards, such as [ WCAG21 ], to ensure that all people, regardless of ability, can make use of this data. This is especially important when establishing systems utilizing cryptography, which have historically created problems for assistive technologies.
This section details the general accessibility considerations to take into account when utilizing this data model.
Many physical credentials in use today, such as government identification cards, have poor accessibility characteristics, including, but not limited to, small print, reliance on small and high-resolution images, and no affordances for people with vision impairments.
When utilizing this data model to create verifiable credentials , it is suggested that data model designers use a data first approach. For example, given the choice of using data or a graphical image to depict a credential , designers should express every element of the image, such as the name of an institution or the professional credential , in a machine-readable way instead of relying on a viewer's interpretation of the image to convey this information. Using a data first approach is preferred because it provides the foundational elements of building different interfaces for people with varying abilities.
Implementers are advised to be aware of a number of internationalization considerations when publishing data described in this specification. As with any web standards or protocols implementation, ignoring internationalization makes it difficult for data to be produced and consumed across a disparate set of languages and societies, which limits the applicability of the specification and significantly diminishes its value as a standard.
Implementers are strongly advised to read the Strings on the Web: Language and Direction Metadata document [ STRING-META ], published by the W3C Internationalization Activity, which elaborates on the need to provide reliable metadata about text to support internationalization. For the latest information on internationalization considerations, implementers are also urged to read the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
This section outlines general internationalization considerations to take into account when utilizing this data model and is intended to highlight specific parts of the Strings on the Web: Language and Direction Metadata document [ STRING-META ] that implementers might be interested in reading.
Data publishers are strongly encouraged to read the section on Cross-Syntax Expression in the Strings on the Web: Language and Direction Metadata document [ STRING-META ] to ensure that the expression of language and base direction information is possible across multiple expression syntaxes, such as [ JSON-LD ], [ JSON ], and CBOR [ RFC7049 ].
The general design pattern is to use the following markup template when expressing a text string that is tagged with a language and, optionally, a specific base direction.
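A non-normative sketch of this design pattern, using a Python dict to stand in for the JSON structure. The property name title is an illustrative placeholder; the keyword names come from the surrounding description.

```python
# Template: a natural language string tagged with a language and, optionally,
# a base direction. The placeholder values describe what goes in each slot.
internationalized_value = {
    "title": {
        "@value": "<the natural language string>",
        "@language": "<BCP 47 language tag, e.g. 'en'>",
        "@direction": "<optional base direction: 'ltr' or 'rtl'>",
    }
}
```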
Using the design pattern above, the following example expresses the title of a book in the English language without specifying a text direction.
The next example uses a similar title expressed in the Arabic language with a base direction of right-to-left.
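The two book-title examples described above might look like the following sketch, with Python dicts standing in for the JSON structures. The titles themselves are illustrative.

```python
# English title: only the language is tagged, no base direction.
english_title = {
    "title": {
        "@value": "HTML and CSS: Designing and Creating Web Sites",
        "@language": "en",
    }
}

# Arabic title: language tagged, with an explicit right-to-left base direction.
arabic_title = {
    "title": {
        "@value": "HTML و CSS: تصميم و إنشاء مواقع الويب",
        "@language": "ar",
        "@direction": "rtl",
    }
}
```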
The text above would most likely be rendered incorrectly as left-to-right without the explicit expression of language and direction because many systems use the first character of a text string to determine text direction.
Implementers utilizing JSON-LD are strongly urged to extend the JSON-LD Context defining the internationalized property and use the Scoped Context feature of JSON-LD to alias the @value , @language , and @direction keywords to value , lang , and dir , respectively. An example of a JSON-LD Context snippet doing this is shown below.
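A sketch of such a JSON-LD Context snippet, expressed as a Python dict. The term name title and the vocabulary URL are illustrative assumptions; the aliasing of the @value, @language, and @direction keywords via a Scoped Context is the point being shown.

```python
# Scoped Context on a hypothetical "title" term, aliasing JSON-LD keywords
# so that documents can use the friendlier names value, lang, and dir.
context_snippet = {
    "@context": {
        "@version": 1.1,
        "title": {
            "@id": "https://example.com/vocab#title",
            "@context": {  # scoped: these aliases apply only inside "title"
                "value": "@value",
                "lang": "@language",
                "dir": "@direction",
            },
        },
    }
}
```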
When multiple languages, base directions, and annotations are used in a single natural language string, more complex mechanisms are typically required. It is possible to use markup languages, such as HTML, to encode text with multiple languages and base directions. It is also possible to use the rdf:HTML datatype to encode such values accurately in JSON-LD.
Despite the ability to encode information as HTML, implementers are strongly discouraged from doing this because it:
If implementers feel they must use HTML, or other markup languages capable of containing executable scripts, to address a specific use case, they are advised to analyze how an attacker would use the markup to mount injection attacks against a consumer of the markup and then deploy mitigations against the identified attacks.
While this specification does not provide conformance criteria for the process of the validation of verifiable credentials or verifiable presentations , readers might be curious about how the information in this data model is expected to be utilized by verifiers during the process of validation . This section captures a selection of conversations held by the Working Group related to the expected usage of the data fields in this specification by verifiers .
In the verifiable credentials presented by a holder , the value associated with the id property for each credentialSubject is expected to identify a subject to the verifier . If the holder is also the subject , then the verifier could authenticate the holder if they have public key metadata related to the holder . The verifier could then authenticate the holder using a signature generated by the holder contained in the verifiable presentation . The id property is optional. Verifiers could use other properties in a verifiable credential to uniquely identify a subject .
For information on how authentication and WebAuthn might work with verifiable credentials , see the Verifiable Credentials Implementation Guidelines [ VC-IMP-GUIDE ] document.
The value associated with the issuer property is expected to identify an issuer that is known to and trusted by the verifier .
Relevant metadata about the issuer property is expected to be available to the verifier . For example, an issuer can publish information containing the public keys it uses to digitally sign verifiable credentials that it issued. This metadata is relevant when checking the proofs on the verifiable credentials .
The issuanceDate is expected to be within an expected range for the verifier . For example, a verifier can check that the issuance date of a verifiable credential is not in the future.
The cryptographic mechanism used to prove that the information in a verifiable credential or verifiable presentation was not tampered with is called a proof . There are many types of cryptographic proofs including, but not limited to, digital signatures, zero-knowledge proofs, Proofs of Work, and Proofs of Stake. In general, when verifying proofs, implementations are expected to ensure:
Some proofs are digital signatures. In general, when verifying digital signatures, implementations are expected to ensure:
The digital signature provides a number of protections, other than tamper resistance, which are not immediately obvious. For example, the created property of a Linked Data Signature establishes a date and time before which the credential should not be considered verified . The verificationMethod property specifies, for example, the public key that can be used to verify the digital signature. Dereferencing a public key URL reveals information about the controller of the key, which can be checked against the issuer of the credential . The proofPurpose property clearly expresses the purpose for the proof and ensures this information is protected by the signature. A proof is typically attached to a verifiable presentation for authentication purposes and to a verifiable credential as a method of assertion.
The expirationDate is expected to be within an expected range for the verifier . For example, a verifier can check that the expiration date of a verifiable credential is not in the past.
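The issuanceDate and expirationDate range checks described above can be sketched as follows. The date handling is simplified for illustration (UTC timestamps in a single fixed format); the acceptance policy is an assumption, not a normative rule.

```python
from datetime import datetime, timezone

def within_validity_window(credential, now=None):
    """Reject credentials issued in the future or already expired."""
    now = now or datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    issued = datetime.strptime(credential["issuanceDate"], fmt).replace(
        tzinfo=timezone.utc
    )
    if issued > now:
        return False  # issuance date in the future: reject
    expires = credential.get("expirationDate")
    if expires is not None:
        expiry = datetime.strptime(expires, fmt).replace(tzinfo=timezone.utc)
        if expiry < now:
            return False  # already expired: reject
    return True

vc = {
    "issuanceDate": "2017-10-22T12:23:48Z",
    "expirationDate": "2029-01-01T00:00:00Z",
}
check_now = datetime(2020, 6, 1, tzinfo=timezone.utc)
```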
If the credentialStatus property is available, the status of a verifiable credential is expected to be evaluated by the verifier according to the credentialStatus type definition for the verifiable credential and the verifier's own status evaluation criteria. For example, a verifier can ensure the status of the verifiable credential is not "withdrawn for cause by the issuer ".
Fitness for purpose is about whether the custom properties in the verifiable credential are appropriate for the verifier's purpose. For example, if a verifier needs to determine whether a subject is older than 21 years of age, they might rely on a specific birthdate property , or on more abstract properties , such as ageOver .
The issuer is trusted by the verifier to make the claims at hand. For example, a franchised fast food restaurant location trusts the discount coupon claims made by the corporate headquarters of the franchise. Policy information expressed by the issuer in the verifiable credential should be respected by holders and verifiers unless they accept the liability of ignoring the policy.
B.1 Base Context
The base context, located at https://www.w3.org/2018/credentials/v1 with a SHA-256 digest of ab4ddd9a531758807a79a5b450510d61ae8d147eab966cc9a200c07095b0cdcc , can be used to implement a local cached copy. For convenience, the base context is also provided in this section.
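A local cached copy can be checked against the digest published above before use. A minimal sketch; the file path is an assumption for illustration.

```python
import hashlib

# SHA-256 digest of the base context, as published above.
EXPECTED_SHA256 = "ab4ddd9a531758807a79a5b450510d61ae8d147eab966cc9a200c07095b0cdcc"

def cached_context_is_intact(raw_bytes, expected=EXPECTED_SHA256):
    """Return True if the cached bytes match the expected SHA-256 digest."""
    return hashlib.sha256(raw_bytes).hexdigest() == expected

# Hypothetical usage against a local cache of the base context:
# with open("credentials-v1.jsonld", "rb") as f:
#     assert cached_context_is_intact(f.read())
```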
The verifiable credential and verifiable presentation data models leverage a variety of underlying technologies including [ JSON-LD ] and [ JSON-SCHEMA-2018 ]. This section will provide a comparison of the @context , type , and credentialSchema properties, and cover some of the more specific use cases where it is possible to use these features of the data model.
The type property is used to uniquely identify the type of the verifiable credential in which it appears, i.e., to indicate which set of claims the verifiable credential contains. This property, and the value VerifiableCredential within the set of its values, are mandatory. Whilst it is good practice to include one additional value depicting the unique subtype of this verifiable credential , it is permitted to either omit or include additional type values in the array. Because many verifiers will request a verifiable credential of a specific subtype, omitting the subtype value could make it more difficult for verifiers to inform the holder which verifiable credential they require. When a verifiable credential has multiple subtypes, listing all of them in the type property is sensible. While the semantics are the same in both a [ JSON ] and [ JSON-LD ] representation, the usage of the type property in a [ JSON-LD ] representation of a verifiable credential is able to enforce the semantics of the verifiable credential better than a [ JSON ] representation of the same credential because the machine is able to check the semantics. With [ JSON-LD ], the technology is not only describing the categorization of the set of claims, the technology is also conveying the structure and semantics of the sub-graph of the properties in the graph. In [ JSON-LD ], this represents the type of the node in the graph, which is why some [ JSON-LD ] representations of a verifiable credential will use the type property on many objects in the verifiable credential .
The primary purpose of the @context property, from a [ JSON-LD ] perspective, is to convey the meaning of the data and term definitions of the data in a verifiable credential , in a machine readable way. When encoding a pure [ JSON ] representation, the @context property remains mandatory and provides some basic support for global semantics. The @context property is used to map the globally unique URIs for properties in verifiable credentials and verifiable presentations into short-form alias names, making both the [ JSON ] and [ JSON-LD ] representations more human-friendly to read. From a [ JSON-LD ] perspective, this mapping also allows the data in a credential to be modeled in a network of machine-readable data, by enhancing how the data in the verifiable credential or verifiable presentation relates to a larger machine-readable data graph. This is useful for telling machines how to relate the meaning of data to other data in an ecosystem where parties are unable to coordinate. This property, with the first value in the set being https://www.w3.org/2018/credentials/v1 , is mandatory.
Since the @context property is used to map data to a graph data model, and the type property in [ JSON-LD ] is used to describe nodes within the graph, the type property becomes even more important when using the two properties in combination. For example, if the type property is not included within the resolved @context resource using [ JSON-LD ], it could lead to claims being dropped and/or their integrity no longer being protected during production and consumption of the verifiable credential . Alternatively, it could lead to errors being raised during production or consumption of a verifiable credential . This will depend on the design choices of the implementation and both paths are used in implementations today, so it's important to pay attention to these properties when using a [ JSON-LD ] representation of a verifiable credential or verifiable presentation .
The primary purpose of the credentialSchema property is to define the structure of the verifiable credential , and the datatypes for the values of each property that appears. A credentialSchema is useful for defining the contents and structure of a set of claims in a verifiable credential , whereas [ JSON-LD ] and a @context in a verifiable credential are best used for conveying the semantics and term definitions of the data, although they can also be used to define the structure of the verifiable credential .
While it is possible to use some [ JSON-LD ] features to allude to the contents of the verifiable credential , it is not generally suggested to use @context to constrain the data types of the data model. For example, "@type": "@json" is useful for leaving the semantics open-ended and not strictly defined. This can be dangerous if the implementer is looking to constrain the data types of the claims in the credential , and it is expected not to be used for that purpose.
When the credentialSchema and @context properties are used in combination, both producers and consumers can be more confident about the expected contents and data types of the verifiable credential and verifiable presentation .
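As a sketch of that combination (illustrative only): the credential below points at a schema via credentialSchema, and a toy validate() checks the subject against a stand-in schema. Real deployments would reference a published schema document and validate with a proper JSON Schema library; the schema dict, example URL, and helper name here are our assumptions.

```python
# Toy structural check against a stand-in schema.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "credentialSchema": {
        "id": "https://example.org/schemas/degree.json",  # illustrative URL
        "type": "JsonSchemaValidator2018",
    },
    "credentialSubject": {"id": "did:example:123", "degree": "BSc"},
}

# Stand-in schema: required subject properties and their expected types.
schema = {"id": str, "degree": str}

def validate(subject: dict, schema: dict) -> bool:
    """True when every schema property is present with the expected type."""
    return all(isinstance(subject.get(k), t) for k, t in schema.items())

print(validate(credential["credentialSubject"], schema))  # True
```

The division of labor matches the prose above: @context carries semantics, while the schema referenced by credentialSchema carries structure and datatypes.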
This section describes possible relationships between a subject and a holder and how the Verifiable Credentials Data Model expresses these relationships. The following diagram illustrates these relationships, with the subsequent sections describing how each of these relationships is handled in the data model.
The most common relationship is when a subject is the holder . In this case, a verifier can easily deduce that a subject is the holder if the verifiable presentation is digitally signed by the holder and all contained verifiable credentials are about a subject that can be identified to be the same as the holder .
If only the credentialSubject is allowed to insert a verifiable credential into a verifiable presentation , the issuer can insert the nonTransferable property into the verifiable credential , as described below.
The nonTransferable property indicates that a verifiable credential must only be encapsulated into a verifiable presentation whose proof was issued by the credentialSubject . A verifiable presentation that contains a verifiable credential containing the nonTransferable property , whose proof creator is not the credentialSubject , is invalid.
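The validity rule can be sketched as a check a verifier might run. The proof-creator comparison below is deliberately simplified (a string prefix match between the proof's verification method and the subject identifier); real implementations resolve the verification method and check key control, so treat this as an illustration of the rule, not of proof verification.

```python
# A presentation containing a nonTransferable credential is invalid
# unless the presentation's proof was created by that credential's
# subject. Field names follow the data model; the creator check is a
# simplified stand-in for real proof verification.
def presentation_is_valid(presentation: dict) -> bool:
    creator = presentation["proof"]["verificationMethod"]
    for vc in presentation.get("verifiableCredential", []):
        if vc.get("nonTransferable"):
            subject = vc["credentialSubject"]["id"]
            if not creator.startswith(subject):
                return False
    return True

presentation = {
    "proof": {"verificationMethod": "did:example:holder#key-1"},
    "verifiableCredential": [{
        "nonTransferable": True,
        "credentialSubject": {"id": "did:example:holder"},
    }],
}
print(presentation_is_valid(presentation))  # True
```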
In this case, the credentialSubject property might contain multiple properties , each providing an aspect of a description of the subject , which combine together to unambiguously identify the subject . Some use cases might not require the holder to be identified at all, such as checking to see if a doctor (the subject ) is board-certified. Other use cases might require the verifier to use out-of-band knowledge to determine the relationship between the subject and the holder .
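Such a credentialSubject might look like the following sketch; all property names and values are illustrative, not mandated by this specification.

```python
# Illustrative credentialSubject identifying the subject by a combination
# of properties rather than by a single identifier.
credential_subject = {
    "name": "Pat Doe",
    "address": "123 Main St, Anytown",
    "birthDate": "1970-01-01",
}
print(sorted(credential_subject))  # ['address', 'birthDate', 'name']
```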
The example above uniquely identifies the subject using the name, address, and birthdate of the individual.
Usually verifiable credentials are presented to verifiers by the subject . However, in some cases, the subject might need to pass the whole or part of a verifiable credential to another holder . For example, if a patient (the subject ) is too ill to take a prescription (the verifiable credential ) to the pharmacist (the verifier ), a friend might take the prescription in to pick up the medication.
The data model allows for this by letting the subject issue a new verifiable credential and give it to the new holder , who can then present both verifiable credentials to the verifier . However, the content of this second verifiable credential is likely to be application-specific, so this specification cannot standardize the contents of this second verifiable credential . Nevertheless, a non-normative example is provided in Appendix C.5 Subject Passes a Verifiable Credential to Someone Else .
The Verifiable Credentials Data Model supports the holder acting on behalf of the subject in at least the following ways:
The mechanisms listed above describe the relationship between the holder and the subject and help the verifier decide whether the relationship is sufficiently expressed for a given use case.
The additional mechanisms the issuer or the verifier uses to verify the relationship between the subject and the holder are outside the scope of this specification.
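For instance, an issuer might express a parent-child relationship directly inside the child's credential. The sketch below uses illustrative identifiers, and the parent property and its shape are assumed names rather than normative vocabulary.

```python
# Sketch: the issuer embeds the parent's relationship to the child
# inside the child's credential. All identifiers are illustrative.
child_credential = {
    "issuer": "did:example:school",
    "credentialSubject": {
        "id": "did:example:child",
        "parent": {
            "id": "did:example:parent",
            "type": "Mother",
        },
    },
}
print(child_credential["credentialSubject"]["parent"]["id"])  # did:example:parent
```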
In the example above, the issuer expresses the relationship between the child and the parent such that a verifier would most likely accept the credential if it is provided by the child or the parent.
In the example above, the issuer expresses the relationship between the child and the parent in a separate credential such that a verifier would most likely accept any of the child's credentials if they are provided by the child or if the credential above is provided with any of the child's credentials .
In the example above, the child expresses the relationship between the child and the parent in a separate credential such that a verifier would most likely accept any of the child's credentials if the credential above is provided.
Similarly, the strategies described in the examples above can be used for many other types of use cases, including power of attorney, pet ownership, and patient prescription pickup.
When a subject passes a verifiable credential to another holder , the subject might issue a new verifiable credential to the holder in which the:
The holder can now create a verifiable presentation containing these two verifiable credentials so that the verifier can verify that the subject gave the original verifiable credential to the holder .
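The two-credential pattern can be sketched as follows; every identifier is illustrative. The link a verifier can check is that the second credential's issuer equals the original credential's subject.

```python
# The original subject issues a second credential naming the new holder
# as subject, and the holder presents both credentials together.
original_vc = {
    "issuer": "did:example:medical-board",
    "credentialSubject": {"id": "did:example:patient"},
}
second_vc = {
    "issuer": "did:example:patient",                     # original subject
    "credentialSubject": {"id": "did:example:friend"},   # new holder
}
presentation = {
    "holder": "did:example:friend",
    "verifiableCredential": [original_vc, second_vc],
}

# The verifiable link: the second credential's issuer is the original
# credential's subject.
print(second_vc["issuer"] == original_vc["credentialSubject"]["id"])  # True
```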
In the above example, a patient (the original subject ) passed a prescription (the original verifiable credential ) to a friend, and issued a new verifiable credential to the friend, in which the friend is the subject , the subject of the original verifiable credential is the issuer , and the credential is a copy of the original prescription.
When an issuer wants to authorize a holder to possess a credential that describes a subject who is not the holder , and the holder has no known relationship with the subject , then the issuer might insert the relationship of the holder to itself into the subject's credential .
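One illustrative shape for this (property names and identifiers are our assumptions, not normative vocabulary): a veterinarian issues a credential about a pet and records the owner, i.e., the holder, inside the subject's credential.

```python
# Sketch: the issuer embeds the holder's relationship to the subject
# inside the subject's credential.
credential = {
    "issuer": "did:example:veterinarian",
    "credentialSubject": {
        "id": "did:example:pet",
        "owner": {"id": "did:example:holder"},  # relationship to the holder
    },
}

def holder_authorized(credential: dict, holder: str) -> bool:
    """Toy verifier heuristic: accept when the presenting holder appears
    in the relationship the issuer recorded about the subject."""
    return credential["credentialSubject"].get("owner", {}).get("id") == holder

print(holder_authorized(credential, "did:example:holder"))  # True
```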
Verifiable credentials are not an authorization framework and therefore delegation is outside the scope of this specification. However, it is understood that verifiable credentials are likely to be used to build authorization and delegation systems. The following is one approach that might be appropriate for some use cases.
The Verifiable Credentials Data Model currently does not support either of these scenarios. It is for further study how they might be supported.
This section will be submitted to the Internet Engineering Steering Group (IESG) for review, approval, and registration with IANA in the "JSON Web Token Claims Registry".
This section contains the substantive changes that have been made since the publication of v1.0 of this specification as a W3C Recommendation.
Changes since the Recommendation :
The Working Group thanks the following individuals not only for their contributions toward the content of this document, but also for yeoman's work in this standards community that drove changes, discussion, and consensus among a sea of varied opinions: Matt Stone, Gregg Kellogg, Ted Thibodeau Jr, Oliver Terbu, Joe Andrieu, David I. Lehn, Matthew Collier, and Adrian Gropper.
Work on this specification has been supported by the Rebooting the Web of Trust community facilitated by Christopher Allen, Shannon Appelcline, Kiara Robles, Brian Weller, Betty Dhamers, Kaliya Young, Manu Sporny, Drummond Reed, Joe Andrieu, Heather Vescent, Kim Hamilton Duffy, Samantha Chase, and Andrew Hughes. The participants in the Internet Identity Workshop, facilitated by Phil Windley, Kaliya Young, Doc Searls, and Heidi Nobantu Saul, also supported the refinement of this work through numerous working sessions designed to educate about, debate on, and improve this specification.
The Working Group also thanks our Chairs, Dan Burnett, Matt Stone, Brent Zundel, and Wayne Chang, as well as our W3C Staff Contacts, Kazuyuki Ashimura and Ivan Herman, for their expert management and steady guidance of the group through the W3C standardization process.
Portions of the work on this specification have been funded by the United States Department of Homeland Security's Science and Technology Directorate under contract HSHQDC-17-C-00019. The content of this specification does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.
The Working Group would like to thank the following individuals for reviewing and providing feedback on the specification (in alphabetical order):
Christopher Allen, David Ammouial, Joe Andrieu, Bohdan Andriyiv, Ganesh Annan, Kazuyuki Ashimura, Tim Bouma, Pelle Braendgaard, Dan Brickley, Allen Brown, Jeff Burdges, Daniel Burnett, ckennedy422, David Chadwick, Chaoxinhu, Kim (Hamilton) Duffy, Lautaro Dragan, enuoCM, Ken Ebert, Eric Elliott, William Entriken, David Ezell, Nathan George, Reto Gmür, Ryan Grant, glauserr, Adrian Gropper, Joel Gustafson, Amy Guy, Lovesh Harchandani, Daniel Hardman, Dominique Hazael-Massieux, Jonathan Holt, David Hyland-Wood, Iso5786, Renato Iannella, Richard Ishida, Ian Jacobs, Anil John, Tom Jones, Rieks Joosten, Gregg Kellogg, Kevin, Eric Korb, David I. Lehn, Michael Lodder, Dave Longley, Christian Lundkvist, Jim Masloski, Pat McBennett, Adam C. Migus, Liam Missin, Alexander Mühle, Anthony Nadalin, Clare Nelson, Mircea Nistor, Grant Noble, Darrell O'Donnell, Nate Otto, Matt Peterson, Addison Phillips, Eric Prud'hommeaux, Liam Quin, Rajesh Rathnam, Drummond Reed, Yancy Ribbens, Justin Richer, Evstifeev Roman, RorschachRev, Steven Rowat, Pete Rowley, Markus Sabadello, Kristijan Sedlak, Tzviya Seigman, Reza Soltani, Manu Sporny, Orie Steele, Matt Stone, Oliver Terbu, Ted Thibodeau Jr, John Tibbetts, Mike Varley, Richard Varn, Heather Vescent, Christopher Lemmer Webber, Benjamin Young, Kaliya Young, Dmitri Zagidulin, and Brent Zundel.
Building a healthier future for all
Healthy People 2030 sets data-driven national objectives to improve health and well-being over the next decade.
Healthy People 2030 includes 359 core — or measurable — objectives as well as developmental and research objectives.
Learn more about the types of objectives .
Social determinants of health have a major impact on people's health and well-being — and they're a key focus of Healthy People 2030.
Leading Health Indicators (LHIs) are a small subset of high-priority objectives selected to drive action toward improving health and well-being.
Healthy People 2030’s disparities data feature allows you to track changes in disparities to see where we’re improving as a nation — and where we need to increase our efforts.
Healthy People 2030 provides hundreds of evidence-based resources to help you address public health priorities.
Registration is now open for the next Healthy People 2030 webinar, Air Quality Matters: Improving Health and Lung Function with Healthy People 2030 Objectives.
The Office of Disease Prevention and Health Promotion (ODPHP) is pleased to announce its next Healthy People 2030 webinar: Air Quality Matters: Improving Health and Lung Function with Healthy People 2030 Objectives. This webinar will take place on Wednesday, June 12 from 2:00 to 3:00 pm ET. Continuing Education Credits (CEs) are available.
What is ADHD?
Attention-deficit/hyperactivity disorder (ADHD) is marked by an ongoing pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development. People with ADHD experience an ongoing pattern of the following types of symptoms:
Some people with ADHD mainly have symptoms of inattention. Others mostly have symptoms of hyperactivity-impulsivity. Some people have both types of symptoms.
Many people experience some inattention, unfocused motor activity, and impulsivity, but for people with ADHD, these behaviors:
People with symptoms of inattention may often:
People with symptoms of hyperactivity-impulsivity may often:
Primary care providers sometimes diagnose and treat ADHD. They may also refer individuals to a mental health professional, such as a psychiatrist or clinical psychologist, who can do a thorough evaluation and make an ADHD diagnosis.
For a person to receive a diagnosis of ADHD, the symptoms of inattention and/or hyperactivity-impulsivity must be chronic or long-lasting, impair the person’s functioning, and cause the person to fall behind typical development for their age. Stress, sleep disorders, anxiety, depression, and other physical conditions or illnesses can cause similar symptoms to those of ADHD. Therefore, a thorough evaluation is necessary to determine the cause of the symptoms.
Most children with ADHD receive a diagnosis during the elementary school years. For an adolescent or adult to receive a diagnosis of ADHD, the symptoms need to have been present before age 12.
ADHD symptoms can appear as early as between the ages of 3 and 6 and can continue through adolescence and adulthood. Symptoms of ADHD can be mistaken for emotional or disciplinary problems or missed entirely in children who primarily have symptoms of inattention, leading to a delay in diagnosis. Adults with undiagnosed ADHD may have a history of poor academic performance, problems at work, or difficult or failed relationships.
ADHD symptoms can change over time as a person ages. In young children with ADHD, hyperactivity-impulsivity is the most predominant symptom. As a child reaches elementary school, the symptom of inattention may become more prominent and cause the child to struggle academically. In adolescence, hyperactivity seems to lessen and symptoms may more likely include feelings of restlessness or fidgeting, but inattention and impulsivity may remain. Many adolescents with ADHD also struggle with relationships and antisocial behaviors. Inattention, restlessness, and impulsivity tend to persist into adulthood.
Researchers are not sure what causes ADHD, although many studies suggest that genes play a large role. Like many other disorders, ADHD probably results from a combination of factors. In addition to genetics, researchers are looking at possible environmental factors that might raise the risk of developing ADHD and are studying how brain injuries, nutrition, and social environments might play a role in ADHD.
ADHD is more common in males than females, and females with ADHD are more likely to primarily have inattention symptoms. People with ADHD often have other conditions, such as learning disabilities, anxiety disorder, conduct disorder, depression, and substance use disorder.
While there is no cure for ADHD, currently available treatments may reduce symptoms and improve functioning. Treatments include medication, psychotherapy, education or training, or a combination of treatments.
For many people, ADHD medications reduce hyperactivity and impulsivity and improve their ability to focus, work, and learn. Sometimes several different medications or dosages must be tried before finding the right one that works for a particular person. Anyone taking medications must be monitored closely by their prescribing doctor.
Stimulants. The most common type of medication used for treating ADHD is called a “stimulant.” Although it may seem unusual to treat ADHD with a medication that is considered a stimulant, it works by increasing the brain chemicals dopamine and norepinephrine, which play essential roles in thinking and attention.
Under medical supervision, stimulant medications are considered safe. However, like all medications, they can have side effects, especially when misused or taken in excess of the prescribed dose, and require an individual’s health care provider to monitor how they may be reacting to the medication.
Non-stimulants. A few other ADHD medications are non-stimulants. These medications take longer to start working than stimulants, but can also improve focus, attention, and impulsivity in a person with ADHD. Doctors may prescribe a non-stimulant: when a person has bothersome side effects from stimulants, when a stimulant was not effective, or in combination with a stimulant to increase effectiveness.
Although not approved by the U.S. Food and Drug Administration (FDA) specifically for the treatment of ADHD, some antidepressants are used alone or in combination with a stimulant to treat ADHD. Antidepressants may help all of the symptoms of ADHD and can be prescribed if a patient has bothersome side effects from stimulants. Antidepressants can be helpful in combination with stimulants if a patient also has another condition, such as an anxiety disorder, depression, or another mood disorder. Non-stimulant ADHD medications and antidepressants may also have side effects.
Doctors and patients can work together to find the best medication, dose, or medication combination. To find the latest information about medications, talk to a health care provider and visit the FDA website .
Several specific psychosocial interventions have been shown to help individuals with ADHD and their families manage symptoms and improve everyday functioning.
For school-age children, frustration, blame, and anger may have built up within a family before a child is diagnosed. Parents and children may need specialized help to overcome negative feelings. Mental health professionals can educate parents about ADHD and how it affects a family. They also will help the child and his or her parents develop new skills, attitudes, and ways of relating to each other.
All types of therapy for children and teens with ADHD require parents to play an active role. Psychotherapy that includes only individual treatment sessions with the child (without parent involvement) is not effective for managing ADHD symptoms and behavior. This type of treatment is more likely to be effective for treating symptoms of anxiety or depression that may occur along with ADHD.
Behavioral therapy is a type of psychotherapy that aims to help a person change their behavior. It might involve practical assistance, such as help organizing tasks or completing schoolwork, or working through emotionally difficult events. Behavioral therapy also teaches a person how to:
Parents, teachers, and family members also can give feedback on certain behaviors and help establish clear rules, chore lists, and structured routines to help a person control their behavior. Therapists may also teach children social skills, such as how to wait their turn, share toys, ask for help, or respond to teasing. Learning to read facial expressions and the tone of voice in others, and how to respond appropriately can also be part of social skills training.
Cognitive behavioral therapy helps a person learn how to be aware and accepting of one’s own thoughts and feelings to improve focus and concentration. The therapist also encourages the person with ADHD to adjust to the life changes that come with treatment, such as thinking before acting, or resisting the urge to take unnecessary risks.
Family and marital therapy can help family members and spouses find productive ways to handle disruptive behaviors, encourage behavior changes, and improve interactions with the person with ADHD.
Parenting skills training (behavioral parent management training) teaches parents skills for encouraging and rewarding positive behaviors in their children. Parents are taught to use a system of rewards and consequences to change a child’s behavior, to give immediate and positive feedback for behaviors they want to encourage, and to ignore or redirect behaviors they want to discourage.
Specific behavioral classroom management interventions and/or academic accommodations for children and teens have been shown to be effective for managing symptoms and improving functioning at school and with peers. Interventions may include behavior management plans or teaching organizational or study skills. Accommodations may include preferential seating in the classroom, reduced classwork load, or extended time on tests and exams. The school may provide accommodations through what is called a 504 Plan or, for children who qualify for special education services, an Individualized Education Plan (IEP).
To learn more about the Individuals with Disabilities Education Act (IDEA), visit the U.S. Department of Education’s IDEA website .
Stress management techniques can benefit parents of children with ADHD by increasing their ability to deal with frustration so that they can respond calmly to their child’s behavior.
Support groups can help parents and families connect with others who have similar problems and concerns. Groups often meet regularly to share frustrations and successes, to exchange information about recommended specialists and strategies, and to talk with experts.
The National Resource Center on ADHD, a program of Children and Adults with Attention-Deficit/Hyperactivity Disorder (CHADD®) supported by the Centers for Disease Control and Prevention (CDC), has information and many resources. You can reach this center online or by phone at 1-866-200-8098.
Learn more about psychotherapy .
Parents and teachers can help kids with ADHD stay organized and follow directions with tools such as:
For adults:
A professional counselor or therapist can help an adult with ADHD learn how to organize their life with tools such as:
Clinical trials are research studies that look at new ways to prevent, detect, or treat diseases and conditions. The goal of clinical trials is to determine if a new test or treatment works and is safe. Although individuals may benefit from being part of a clinical trial, participants should be aware that the primary purpose of a clinical trial is to gain new scientific knowledge so that others may be better helped in the future.
Researchers at NIMH and around the country conduct many studies with patients and healthy volunteers. We have new and better treatment options today because of what clinical trials uncovered years ago. Be part of tomorrow’s medical breakthroughs. Talk to your health care provider about clinical trials, their benefits and risks, and whether one is right for you.
To learn more or find a study, visit:
Free brochures and shareable resources.
Last Reviewed: September 2023
Unless otherwise specified, the information on our website and in our publications is in the public domain and may be reused or copied without permission. However, you may not reuse or copy images. Please cite the National Institute of Mental Health as the source. Read our copyright policy to learn more about our guidelines for reusing NIMH content.
CRediT (Contributor Roles Taxonomy) was introduced with the intention of recognizing individual author contributions, reducing authorship disputes and facilitating collaboration. The idea came about following a 2012 collaborative workshop led by Harvard University and the Wellcome Trust, with input from researchers, the International Committee of Medical Journal Editors (ICMJE) and publishers, including Elsevier, represented by Cell Press.
CRediT offers authors the opportunity to share an accurate and detailed description of their diverse contributions to the published work.
The corresponding author is responsible for ensuring that the descriptions are accurate and agreed by all authors
The role(s) of all authors should be listed, using the relevant categories above
Authors may have contributed in multiple roles
CRediT in no way changes the journal’s criteria to qualify for authorship
CRediT statements should be provided during the submission process and will appear above the acknowledgment section of the published paper as shown further below.
| Term | Definition |
|---|---|
| Conceptualization | Ideas; formulation or evolution of overarching research goals and aims |
| Methodology | Development or design of methodology; creation of models |
| Software | Programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components |
| Validation | Verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs |
| Formal analysis | Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data |
| Investigation | Conducting a research and investigation process, specifically performing the experiments, or data/evidence collection |
| Resources | Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools |
| Data Curation | Management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later reuse |
| Writing - Original Draft | Preparation, creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation) |
| Writing - Review & Editing | Preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision, including pre- or post-publication stages |
| Visualization | Preparation, creation and/or presentation of the published work, specifically visualization/data presentation |
| Supervision | Oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team |
| Project administration | Management and coordination responsibility for the research activity planning and execution |
| Funding acquisition | Acquisition of the financial support for the project leading to this publication |
*Reproduced from Brand et al. (2015), Learned Publishing 28(2), with permission of the authors.
Zhang San: Conceptualization, Methodology, Software. Priya Singh: Data curation, Writing - original draft preparation. Wang Wu: Visualization, Investigation. Jan Jansen: Supervision. Ajay Kumar: Software, Validation. Sun Qi: Writing - review and editing.
Read more about CRediT here, or check out this article from Authors' Update: CRediT where credit's due.
IMAGES
VIDEO
COMMENTS
Steps in a Trial. Evidence. The heart of the case is the presentation of evidence. There are two types of evidence -- direct and circumstantial . Direct evidence usually is that which speaks for itself: eyewitness accounts, a confession, or a weapon. Circumstantial evidence usually is that which suggests a fact by implication or inference: the ...
Compilation and Presentation of Evidence. Evidence is how you or the opposing party can prove or refute the facts in your case. When presenting evidence in a trial, it's essential to consider a series of recommendations to avoid problems in the final stages of the case, states our Head of Litigation and Arbitration Department, Rubén Rivas ...
Presentation of Evidence The compelling presentation of evidence is a key dimension of a paper's quality. ASQ welcomes submissions from authors who think seriously about how to present their evidence in ways that make a paper easy to understand and compelling for readers. Part of researchers' craft is to draw
The order in which a criminal jury trial proceeds is governed by G.S. 15A-1221. After a jury is impaneled and an opportunity for opening statements is given, the State must present evidence of the defendant's guilt, that is, its "case-in-chief.". See G.S. 15A-1221(a)(5). The State goes first because it has the burden of proof.
Evidence, in law, any of the material items or assertions of fact that may be submitted to a competent tribunal as a means of ascertaining the truth of any alleged matter of fact under investigation before it. ... the presentation of documents or physical objects, or the assertion of a foreign law. The many rules of evidence that have evolved ...
The different categories of evidence that you will come across in your study of the law of evidence are outlined below. It is important to note that there is a degree of overlap between them, so they are not mutually exclusive. 1.4.1 Direct evidence Direct evidence is evidence which directly proves or disproves a fact in issue. An obvious
Books, journals, websites, newspapers, magazines, and documentary films are some of the most common sources of evidence for academic writing. Our handout on evaluating print sources will help you choose your print sources wisely, and the library has a tutorial on evaluating both print sources and websites. A librarian can help you find sources ...
Determine the Presentation of Evidence. If both authentication and admissibility are established, then the court must determine how the evidence will best be presented to the trier of fact, bearing in mind that the court is obligated to exercise control over the presentation of evidence to accomplish an effective, fair, and efficient proceeding.
The presentation of evidence at trial is governed and regulated by the jurisdiction's rules of evidence. Types of Evidence Evidence comes in many forms, as by its very definition, evidence is any thing presented to prove that something is true.
10 Steps for Presenting Evidence in Court. When you go to court, you will give information (called "evidence") to a judge who will decide your case. This evidence may include information you or someone else tells to the judge ("testimony") as well as items like email and text messages, documents, photos, and objects ("exhibits").
Steps in a Trial. Presentation of Evidence by the Defense. The defense lawyer may choose not to present evidence, in the belief that the plaintiff or government did not prove its case. Usually, however, the defense will offer evidence. In a criminal case, the witnesses presented by the defense may or may not include the defendant.
evidence. Evidence an item or information proffered to make the existence of a fact more or less probable. Evidence can take the form of testimony , documents, photographs, videos, voice recordings, DNA testing, or other tangible objects. Courts cannot admit all evidence, as evidence must be admissible under that jurisdiction's rules of ...
Evidence needs to be carefully chosen to serve the needs of the claim and to reach the target audience. An argument is designed to persuade a resistant audience to accept a claim via the presentation of evidence for the contentions being argued. The evidence establishes the amount of accuracy your arguments have.
Presentation of Evidence THE power of administrative tribunals to disregard the common-law exclusionary rules of evidence has not resulted, as is often erroneously assumed, in their being utterly ignored in administrative proceedings involving the adjudication of judicial questions. In cases involving the dis
The second definition is contained in the United States' Federal Rule of Evidence 401 which ... 274). A further objection is that the management of parties' conduct relating to evidence preservation and presentation should be left to judges and not to the jury. What a judge may do to optimize evidential weight is to impose a burden of ...
This definition underscores the interdisciplinary nature of forensic evidence, emphasizing its reliance on scientific principles to uncover truths that may otherwise remain concealed within the complexities of criminal cases. ... This exploration of expert witnesses and the presentation of forensic evidence underscores the multidimensional ...
Assertion-evidence talks are more focused, understood better by audiences, and delivered with more confidence. ... Christine Haas, a professional presentations instructor, discusses how to incorporate your own presentation into an assertion-evidence template. Hannah Salas, who is a undergraduate mechanical engineer from University of Nevada at ...
Evidence is defined as a means whereby any alleged matter of fact, the truth of which is submitted to investigation, is proved and includes statements by defendants, admission, judicial notices, presumptions of law, and observation by the court in its
Presentation of Evidence. Pursuant to 5 USCS § 556, an administrative law judge is authorized to regulate the course of a hearing. An administrative judge has broad discretion to allow or exclude witness testimony. Moreover, the judge has the power to sequester witnesses to ensure that they testify without being influenced by the testimony of others.
Duplicate presentation of the same evidence should be avoided wherever possible. (d) Authenticity. The authenticity of all documents submitted as proposed exhibits in advance of the hearing shall be deemed admitted unless written objection thereto is filed prior to the hearing, except that a party will be permitted to challenge such authenticity at the hearing for good cause shown.
The ability to present complex forensic evidence in a courtroom in a manner that is fully comprehensible to all stakeholders remains problematic. Individual subjective interpretations may impede a collective and correct understanding of the complex environments and the evidence presented, and current non-technological approaches do little to facilitate such understanding.
Definition of "evidence in chief": the main set of facts or proof presented by one side to establish its argument or claim. Example usage: "The lawyer prepared thoroughly for the presentation of the evidence in chief."
A verifiable presentation is a tamper-evident presentation encoded in such a way that authorship of the data can be trusted after a process of cryptographic verification. The precise content of each evidence scheme is determined by the specific evidence type definition.
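The verifiable-presentation structure described above comes from the W3C Verifiable Credentials data model; its envelope shape can be sketched as a typed object. This is a minimal illustrative sketch, not the normative data model: the field names follow the W3C VC vocabulary, while the credential payload, identifiers, and proof value below are hypothetical placeholders.

```typescript
// Minimal sketch of a W3C-style verifiable presentation envelope.
// The credential contents and proofValue are hypothetical placeholders;
// real deployments attach cryptographic proofs produced by a signing
// library, which a verifier then checks.

interface Proof {
  type: string;               // proof suite name
  created: string;            // ISO 8601 timestamp
  verificationMethod: string; // reference to the presenter's public key
  proofPurpose: string;       // "authentication" for presentations
  proofValue: string;         // encoded signature (placeholder here)
}

interface VerifiablePresentation {
  "@context": string[];
  type: string[];                  // must include "VerifiablePresentation"
  verifiableCredential: object[];  // the credentials being presented
  proof?: Proof;                   // makes the envelope tamper-evident
}

const vp: VerifiablePresentation = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiablePresentation"],
  verifiableCredential: [
    { type: ["VerifiableCredential"], credentialSubject: { id: "did:example:123" } },
  ],
  proof: {
    type: "DataIntegrityProof",
    created: "2024-01-01T00:00:00Z",
    verificationMethod: "did:example:123#key-1",
    proofPurpose: "authentication",
    proofValue: "zPLACEHOLDER",
  },
};

// A real verifier would check the proof cryptographically; here we only
// check the structural invariant that marks this object as a presentation.
console.log(vp.type.includes("VerifiablePresentation")); // prints "true"
```

The point of the extra `proof` on the envelope (as opposed to the proofs inside each credential) is that it binds the whole presentation to the presenter, which is what makes the presentation itself tamper-evident.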
Thanks to the iPad and associated apps, presenting evidence in the courtroom requires a smaller team and much less upheaval than was necessary in the past. There are two basic methods for presenting from an iPad: wired and wireless. You don't need a presentation-specific app to show documents on the iPad.