Samantha Putterman, PolitiFact

Fact-checking warnings from Democrats about Project 2025 and Donald Trump

This fact check originally appeared on PolitiFact.

Project 2025 has a starring role in this week’s Democratic National Convention.

And it was front and center on Night 1.

WATCH: Hauling large copy of Project 2025, Michigan state Sen. McMorrow speaks at 2024 DNC

“This is Project 2025,” Michigan state Sen. Mallory McMorrow, D-Royal Oak, said as she laid a hardbound copy of the 900-page document on the lectern. “Over the next four nights, you are going to hear a lot about what is in this 900-page document. Why? Because this is the Republican blueprint for a second Trump term.”

Vice President Kamala Harris, the Democratic presidential nominee, has warned Americans about “Trump’s Project 2025” agenda — even though former President Donald Trump doesn’t claim the conservative presidential transition document.

“Donald Trump wants to take our country backward,” Harris said July 23 in Milwaukee. “He and his extreme Project 2025 agenda will weaken the middle class. Like, we know we got to take this seriously, and can you believe they put that thing in writing?”

Minnesota Gov. Tim Walz, Harris’ running mate, has joined in on the talking point.

“Don’t believe (Trump) when he’s playing dumb about this Project 2025. He knows exactly what it’ll do,” Walz said Aug. 9 in Glendale, Arizona.

Trump’s campaign has worked to build distance from the project, which the Heritage Foundation, a conservative think tank, led with contributions from dozens of conservative groups.

Much of the plan calls for extensive executive-branch overhauls and draws on both long-standing conservative principles, such as tax cuts, and more recent culture war issues. It lays out recommendations for disbanding the Commerce and Education departments, eliminating certain climate protections and consolidating more power to the president.

Project 2025 offers a sweeping vision for a Republican-led executive branch, and some of its policies mirror Trump’s 2024 agenda. But Harris and her presidential campaign have at times gone too far in describing what the project calls for and how closely its plans overlap with Trump’s campaign.

PolitiFact researched Harris’ warnings about how the plan would affect reproductive rights, federal entitlement programs and education, just as we did for President Joe Biden’s Project 2025 rhetoric. Here’s what the project does and doesn’t call for, and how it squares with Trump’s positions.

Are Trump and Project 2025 connected?

To distance himself from Project 2025 amid the Democratic attacks, Trump wrote on Truth Social that he “knows nothing” about it and has “no idea” who is in charge of it. (CNN identified at least 140 former advisers from the Trump administration who have been involved.)

The Heritage Foundation sought contributions from more than 100 conservative organizations for its policy vision for the next Republican presidency, which was published in 2023.

Project 2025 is now winding down some of its policy operations, and director Paul Dans, a former Trump administration official, is stepping down, The Washington Post reported July 30. Trump campaign managers Susie Wiles and Chris LaCivita denounced the document.

WATCH: A look at the Project 2025 plan to reshape government and Trump’s links to its authors

However, Project 2025 contributors include a number of high-ranking officials from Trump’s first administration, including former White House adviser Peter Navarro and former Housing and Urban Development Secretary Ben Carson.

A recently released recording of Russell Vought, a Project 2025 author and the former director of Trump’s Office of Management and Budget, showed Vought saying Trump’s “very supportive of what we do.” He said Trump was only distancing himself because Democrats were making a bogeyman out of the document.

Project 2025 wouldn’t ban abortion outright, but would curtail access

The Harris campaign shared a graphic on X that claimed “Trump’s Project 2025 plan for workers” would “go after birth control and ban abortion nationwide.”

The plan doesn’t call to ban abortion nationwide, though its recommendations could curtail some contraceptives and limit abortion access.

What’s known about Trump’s abortion agenda lines up with neither Harris’ description nor Project 2025’s wish list.

Project 2025 says the Department of Health and Human Services should “return to being known as the Department of Life by explicitly rejecting the notion that abortion is health care.”

It recommends that the Food and Drug Administration reverse its 2000 approval of mifepristone, the first pill taken in a two-drug regimen for a medication abortion. Medication is the most common form of abortion in the U.S. — accounting for around 63 percent in 2023.

If mifepristone were to remain approved, Project 2025 recommends new rules, such as cutting its use from 10 weeks into pregnancy to seven. It would have to be provided to patients in person — part of the group’s efforts to limit access to the drug by mail. In June, the U.S. Supreme Court rejected a legal challenge to mifepristone’s FDA approval over procedural grounds.

WATCH: Trump’s plans for health care and reproductive rights if he returns to White House

The manual also calls for the Justice Department to enforce the 1873 Comstock Act on mifepristone, which bans the mailing of “obscene” materials. Abortion access supporters fear that a strict interpretation of the law could go further to ban mailing the materials used in procedural abortions, such as surgical instruments and equipment.

The plan proposes withholding federal money from states that don’t report to the Centers for Disease Control and Prevention how many abortions take place within their borders. The plan also would prohibit abortion providers, such as Planned Parenthood, from receiving Medicaid funds. It also calls for the Department of Health and Human Services to ensure that the training of medical professionals, including doctors and nurses, omits abortion training.

The document says some forms of emergency contraception — particularly Ella, a pill that can be taken within five days of unprotected sex to prevent pregnancy — should be excluded from no-cost coverage. The Affordable Care Act requires most private health insurers to cover recommended preventive services, which involves a range of birth control methods, including emergency contraception.

Trump has recently said states should decide abortion regulations and that he wouldn’t block access to contraceptives. Trump said during his June 27 debate with Biden that he wouldn’t ban mifepristone after the Supreme Court “approved” it. But the court rejected the lawsuit based on standing, not the case’s merits. He has not weighed in on the Comstock Act or said whether he supports it being used to block abortion medication, or other kinds of abortions.

Project 2025 doesn’t call for cutting Social Security, but proposes some changes to Medicare

“When you read (Project 2025),” Harris told a crowd July 23 in Wisconsin, “you will see, Donald Trump intends to cut Social Security and Medicare.”

The Project 2025 document does not call for Social Security cuts. None of its 10 references to Social Security addresses plans for cutting the program.

Harris also misleads about Trump’s Social Security views.

In his earlier campaigns and before he was a politician, Trump said about a half-dozen times that he’s open to major overhauls of Social Security, including cuts and privatization. More recently, in a March 2024 CNBC interview, Trump said of entitlement programs such as Social Security, “There’s a lot you can do in terms of entitlements, in terms of cutting.” However, he quickly walked that statement back, and his CNBC comment stands at odds with essentially everything else Trump has said during the 2024 presidential campaign.

Trump’s campaign website says that not “a single penny” should be cut from Social Security. We rated Harris’ claim that Trump intends to cut Social Security Mostly False.

Project 2025 does propose changes to Medicare, including making Medicare Advantage, the private insurance offering in Medicare, the “default” enrollment option. Unlike Original Medicare, Medicare Advantage plans have provider networks and can also require prior authorization, meaning that the plan can approve or deny certain services. Original Medicare plans don’t have prior authorization requirements.

The manual also calls for repealing health policies enacted under Biden, such as the Inflation Reduction Act. The law enabled Medicare to negotiate with drugmakers for the first time in history, and recently resulted in an agreement with drug companies to lower the prices of 10 expensive prescriptions for Medicare enrollees.

Trump, however, has said repeatedly during the 2024 presidential campaign that he will not cut Medicare.

Project 2025 would eliminate the Education Department, which Trump supports

The Harris campaign said Project 2025 would “eliminate the U.S. Department of Education” — and that’s accurate. Project 2025 says federal education policy “should be limited and, ultimately, the federal Department of Education should be eliminated.” The plan scales back the federal government’s role in education policy and devolves the functions that remain to other agencies.

Aside from eliminating the department, the project also proposes scrapping the Biden administration’s Title IX revision, which prohibits discrimination based on sexual orientation and gender identity. It also would let states opt out of federal education programs and calls for passing a federal parents’ bill of rights similar to ones passed in some Republican-led state legislatures.

Republicans, including Trump, have pledged to close the department, which gained Cabinet-level status in 1979 under Democratic President Jimmy Carter.

In one of his Agenda 47 policy videos, Trump promised to close the department and “to send all education work and needs back to the states.” Eliminating the department would have to go through Congress.

What Project 2025 and Trump would do on overtime pay

In the graphic, the Harris campaign says Project 2025 allows “employers to stop paying workers for overtime work.”

The plan doesn’t call for banning overtime wages. It recommends changes to some Occupational Safety and Health Administration, or OSHA, regulations and to overtime rules. Some changes, if enacted, could result in some people losing overtime protections, experts told us.

The document proposes that the Labor Department maintain an overtime threshold “that does not punish businesses in lower-cost regions (e.g., the southeast United States).” This threshold is the amount of money executive, administrative or professional employees need to make for an employer to exempt them from overtime pay under the Fair Labor Standards Act.

In 2019, the Trump administration finalized a rule that expanded overtime pay eligibility to most salaried workers earning less than about $35,568, which it said made about 1.3 million more workers eligible for overtime pay. The Trump-era threshold is high enough to cover most line workers in lower-cost regions, Project 2025 said.

The Biden administration raised that threshold to $43,888 beginning July 1, and that will rise to $58,656 on Jan. 1, 2025. That would grant overtime eligibility to about 4 million workers, the Labor Department said.

It’s unclear how many workers Project 2025’s proposal to return to the Trump-era overtime threshold in some parts of the country would affect, but experts said some would presumably lose the right to overtime wages.

Other overtime proposals in Project 2025’s plan include allowing some workers to choose to accumulate paid time off instead of overtime pay, or to work more hours in one week and fewer in the next, rather than receive overtime.

Trump’s past with overtime pay is complicated. In 2016, the Obama administration said it would raise the overtime threshold so that salaried workers earning less than $47,476 a year would qualify, about double the exemption level of $23,660 a year set in 2004.

But when a judge blocked the Obama rule, the Trump administration didn’t challenge the court ruling. Instead, it set its own overtime threshold, which raised the amount, but by less than the Obama rule would have.

Low Latency Inference Chapter 1: Up to 1.9x Higher Llama 3.1 Performance with Medusa on NVIDIA HGX H200 with NVLink Switch


As large language models (LLMs) continue to grow in size and complexity, multi-GPU compute is a must-have to deliver the low latency and high throughput that real-time generative AI applications demand. 

Performance depends both on the combined GPUs’ ability to process requests as “one mighty GPU” with ultra-fast GPU-to-GPU communication, and on advanced software that can take full advantage of the multiple GPUs. By splitting the calculations of each model layer across the available GPUs using a technique called tensor parallelism, in tandem with advanced algorithms like speculative decoding, token generation latency can be reduced, delivering an interactive user experience.
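The split-and-gather idea behind tensor parallelism can be sketched with NumPy arrays standing in for the GPUs. This is a toy illustration of the math, not TensorRT-LLM code; the layer sizes and the 4-way split are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gpus = 4
d_in, d_out = 8, 16                      # toy layer dimensions

x = rng.standard_normal((2, d_in))       # a small batch of activations
W = rng.standard_normal((d_in, d_out))   # the full weight matrix of one layer

# Column-parallel tensor parallelism: each "GPU" holds a slice of W's output
# columns and computes its partial result independently, with no communication.
shards = np.split(W, n_gpus, axis=1)
partials = [x @ w for w in shards]

# An all-gather then concatenates the partial outputs. In a real system this
# step crosses the GPU-to-GPU fabric, which is why NVLink Switch bandwidth
# matters for latency.
y_parallel = np.concatenate(partials, axis=1)

assert np.allclose(y_parallel, x @ W)    # identical to the single-GPU result
```

The sharded computation is numerically identical to the unsharded matmul; the cost that tensor parallelism adds is purely the communication step.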

For very low latency Llama 3.1 serving, cloud services can use a full NVIDIA HGX H200 server, which incorporates eight H200 Tensor Core GPUs and four all-to-all NVLink Switch chips. Each GPU within the server can communicate with any other GPU at the full 900 GB/s bandwidth via NVLink Switch. High GPU-to-GPU fabric bandwidth is required to keep multi-GPU communication from becoming the bottleneck in interactive use cases.

Figure: An HGX H200 baseboard with the four NVLink Switch chips.

To efficiently implement these optimization algorithms on NVIDIA HGX H200 systems, NVIDIA TensorRT-LLM is used. TensorRT-LLM is an open-source library built on TensorRT that delivers state-of-the-art inference performance on the latest LLMs using a variety of techniques, including tensor parallelism and speculative decoding.

Upcoming TensorRT-LLM optimizations, including improvements to a speculative decoding algorithm called Medusa, deliver outstanding low latency performance on HGX H200: 268 tokens/second/user on Llama 3.1 70B and 108 tokens/second/user on Llama 3.1 405B.

Medusa boosts token generation by up to 1.9x on NVIDIA HGX H200

Transformer-based LLMs are auto-regressive, meaning that tokens need to be generated sequentially, limiting throughput per generation step to just one token. Typically, during LLM inference, the rate at which a single token is generated depends on how quickly model weights are loaded into memory. This means that the workload can leave the substantial Tensor Core capabilities of H200 GPUs underutilized. 
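A back-of-the-envelope model makes the memory-bandwidth bound concrete. The figures below are illustrative assumptions, not measured values: roughly 4.8 TB/s of HBM bandwidth per H200, and a 70B-parameter model at about one byte per weight in FP8.

```python
# Auto-regressive decoding streams the full set of model weights from HBM for
# every generated token, so per-user token rate is roughly capped by
# aggregate memory bandwidth divided by model size (illustrative numbers).

hbm_bandwidth_gb_s = 8 * 4800   # assumed: ~4.8 TB/s per H200 GPU, 8 GPUs
weight_bytes_gb = 70            # assumed: Llama 3.1 70B in FP8, ~1 byte/param

max_tokens_per_s = hbm_bandwidth_gb_s / weight_bytes_gb
print(f"~{max_tokens_per_s:.0f} tokens/s/user upper bound")  # ~549
```

Real measured rates sit well below this ceiling because of KV-cache reads, inter-GPU communication, and compute overheads, which is exactly the headroom speculative decoding tries to reclaim.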

Speculative decoding is a technique that increases token generation throughput per generation step by using a “draft model” to try to predict multiple tokens beyond the next one. The target LLM then “batches” the prediction candidates and validates them in parallel with the next token, making more effective use of available parallel GPU compute resources. If the original LLM accepts a candidate sequence, multiple tokens are generated in that step, accelerating token generation. 

Medusa, described in this paper , is a speculative decoding algorithm that uses the original model as the draft model, avoiding the system complexity and distribution discrepancy of using a separate draft model. This technique employs additional decoding “heads”, called Medusa heads, to predict candidate tokens beyond the next token. Each Medusa head generates a distribution of tokens beyond the previous. Then a tree-based attention mechanism samples some candidate sequences for the original model to validate. The number of parallel candidate sequences is called the draft length and the average number of tokens accepted per generation step is the acceptance rate. A greater acceptance rate increases overall token generation throughput. 
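One way to see why the acceptance rate drives throughput is a rough cost model. The formula and the 10% overhead figure below are assumptions made for intuition only, not NVIDIA's measurements or TensorRT-LLM's actual cost model.

```python
# Assumed model: if each generation step accepts `a` tokens on average (the
# acceptance rate) and the Medusa heads plus tree attention add fractional
# overhead `o` to the cost of a step, expected speedup over plain
# auto-regressive decoding is roughly a / (1 + o).

def medusa_speedup(acceptance_rate, overhead=0.1):
    return acceptance_rate / (1 + overhead)

# e.g., accepting ~2.1 tokens/step with 10% per-step overhead:
print(f"{medusa_speedup(2.1):.2f}x")  # 1.91x, in line with the 1.9x above
```

Under this model, fine-tuning the heads to raise the acceptance rate pays off almost linearly, while the fixed per-step overhead sets the break-even point.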

Figure: HGX H200 Llama 3.1 throughput with and without Medusa. Llama 3.1 70B: 184 tokens/second/user without Medusa, 268 with. Llama 3.1 405B: 56 tokens/second/user without Medusa, 108 with.

With Medusa, an HGX H200 is able to produce 268 tokens per second per user for Llama 3.1 70B and 108 for Llama 3.1 405B. This is about 1.5x faster on Llama 3.1 70B and over 1.9x faster on Llama 3.1 405B than without Medusa. Although the Medusa acceptance rate varies between tasks depending on how the heads are fine-tuned, its performance gains generalize across a wide range of tasks.

Medusa heads for both Llama 3.1 70B and Llama 3.1 405B were trained using the NVIDIA TensorRT Model Optimizer integration with NVIDIA NeMo framework. The Medusa head training used a frozen backbone, ensuring that use of Medusa yields identical accuracy to the base model.

NVIDIA full-stack innovation never stops

NVIDIA HGX H200 with NVLink Switch and TensorRT-LLM already delivers excellent real-time inference performance on the most popular and demanding community models. To continue improving user experiences and reducing inference cost, we relentlessly innovate across every layer of the technology stack: chips, systems, software libraries, algorithms, and more. 

We look forward to sharing future updates on our low latency inference performance as both our platform and the LLM ecosystem advance. 

Financial Information Quality: Analysis of Cloud Accounting Adoption on UAE Firms

  • Conference paper
  • First Online: 28 August 2024

Authors: Nora Azima Noordin, Ahmad Hayek, Mirjana Sejdini, Aysha Humaid, Nouf Sultan, Bashair Abdulla and Mariam Yousif

Part of the book series: Lecture Notes in Operations Research (LNOR)

Included in the conference series: International Conference on Business Analytics in Practice
This research investigates the impact of cloud accounting on the quality of financial information in the UAE through a quantitative method. A questionnaire was employed as the primary data collection tool, with 233 participants from various industries providing responses. The analysis yielded key findings supporting three hypotheses. H01, aligning with previous research, confirmed the benefits of cloud accounting for small and medium-sized businesses, emphasizing its similarity to traditional accounting methods while ensuring secure financial information storage. Similarly, H02 indicated a significant effect of cloud accounting on the data efficiency of financial information quality, supported by studies emphasizing the direct influence of information technology on data quality and efficiency. Furthermore, H03 revealed a notable impact on the data mining of financial information quality, corroborated by research highlighting the facilitative role of cloud computing in accessing and managing data. Overall, the study concludes that cloud accounting has a statistically significant and positive effect on data storage, data efficiency, and data mining in the selected UAE firms. These findings align with previous research on cloud computing's impact on accounting information systems and data mining tools, emphasizing the simplification of data access and central management of software and data storage. This research contributes to the understanding of cloud accounting's transformative potential, providing valuable insights for businesses and policymakers in the UAE.


Noordin, N.A. et al. (2024). Financial Information Quality: Analysis of Cloud Accounting Adoption on UAE Firms. In: Emrouznejad, A., Zervopoulos, P.D., Ozturk, I., Jamali, D., Rice, J. (eds) Business Analytics and Decision Making in Practice. ICBAP 2024. Lecture Notes in Operations Research. Springer, Cham. https://doi.org/10.1007/978-3-031-61589-4_25

Speaker 1: In this video, we're going to explain exactly how to write up the results chapter for a quantitative study, whether that's a dissertation, thesis, or any other kind of academic research project. We'll walk you through the process step by step so that you can craft your results section with confidence. So, grab a cup of coffee, grab a cup of tea, whatever works for you, and let's jump into it. Hey, welcome to Grad Coach TV, where we demystify and simplify the oftentimes intimidating world of academic research. My name's Emma, and today we're going to explore the results chapter, which is also sometimes called the findings chapter in a dissertation or thesis. If you're new here, be sure to hit that subscribe button for more videos covering all things research related. Also, if you're looking for hands-on help with your research, check out our one-on-one coaching services, where we help you craft your research project step by step. It's basically like having a friendly professor in your pocket whenever you need it. If that sounds interesting to you, you can learn more and book a free consultation at www.gradcoach.com. All right, with that out of the way, let's get into it. Before we get into the nuts and bolts of how to write up the results chapter, it's useful to take a step back and ask the question, what exactly is a results chapter, and what purpose does it serve? If you understand both the what and the why, you'll have a much clearer direction in terms of the how. So, what's the results chapter all about then? Well, as the name suggests, the results chapter showcases the results of your quantitative analysis. In other words, it presents all the statistical data you've generated in a systematic and intuitive fashion. The results chapter is one of the most important chapters of your dissertation because it shows the reader what you found in terms of the quantitative data you've collected and analyzed. 
It presents the data using a clear text-based narrative, which is supported by tables, graphs, and charts. In addition to presenting these findings, it also highlights any potential issues you've come across, such as statistical outliers or unusual findings. But how's that different from the discussion chapter, you ask? Well, in the results chapter, you only present your statistical findings. Contrasted to this, in the discussion chapter, you interpret your findings and link them to prior research, in other words, your literature review, as well as your research objectives and research questions. Therefore, the key difference is that in the results chapter, you present and describe the data, while in the discussion chapter, you interpret the data and explain what it means in terms of the bigger picture. Let's take a look at an example. In your results chapter, you may have a plot that shows how respondents answered a survey question, the number of respondents per category, for instance. You may also state whether this supports one of your hypotheses by using a p-value from a statistical test. In other words, you're just presenting the facts and figures. Contrasted to this, in the discussion chapter, you will say why these statistical findings are relevant to your research question and how they compare with the existing literature. In other words, in the discussion chapter, you'll interpret your findings in relation to your research objectives. Long story short, the results chapter's job is purely to present and describe the data. So keep this in mind and make sure that you don't present anything other than the hard facts and figures. This is not the place for subjective interpretation. Now, a quick caveat. It's worth mentioning that some universities prefer you to combine the results and discussion chapters. 
Even so, it's still a good idea to separate the results and discussion elements within the chapter, as this ensures your findings are both described and interpreted in a consistent fashion. Typically, though, the results and discussion chapters are split up in quantitative studies. If you're unsure, chat with your research supervisor to find out what their preference is. All right, now that we've got that out of the way, we can look at how to write up the results chapter. Let's do it. There are multiple steps involved in writing up the results chapter for a quantitative study. The exact number of steps will vary from project to project and will depend on the nature of the research aims, objectives, and research questions. For example, some studies will make use of both descriptive and inferential statistics, while others will only use the former. So, in this video, we'll outline a generic process and structure that you can follow, but keep in mind that you may need to trim it down based on your specific research objectives. The first step in crafting your results chapter is to revisit your research objectives and questions. These will be, or at least should be, the driving force behind both your results and discussion chapters. During your statistical analysis, you will have generated a mountain of data, so you need to use your research objectives and questions to sift through this data and decide what's relevant. Therefore, the first step is for you to review your research objectives and research questions very closely and then ask yourself which statistical analyses and tests would specifically help you address these. For each research objective and research question, list the specific piece or pieces of analysis that address it. Keep this list handy as you'll revisit it multiple times as you craft your results chapter. At this stage, it's also useful to think about the key points that you want to raise in your discussion chapter and note these down. 
Every point you raise in your discussion chapter will need to be backed up in the results chapter, so you need to make sure that you lay a firm foundation there. So, jot down the main points you want to make in your discussion chapter and then list the specific piece of analysis that addresses each point. Having considered both of these areas, you should now have a short list of potential analyses and data points that you know need to be included in your results chapter in some shape or form. Next, you should draw up a rough outline of how you plan to structure your chapter. This doesn't need to be highly detailed, but you need to think about how you'll order the various analyses in your chapter so that there's a smooth logical flow. We'll discuss the standard structure of a quantitative results chapter in more detail shortly, but it's worth mentioning now that it's essential to draw up a rough outline before you start writing or you'll end up with a wishy-washy mess of information. This advice applies to any chapter, by the way. As with all chapters in your dissertation or thesis, you need to start your quantitative results chapter by providing a brief overview of what you'll do in the chapter and why. For example, you'd explain that you will start by presenting the sample demographic data to understand the composition and representativeness of the sample before moving on to X to understand Y and Z. This introduction section shouldn't be lengthy. A paragraph or two is more than enough. The aim is simply to give the reader a heads up about what you'll cover, not to provide a summary of the findings. Also, it's a good idea to weave the research questions into this section so that there's a golden thread that runs through your document. The first set of data that you'll typically present in your results chapter is an overview of the sample demographics. In other words, you'll give the reader an overview regarding the demographics of your survey respondents. 
For example, what age groups exist and how are they distributed? How is gender distributed? How is ethnicity distributed? What areas do the participants live in? Why is this important, you ask? Well, the purpose of this section is to assess how representative the sample is of the broader population. This is important for the sake of generalizability of the results. If your sample is not representative of the population, you won't be able to generalize your findings. This isn't necessarily a bad thing, but it's a limitation you'll need to acknowledge. Of course, to make this representativeness assessment, you'll need to already have an understanding of the demographics of the actual population you're interested in. So, make sure that you design your survey to capture the correct demographic information that you will compare your sample to. But, what if I'm not interested in generalizability, you say? Well, even if you don't intend to extrapolate your findings to the broader population, understanding your sample will allow you to interpret your findings appropriately, considering who responded. In other words, the demographic data will help you contextualize your findings accurately. For example, if 80% of your sample was aged over 65, this may be a noteworthy contextual factor to consider when interpreting the data. Similarly, if a large portion of your sample was skewed towards one gender, this would be an important contextual factor to note. Long story short, regardless of your intention to produce generalizable results, it's essential to understand and clearly communicate the demographic data of your sample. So, be sure to put in the time and effort in this section so that you can contextualize your findings accurately. Before you undertake your core statistical analysis, you need to do some checks to ensure that your data are suitable for the analysis methods and techniques you plan to use. 
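To make the representativeness assessment described above concrete, here is a minimal sketch of comparing a sample's age-group mix against known population figures with a chi-square goodness-of-fit test. The counts and population proportions are invented for illustration, and the census-style figures are an assumption, not real data:

```python
from scipy import stats

# Hypothetical sample counts per age group and assumed population proportions
sample_counts = [18, 34, 28, 20]             # e.g. 18-29, 30-44, 45-64, 65+
population_props = [0.22, 0.30, 0.28, 0.20]  # assumed census figures

total = sum(sample_counts)
expected = [p * total for p in population_props]

# Chi-square goodness-of-fit: does the sample match the population mix?
chi2, p_value = stats.chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Sample distribution differs from the population (limited generalizability).")
else:
    print("No evidence the sample differs from the population mix.")
```

A non-significant result here does not prove representativeness, but a significant one flags a limitation worth acknowledging.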
If you try to analyze data that doesn't meet the assumptions of a specific statistical method or technique, your results will be largely meaningless. Therefore, you need to do some checks on your data before you jump into the actual analysis. Most commonly, there are two areas you need to pay attention to. The first thing you need to check is the reliability of your composite measures. When you have multiple scale-based measures that combine to capture one construct, this is called a composite measure. For example, you may have four Likert scales that all aim to measure the same thing, but are phrased in different ways. In other words, within a survey, these four scales should all receive similar ratings, assuming they are indeed measuring the same thing. This is called internal consistency. Unfortunately, internal consistency is not guaranteed, especially if you developed the scales yourself. So, you need to assess the reliability of each composite measure using a test. Cronbach's alpha is a common test used to assess internal consistency. In other words, to show that the items you're combining are more or less saying the same thing. A high alpha score means that your composite measure is internally consistent. A low alpha score means you may need to scrap one or more of the individual measures. There are tests other than Cronbach's alpha that can be used, and there's some hot debate about which one is the best, but we won't get into that here. The key takeaway is that you need to undertake some sort of testing to assess internal consistency, and you need to present those test results in this section of your chapter. Once you're comfortable that your composite measures are internally consistent, the next thing you need to look at is the shape of the data for each of your variables. What do you mean the shape of the data? Well, for each variable, you need to assess whether the data are symmetrical. In other words, normally distributed in a nice bell curve or not. 
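The internal-consistency check just described can be sketched in a few lines. This computes Cronbach's alpha from scratch for a hypothetical matrix of four Likert items; the response data and the 0.7 rule of thumb are illustrative assumptions, not universal standards:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses (1-5) to four Likert items intended to measure one construct
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")  # values around 0.7+ are commonly read as acceptable
```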
The shape of the data is important as it will directly impact what type of analysis methods and techniques you can use. For many common inferential tests, such as t-tests and ANOVAs (we'll discuss these a bit later, don't stress), your data needs to be normally distributed. In other words, symmetrical. If it's not, you'll need to adjust your strategy and use alternative statistical tests. To assess the shape of the data, you'll usually assess a variety of fairly basic descriptive statistics, such as the mean, median, and skewness, which is exactly what we'll look at next. Now that you've laid the foundation by examining the representativeness of your sample, the reliability of your composite measures, and the shape of your data, you can get started with the actual statistical analysis. Finally. The first step is to present the descriptive statistics for your variables. As I mentioned, the descriptive statistics will help you assess the shape of your data. But depending on the nature of your research, the descriptive statistics could also play an important role in directly addressing your research objectives and research questions. So, what are descriptive statistics? When we talk about descriptives, this usually includes basic statistics, such as the mean. This is simply the mathematical average of a range of numbers. The median. This is the midpoint in a range of numbers when the numbers are arranged in order. Standard deviation. This metric indicates how dispersed a range of numbers is. In other words, how close all the numbers are to the mean, the average. Skewness. This indicates how symmetrical a range of numbers is. In other words, do they tend to cluster into a smooth bell curve shape in the middle of the graph? Or do they lean to the left or the right? And lastly, kurtosis. This metric indicates whether the data are heavily or lightly tailed relative to the normal distribution. 
In other words, how peaked or flat the distribution is. If these statistics sound like gibberish to you, be sure to check out our video covering the basics of quantitative data analysis. I'll include the link below. When you're presenting your descriptive stats, using a large table to present all the stats for multiple variables can be a very effective way to present your data economically. This saves you a lot of space and makes it easier to compare and contrast the statistics for each variable. You can also use color coding to help make the data more easily digestible. For categorical data, for example, data that shows the percentage of people who chose or fit into each category, you can either just plain describe the percentages or numbers of people who responded to something, or you could use graphs and charts such as bar graphs and pie charts to present your data. There's no one size fits all approach here. In some cases, it will make more sense to just present the numbers in a table or a paragraph. For example, if there are only two categories. While in other cases, graphs and charts will be useful. For example, if there are multiple categories. A pro tip, when using charts and graphs, make sure that you label them simply and clearly so that your reader can easily understand them. There is nothing more frustrating than a graph that's missing axis labels. Keep in mind that although you'll be presenting tables, charts and graphs, your text content needs to present a clear narrative that can stand on its own. In other words, don't rely purely on your figures and tables to convey your point. Highlight the crucial trends and values in the body of the text. Figures and tables should complement your writing, not carry it. All right, so that covers the basics of descriptive statistics. Depending on your research aims, objectives and research questions, you may end your analysis here. 
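As a concrete illustration, the descriptive statistics listed above, plus a formal normality check, might be computed like this with SciPy. The scores are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical test scores for a single variable
scores = np.array([52, 55, 58, 60, 61, 63, 64, 66, 70, 91])

mean = scores.mean()
median = np.median(scores)
sd = scores.std(ddof=1)        # sample standard deviation
skew = stats.skew(scores)      # > 0 means a long right tail
kurt = stats.kurtosis(scores)  # excess kurtosis relative to the normal distribution

print(f"mean={mean:.1f}, median={median:.1f}, sd={sd:.1f}, "
      f"skew={skew:.2f}, kurtosis={kurt:.2f}")

# A formal normality check to complement the skewness/kurtosis figures
w_stat, p_value = stats.shapiro(scores)
print(f"Shapiro-Wilk p = {p_value:.3f} (p < 0.05 suggests the data are not normal)")
```

Note how the single outlier (91) pulls the mean above the median and produces positive skew, which is exactly the kind of pattern these statistics are meant to surface.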
However, if your study also requires inferential statistics, then it's time to get started on those. All right, on to inferentials. Unlike descriptive statistics where the focus is purely on the sample, inferential statistics are used to make predictions about the population. So this part of the results chapter is where things can get really interesting. Inferential methods, broadly speaking, can be broken up into two groups. First up, there are those analyses that compare measurements between groups, such as t-tests, which measure differences between two groups, and ANOVAs, which measure differences between multiple groups. For example, you could use ANOVA to assess the difference in average weight loss between three groups that adopted three different diets. The second type of inferential methods are those that assess relationships between variables, such as correlation analysis and regression analysis. For example, you could use correlation analysis to assess the relationship between the number of hours studied and test marks earned within a sample of students. Within each of these inferentials, some tests can be used for normally distributed data, in other words, symmetrical data. And some tests are designed specifically for use on non-normally distributed data. So it's important to make sure that you use the right analysis tool for your data shape. Remember, you would have assessed data shape in your descriptive statistics section, so make sure that you align your inferential approach with those findings. There are a seemingly endless number of analysis methods and tests that you can use to crunch your data, so it's easy to run down a rabbit hole and end up with piles of test data. Therefore, you need to be selective about which methods you use. Ultimately, the most important thing is to make sure that you adopt the analysis methods that allow you to achieve your research objectives and answer your research questions. So let those two factors guide you. 
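A minimal sketch of the three inferential workhorses mentioned above, run on simulated data. The group means, sample sizes, and noise levels are invented for illustration, and the fixed seed just makes the sketch reproducible:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # fixed seed so the sketch is reproducible

# Group comparison 1: independent-samples t-test (hypothetical weight loss in kg)
diet_a = rng.normal(6.0, 1.5, 30)
diet_b = rng.normal(3.5, 1.5, 30)
t_stat, p_t = stats.ttest_ind(diet_a, diet_b)

# Group comparison 2: one-way ANOVA across three diets
diet_c = rng.normal(4.5, 1.5, 30)
f_stat, p_f = stats.f_oneway(diet_a, diet_b, diet_c)

# Relationship: Pearson correlation between hours studied and test marks
hours = rng.uniform(0, 20, 50)
marks = 40 + 2.5 * hours + rng.normal(0, 8, 50)
r, p_r = stats.pearsonr(hours, marks)

print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"ANOVA:  F = {f_stat:.2f}, p = {p_f:.4f}")
print(f"corr:   r = {r:.2f}, p = {p_r:.4f}")
```

For non-normal data, the usual swaps would be Mann-Whitney U for the t-test, Kruskal-Wallis for ANOVA, and Spearman's rho for Pearson's r.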
As with the descriptive statistics, in this section of your results chapter, you should try to make use of figures and tables as effectively as possible. For example, if you present a correlation table, use color coding to highlight the significance of the correlation values, or present a scatter plot to visually demonstrate what the trend is. The easier you make it for your reader to digest your findings, the more effectively you'll be able to make your arguments in the next chapter. Right. With both your descriptive and inferential statistics presented, you should have now laid the foundation for your discussion chapter. So at this stage, it's a good idea to quickly revisit that list that you drew up in step one and make sure you've covered all the necessary data to support your research objectives and research questions. If you have, you're ready to take the next step. If your study requires it, the next component of your results chapter will be hypothesis testing. Not every study will need hypotheses, so don't feel like you need to shoehorn these in if you haven't got any. As with so many things in your dissertation, the need for hypotheses depends on your research aims, objectives, and questions. So what exactly is a hypothesis? Generally speaking, a hypothesis is a statement that expresses an expected difference between groups or a relationship between variables. Importantly, it's a statement that can be supported or rejected by a statistical test. In other words, it needs to be very specific and measurable and cannot leave any room for interpretation or subjectivity. For example, a statement like, there is a relationship between study hours and test marks scored could be supported or rejected by a statistical test. For example, correlation analysis. Contrasted to this, a statement like, ice cream is the meaning of life, couldn't be supported by a statistical test, as much as I wish it could be. 
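The study-hours hypothesis mentioned above could be tested like this. The data are hypothetical and the 0.05 significance level is a common convention, not a requirement:

```python
from scipy import stats

# H1: there is a relationship between study hours and test marks (hypothetical data)
hours = [2, 4, 5, 7, 8, 10, 12, 14, 15, 18]
marks = [45, 50, 52, 58, 60, 66, 70, 74, 78, 85]

r, p = stats.pearsonr(hours, marks)
alpha = 0.05  # conventional significance level, chosen in advance

verdict = "supported" if p < alpha else "not supported"
print(f"r = {r:.3f}, p = {p:.2e} -> H1 is {verdict} at alpha = {alpha}")
```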
So if your dissertation or thesis included the development of hypotheses in the literature review chapter, this section is where you'd present them once again and test them using your statistical data. If you want to learn more about hypotheses, check out our detailed post over on the Grad Coach blog. I'll include a link below this video. One last thing. If your research involved developing a theoretical framework, you can, at this stage, present that framework once again, this time populating it with the hypothesis testing data. For example, if you were developing a theoretical model of the antecedents of trust and your hypotheses related to the antecedent variables, you could present the model again, this time incorporating the test results for each variable. Right, with your quantitative analyses presented, it's time to wrap up your results chapter and transition to the discussion chapter. To conclude your results chapter, the final step is to provide a brief summary of the key findings. Brief is the key word here. Much like the chapter introduction, this shouldn't be lengthy, a paragraph or two maximum. In this section, you only need to highlight the findings most relevant to your research objectives and research questions so that you lay the foundation for the next chapter. Don't provide a lengthy recap of each and every section of results. Just remind the reader of the key takeaways and wrap it up. If you work through your results chapter step-by-step as we've discussed in this video, you should land up with a comprehensive presentation of your key data. Keep in mind that what we've discussed here is a generic structure. There's no one-size-fits-all. The exact structure and contents of your results chapter will be influenced by your specific research objectives and research questions. 
As I mentioned in step one, start by crafting a list that covers your research objectives and questions and mapping that to the various statistical analyses and tests you've undertaken. Revisit that list as you work through each section of your results chapter and you can rest assured that you'll be headed in a good direction. All right, so that wraps it up for today. If you enjoyed the video, hit the like button and please leave a comment if you have any questions. Also, be sure to subscribe for more research-related content. Remember, if you need a helping hand with your research, be sure to check out our private coaching service where we work with you on a one-on-one basis, chapter-by-chapter, to help you craft a winning dissertation, thesis, or research project. If that sounds interesting to you, book a free consultation with a friendly coach at www.gradcoach.com. As always, I'll include a link below. That's all for this episode of Grad Coach TV. Until next time, good luck.


  • Open access
  • Published: 26 August 2024

Inter-laboratory comparison of eleven quantitative or digital PCR assays for detection of proviral bovine leukemia virus in blood samples

  • Aneta Pluta 1,13,
  • Juan Pablo Jaworski 2,
  • Casey Droscha 3,
  • Sophie VanderWeele 3,
  • Tasia M. Taxis 4,
  • Stephen Valas 5,
  • Dragan Brnić 6,
  • Andreja Jungić 6,
  • María José Ruano 7,
  • Azucena Sánchez 7,
  • Kenji Murakami 8,
  • Kurumi Nakamura 8,
  • Rodrigo Puentes 9,
  • M. Laureana De Brun 9,
  • Vanesa Ruiz 2,
  • Marla Eliana Ladera Gómez 10,
  • Pamela Lendez 10,
  • Guillermina Dolcini 10,
  • Marcelo Fernandes Camargos 11,
  • Antônio Fonseca 11,
  • Subarna Barua 12,
  • Chengming Wang 12,
  • Aleksandra Giza 13 &
  • Jacek Kuźmak 1

BMC Veterinary Research, volume 20, Article number: 381 (2024)


Bovine leukemia virus (BLV) is the etiological agent of enzootic bovine leukosis and causes a persistent infection that can leave cattle with no symptoms. Many countries have successfully eradicated BLV through improved detection and management methods. However, despite the growing number of novel molecular detection methods, there have been few efforts to standardize their results on a global scale. This study aimed to determine the interlaboratory accuracy and agreement of 11 molecular tests in detecting BLV. Each qPCR/ddPCR method varied by target gene, primer design, DNA input, and chemistry. DNA samples were extracted from the blood of BLV-seropositive cattle and lyophilized to ensure better preservation during shipping to participants around the globe. Twenty-nine out of 44 samples were correctly identified by the 11 laboratories, and all methods exhibited a diagnostic sensitivity between 74% and 100%. Agreement among the different assays was linked to the BLV copy number present in each sample and to the characteristics of each assay (i.e., the BLV target sequence). Finally, the mean correlation value for all assays was within the range of strong correlation. This study highlights the continuous need for standardization and harmonization among assays and participants. The results underscore the need for an international calibrator to estimate the efficiency (standard curve) of the different assays and improve quantitation accuracy. They will also inform future participants about the variability associated with emerging chemistries, methods, and technologies used to study BLV. Altogether, improving test performance worldwide will aid eradication efforts.


Introduction

Bovine leukemia virus (BLV) is a deltaretrovirus of the Orthoretrovirinae subfamily within the Retroviridae family. An essential step in the BLV replication cycle is the integration of a DNA copy of its RNA genome into the DNA of a host cell [ 1 ]. Once integrated, the proviral DNA is replicated along with the host's DNA during cell division, as for any cellular gene. BLV is the etiologic agent of enzootic bovine leukosis (EBL). BLV causes a persistent infection in cattle, and in most cases this infection is asymptomatic [ 2 ]. In one-third of infected animals the infection progresses to a state of persistent lymphocytosis, and in 1 to 10% of infected cattle it develops into lymphosarcoma [ 2 ]. BLV causes high economic losses due to trade restrictions, replacement costs, reduced milk production, immunosuppression, and increased susceptibility to pneumonia, diarrhea, mastitis, and other conditions [ 3 , 4 , 5 , 6 ]. BLV is globally distributed with a high prevalence, except in Western Europe and Oceania, where the virus has been successfully eradicated through the detection and elimination of BLV-infected animals [ 7 , 8 ]. The agar gel immunodiffusion test and ELISA for the detection of BLV-specific antibodies in serum and milk are the World Organization for Animal Health (WOAH, founded as OIE) prescribed tests for serological diagnosis, but ELISA, owing to its high sensitivity and ability to test many samples at very low cost, is especially recommended [ 9 ]. Despite the advantages of serologic testing, there are some scenarios in which direct detection of a BLV genomic fragment is important to improve BLV detection. The most frequent cases are the screening of calves with maternal antibodies, acute infections, animals without a persistent antibody response, and animal subproducts (i.e., semen). 
In this regard, nucleic acid amplification tests such as real-time quantitative PCR (qPCR) allow rapid and highly sensitive detection of BLV proviral DNA (BLV DNA) and can be used to test infected but asymptomatic animals before anti-BLV antibodies are elicited and while the proviral load (PVL) is still low [ 10 ]. Furthermore, qPCR assays can serve as confirmatory tests to clarify the inconclusive and discordant serological results usually associated with these cases [ 11 ]. For these reasons, including qPCR alongside other screening tests might increase the efficiency of control programs. Additionally, qPCR allows estimation of the BLV PVL, which is important for studying the dynamics of BLV infection (i.e., basic research). Further, considering that BLV PVL correlates with the risk of BLV transmission, this feature of qPCR can be exploited to develop rational segregation programs [ 12 , 13 ]. The results of Kobayashi et al. suggest that a high PVL is also a significant risk factor for progression to EBL and should therefore be used as a parameter to identify cattle for culling from the herd well before EBL develops [ 14 ]. Several qPCRs have been developed globally for the quantitation of BLV DNA. Although most assays have been properly validated by their developers, proper standardization and harmonization of these tests is currently lacking. Because standardization and harmonization of qPCR methods and results are essential for comparing data from BLV laboratories around the world, this gap directly affects international surveillance programs and collaborative research. We therefore built a global collaborative network of BLV reference laboratories to evaluate the interlaboratory variability of different qPCRs and sponsored a harmonization of assays intended to benefit international surveillance programs and research going forward.

In 2018 we conducted the first global trial of this kind, assessing the interlaboratory variability of six qPCRs for the detection of BLV DNA [ 15 ]. Since this complex process is continuous rather than a one-time effort, we have now conducted a second study of this type. In this follow-up study, we built a more comprehensive sample panel accounting for broader geographical diversity. Additionally, we increased the number of participants to ten collaborating laboratories plus one WOAH reference laboratory and tested novel methodologies, including digital PCR (ddPCR) and FRET-qPCR. Finally, we established the next steps towards the international standardization of molecular assays for the detection of BLV DNA.

Materials and methods

Participants

The eleven laboratories that took part in the study were: (i) the Auburn University College of Veterinary Medicine (Auburn, Alabama, United States); (ii) AntelBio, a division of CentralStar Cooperative (Michigan, United States); (iii) Laboratórios Federais de Defesa Agropecuária de Minas Gerais (LFDA-MG, Pedro Leopoldo, Brazil); (iv) Centro de Investigación Veterinaria de Tandil (CIVETAN, Buenos Aires, Argentina); (v) the Faculty of Agriculture, Iwate University (Iwate, Japan); (vi) Universidad de la República de Uruguay (UdelaR, Montevideo, Uruguay); (vii) the Croatian Veterinary Institute (Zagreb, Croatia); (viii) Instituto Nacional de Tecnología Agropecuaria (INTA, Buenos Aires, Argentina); (ix) Laboratorio Central de Veterinaria (LCV, Madrid, Spain); (x) the National Veterinary Research Institute (NVRI, Puławy, Poland); and (xi) the French Agency for Food, Environmental and Occupational Health and Safety (Anses, Niort, France). All European laboratories participating in this study act as national reference laboratories for EBL, the NVRI acts as the WOAH reference laboratory for EBL, and the remaining laboratories are nationally renowned entities for BLV diagnostics. The eleven participating methods are referred to below as qPCR1–qPCR5, ddPCR6, and qPCR7–qPCR11, respectively.

Sample collection and DNA extraction

A total of 42 DNA samples obtained from the blood of naturally BLV-infected dairy cattle from Poland, Moldova, Pakistan, Ukraine, Canada and the United States were used for this study. Thirty-six of them were archival DNA samples obtained between 2012 and 2018, as described in our previous studies, from Poland ( n  = 21) [ 16 , 17 ], Moldova ( n  = 4) [ 18 ], Pakistan ( n  = 5) [ 19 ] and Ukraine ( n  = 6) [ 15 , 20 ]. Between 2020 and 2021, six peripheral blood and serum samples from naturally BLV-infected cattle were obtained from three dairy farms in Alberta, Canada and two dairy farms in Michigan, US. Serological testing and sample processing were conducted by the laboratories from which the samples originated. The genomic DNA from the Canadian and US samples was extracted from whole blood using a Quick DNA Miniprep Plus kit (Zymo Research) and a DNeasy Blood & Tissue Kit (Qiagen) at the University of Calgary and Michigan State University, respectively, and sent to the NVRI in the form of DNA solutions. Additionally, one plasmid DNA sample (pBLV344), kindly supplied by Luc Willems (University of Liège, Belgium), and DNA extracted from FLK-BLV cells were included as positive controls. Finally, DNA extracted from the PBL of a serologically negative animal was included as a negative control. At the NVRI, the DNA concentration in all samples was estimated by spectrophotometry using a NanoPhotometer (Implen). Each sample was divided into eleven identical aliquots containing between 800 and 4,000 ng of genomic DNA. The eleven identical sets of these samples were lyophilized (Alpha 1–4 LSC basic, Martin Christ Gefriertrocknungsanlagen GmbH) and distributed to the participating laboratories. At the NVRI, all samples were coded (identification [ 21 ] run numbers 1 to 44) to allow blinded testing. The samples, together with instructions for their preparation (Additional file 1), were shipped by air at room temperature (RT).

Examination of DNA quality/stability

Since different extraction methods and a lyophilization process were employed in preparing the DNA samples, it was necessary to test the quality of the DNA at the NVRI laboratory. For that purpose, one complete set of samples ( n  = 44) was tested on a Fragment Analyzer (Agilent Technologies), before and after freeze-drying, to assess DNA quality by calculating a Genomic Quality Number (GQN) for every sample. A low GQN value (< 2.5) indicates sheared or degraded DNA, while a high GQN (> 9) indicates undegraded DNA. In addition, DNA quality was assessed by determining the copy number of the histone H3 family 3A ( H3F3A ) housekeeping gene using quantitative real-time PCR (qPCR) [ 22 ]. The qPCR results were expressed as the number of H3F3A gene copies per 300 ng of DNA in each sample. Grubbs' test was performed to identify outliers. To test the stability of the DNA, samples were stored for 20 days, at RT (10 days) and at +4 °C (10 days), and were retested by Fragment Analyzer and qPCR 21 days later. A Mann–Whitney U-test was used to compare the median values between fresh and stored samples (time 0 and time 1), respectively.
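As an illustration of the stability comparison described above, here is a sketch of a Mann–Whitney U-test on hypothetical H3F3A copy-number readings. The values are invented; the study's actual data are not reproduced here:

```python
from scipy import stats

# Hypothetical H3F3A copy numbers (per 300 ng DNA) before and after 20-day storage
fresh  = [1520, 1480, 1610, 1390, 1550, 1470, 1600, 1510]
stored = [1490, 1450, 1580, 1370, 1530, 1460, 1570, 1500]

# Two-sided Mann-Whitney U-test comparing the two groups' distributions
u, p = stats.mannwhitneyu(fresh, stored, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
if p >= 0.05:
    print("No significant shift in copy numbers -> consistent with stable DNA.")
```

The Mann-Whitney test is a sensible choice here because copy-number data from a small sample set need not be normally distributed.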

Description of BLV qPCR protocols used by participating laboratories

All participating laboratories performed their qPCR or ddPCR using a variety of different equipment, reagents, and reaction conditions, which had been set up, validated, and evaluated previously and are currently used as working protocols. The specific features of each of these protocols are described below and summarized in Table  1 .

All laboratories applied standard procedures for avoiding false-positive results caused by DNA contamination, such as the use of separate rooms for preparing reaction mixtures, adding the samples, and performing the amplification reaction. One of the ten BLV qPCRs used the LTR region and the remaining nine used the pol gene as the target sequence for amplification, while the ddPCR amplified the env gene.

Method qPCR1

The BLV qPCR, amplifying a 187-bp fragment of the pol gene, was performed according to previously published methods [23, 24]. A real-time fluorescence resonance energy transfer (FRET) PCR was carried out in a 20-μl PCR mixture containing 10 μl of a handmade reaction master mix and 10 μl of genomic DNA. The PCR buffer was 4.5 mM MgCl2, 50 mM KCl, 20 mM Tris–HCl, pH 8.4, supplemented with 0.05% each of Tween 20 and Nonidet P-40, and 0.03% acetylated BSA (Roche Applied Science). For each 20-μl total reaction volume, the nucleotides were used at 0.2 mM each and 1.5 U of Platinum Taq DNA polymerase (Invitrogen, Carlsbad, CA, USA) was used. Primers were used at 1 μM, the LCRed640 probe at 0.2 μM, and the 6-FAM probe at 0.1 μM. Amplification was performed in the Roche LightCycler 480 II (Roche Molecular Biochemicals) using a 10-min denaturation step at 95 °C, followed by 18 high-stringency step-down thermal cycles and 30 low-stringency fluorescence acquisition cycles.

A plasmid containing the BLV-PCR amplicon region was diluted tenfold from 1 × 10^5 copies to 10 copies per 10 µl and was used as a standard to measure the BLV copy numbers.
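Copy-number estimation from such a plasmid dilution series can be sketched with a small example: fit log10(copies) against Cq for the standards, then read unknowns off the fitted line. The Cq values below are hypothetical, generated assuming 100% amplification efficiency (a slope of about -3.32 cycles per log10), not values from any of the assays in this study.

```python
import math

def fit_standard_curve(standards):
    """Least-squares fit of log10(copies) against Cq for a dilution
    series given as (copies, Cq) pairs; returns (slope, intercept)."""
    xs = [cq for _, cq in standards]
    ys = [math.log10(copies) for copies, _ in standards]
    n = len(standards)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def copies_from_cq(cq, slope, intercept):
    """Interpolate an unknown sample's copy number from its Cq."""
    return 10 ** (intercept + slope * cq)

# Hypothetical tenfold dilution series, 10^5 down to 10 copies,
# with Cq values simulated at 100% efficiency
standards = [(10 ** k, 38.0 - 3.3219 * k) for k in range(5, 0, -1)]
slope, intercept = fit_standard_curve(standards)
unknown = copies_from_cq(30.0, slope, intercept)  # copies per reaction
```

In practice the fitted slope also yields the reaction efficiency (E = 10^(-1/slope) - 1), which laboratories commonly report alongside the standard curve.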

Method qPCR2

A BLV proviral load qPCR assay was developed by AntelBio, a division of CentralStar Cooperative Inc., on the Applied Biosystems 7500 Real-Time PCR System [25, 33]. This multiplex assay amplifies the BLV pol gene along with the bovine β-actin gene and an internal amplification control ("Spike"). A quantitative TaqMan PCR was carried out in a 25-μl PCR mixture containing 12.5 µl of 2X InhibiTaq Multiplex HotStart qPCR MasterMix (Empirical Bioscience), 16 nM each BLV primer, 16 nM each β-actin primer, 8 nM each spike primer, 8 nM BLV FAM-probe, 8 nM β-actin Cy5-probe, 4 nM spike JOE-probe, 1 µl of an internal spike-in control (10,000 copies per µl), 7.25 µl of nuclease-free water and 4 µl of DNA sample per qPCR reaction. The thermal PCR protocol was as follows: 95 °C for 10 min, 40 × (95 °C for 15 s, 60 °C for 1 min). Copy numbers of both the BLV pol gene and bovine β-actin were derived using a plasmid containing the target sequences, quantified by ddPCR and diluted in tenfold steps from 1 × 10^6 to 10 copies per µl. DNA concentrations of each sample were measured using a Qubit 4 Fluorometer and used in combination with the qPCR copy numbers to calculate BLV copies per 100 ng.
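The final normalization step (combining a fluorometric DNA concentration with per-reaction copy counts) reduces to a one-line calculation; the numbers in the example are hypothetical and do not correspond to any sample in the panel.

```python
def copies_per_100ng(copies_per_reaction, dna_conc_ng_per_ul, template_vol_ul):
    """Scale a per-reaction copy count to copies per 100 ng of input DNA,
    given the sample concentration and the template volume loaded."""
    input_ng = dna_conc_ng_per_ul * template_vol_ul
    return copies_per_reaction / input_ng * 100.0

# e.g. 5,000 copies detected in a reaction loaded with 4 µl of a
# 25 ng/µl sample (100 ng of input DNA) -> 5,000 copies per 100 ng
norm = copies_per_100ng(5000, 25.0, 4.0)
```

Because the assays in this trial loaded very different template amounts (from 1 µl to 10 µl, at varying concentrations), this normalization is what makes their copy numbers comparable at all.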

Method qPCR3

The qPCR assay for the BLV LTR region was performed according to a previously published method [26]. Genomic DNA was amplified by TaqMan PCR with 10 μl of 2 × GoTaq Probe qPCR Master Mix (Promega), 0.6 pmol/μl of each primer, 0.3 pmol/µl of a double-quenched probe and 100 ng of genomic DNA. Amplification was performed in the CFX96 cycler (Bio-Rad) according to the following protocol: 5 min denaturation at 95 °C followed by 45 cycles (60 s at 94 °C and 60 s at 60 °C). The efficiency of each reaction was calculated from a serial dilution of DNA extracted from BLV persistently infected fetal lamb kidney (FLK) cells, starting at a concentration of 100 ng/µl [21]. The detection limit was tested using a plasmid containing the target of the qPCR, starting at 10^3 ng/µl.

Method qPCR4

The quantitative real-time PCR was performed with primers for the BLV pol gene as previously described [34]. The qPCR reaction mix contained 1 × PCR Master Mix with SYBR Green (FastStart Universal SYBR Green Master Rox, Roche), 0.3 μM of each primer and 30 ng of extracted genomic DNA. Amplification was performed in the QuantStudio 5 Real-Time PCR System (Applied Biosystems) under the following conditions: 2 min at 50 °C, 10 min at 95 °C, and 40 cycles of 15 s at 95 °C and 60 s at 60 °C. A standard curve of six tenfold serial dilutions of pBLV, containing 1 × 10^6 to 10 BLV copies, was built and run three times for validation of the method. The number of provirus copies per reaction (100 ng) was calculated.

Method qPCR5

BLV PVLs were determined using a commercial qPCR kit, RC202 (Takara Bio, Shiga, Japan) [28, 35]. This qPCR assay amplifies the BLV pol gene along with the bovine RPPH1 gene as an internal control. Briefly, 100 ng of genomic DNA was amplified by TaqMan PCR with four primers for the pol and RPPH1 genes according to the manufacturer's instructions: 30 s denaturation at 95 °C followed by 45 cycles (5 s at 95 °C and 30 s at 60 °C). The qPCR was performed on a QuantStudio 3 Real-Time PCR System (Thermo Fisher Scientific K.K., Tokyo, Japan). A standard curve was generated from tenfold serial dilutions of the standard plasmid included in the kit. The calibration standards ranged from 1 to 10^6 copies/reaction and were run in duplicate. The number of provirus copies per 100 ng was calculated.

Method ddPCR6

The droplet digital PCR (ddPCR) assay for the env gene of BLV was performed using a previously described protocol [28, 29]. Absolute quantification by TaqMan ddPCR was performed in a typical 20-μl assay: 1 μl of DNA sample was mixed with 1 μl of each primer (10 μM), 0.5 μl of probe (10 μM), and 2 × Supermix, and emulsified with oil (Bio-Rad). The droplets were transferred to a 96-well plate (Eppendorf). The PCR assay was performed in a thermocycler (C1000 Touch cycler; Bio-Rad) with the following parameters: initial denaturation for 10 min at 95 °C, then 40 cycles of 30 s at 94 °C and 1 min at 58 °C, with final deactivation of the enzyme for 10 min at 98 °C. Positive events were counted as fluorescent droplets and analyzed in the software (QuantaSoft v.1.7.4; Bio-Rad) using dot charts. The number of provirus copies per 100 ng was calculated. Each sample was run in duplicate, and the results were averaged.

Method qPCR7

This qPCR method for the BLV pol gene is a modified version of the widely used quantitative TaqMan qPCR described by Rola-Łuszczak et al. [11], using the same primers and standards. A quantitative TaqMan PCR was performed in a 20-μl PCR mix containing 10 μl of 2 × ORA qPCR Probe ROX L Mix (highQu, Kraichtal, Germany), 2 μl of primer/probe mix (final concentrations: 400 nM of each primer, 200 nM of the BLV probe), and 3 μl of extracted genomic DNA. Amplification was performed in the Rotor-Gene Q system (Qiagen) with an initial denaturation and polymerase activation step at 95 °C for 3 min, followed by 45 cycles of 95 °C for 5 s and 60 °C for 30 s. As a standard, plasmid pBLV1 (NVRI, Pulawy, PL) containing a BLV pol fragment was used. Tenfold dilutions of plasmid DNA were made from 1 × 10^10 copies to 1 × 10^1 copies per reaction and used to generate the standard curve and estimate the BLV copy number per 100 ng.

Method qPCR8

Proviral load was quantified by SYBR Green real-time quantitative PCR (qPCR) using the pol gene as the target sequence [36]. Briefly, a 12-μl PCR mixture contained FastStart Universal SYBR Green Master Mix (Roche), 800 nM of each BLV pol primer and 1 µl of DNA as template. The reactions were incubated at 50 °C for 2 min and 95 °C for 10 min, followed by 40 cycles at 95 °C for 15 s, 55 °C for 15 s and 60 °C for 1 min. All samples were tested in duplicate on a StepOnePlus machine (Applied Biosystems). A positive and a negative control, as well as a no-template control, were included in each plate. After the reaction was completed, the specificity of the amplicons was checked by analyzing the individual dissociation curves. As a standard, plasmid pBLV1 (NVRI, Pulawy, PL) containing a BLV pol fragment was used. Tenfold dilutions of plasmid DNA were made from 1 × 10^6 to 10 copies per µl and used to generate the standard curve and estimate the BLV copy number per 100 ng.

Method qPCR9

This qPCR method is a modified version of the widely used quantitative TaqMan qPCR described by Rola-Łuszczak et al. [11], using the same primers and standards. The detection of the BLV genome was combined with an endogenous control system (Toussaint 2007) in a duplex assay. Briefly, a 20-µl qPCR reaction contained AgPath-ID One-Step RT-PCR Reagents with ROX (Applied Biosystems, CA, USA) – 10 µl of 2 × RT-PCR buffer and 0.8 µl of 25 × RT-PCR enzyme mix – 400 nM of each pol gene primer, 100 nM of the BLV-specific probe, 40 nM of each β-actin primer, 40 nM of the β-actin-specific probe and 2 µl of DNA sample. All samples were tested in the ABI 7500 Real-Time PCR System (Applied Biosystems) according to the following protocol: 10 min at 48 °C (reverse transcription), 10 min at 95 °C (inactivation of reverse transcriptase / activation of Taq polymerase), followed by 45 cycles (15 s at 95 °C and 60 s at 60 °C). As a standard, plasmid pBLV1 (NVRI, Pulawy, PL) containing a BLV pol fragment was used. Tenfold dilutions of plasmid DNA were made from 1 × 10^4 copies to 0.1 copies per μl and used to generate the standard curve and estimate the BLV copy number per 100 ng.

Method qPCR10

The BLV qPCR was performed as published previously [11]. A quantitative TaqMan PCR was carried out in a 25-μl PCR mixture containing 12.5 μl of 2 × QuantiTect Multiplex PCR NoROX master mix (Qiagen), 0.4 μM of each primer, 0.2 μM of the specific BLV probe, and 500 ng of extracted genomic DNA. Amplification was performed in the Rotor-Gene Q system (Qiagen) using an initial denaturation and polymerase activation step at 95 °C for 15 min, followed by 50 cycles of 94 °C for 60 s and 60 °C for 60 s. All samples were amplified in duplicate. As a standard, the pBLV1 plasmid (NVRI, Pulawy, PL), containing a 120-bp BLV pol fragment, was used. Tenfold dilutions of this standard were made from 1 × 10^6 copies per μl to 100 copies per μl and were used to estimate the BLV copy numbers per 100 ng.

Method qPCR11

This qPCR method for the BLV pol gene is a modified version of the widely used quantitative TaqMan qPCR described by Rola-Łuszczak et al. [11], using the same primers and standards. The reaction mixture contained 400 nM of each primer, 200 nM of the probe, 10 µl of 2 × SsoFast Probes Supermix (Bio-Rad), 5 µl of DNA sample and H2O up to a final volume of 20 µl. PCR assays were carried out on a CFX96 thermocycler (Bio-Rad) under the following amplification profile: 98 °C for 3 min, followed by 45 cycles of 95 °C for 5 s and 60 °C for 30 s. As a standard, plasmid pBLV1 (NVRI, Pulawy, PL) containing a BLV pol fragment was used. Tenfold dilutions of plasmid DNA were used to generate the standard curve and estimate the BLV copy number per 100 ng.

Analysis of BLV pol, env and LTR sequences targeted by particular qPCR/ddPCR assays

To assess full-length pol, env and LTR sequence variability among BLV genotypes, all BLV sequences (n = 2191) available on 30 September 2023 in the GenBank repository ( https://www.ncbi.nlm.nih.gov/GenBank/ ) were retrieved. From the collected sequences, the 100 pol, env and LTR sequences characterized by the highest levels of sequence variability and divergence were selected for further analysis. Pol-based, env-based and LTR-based maximum likelihood (ML) phylogenetic trees (see Additional file 6) were constructed to assign genotypes to the unassigned BLV genomes [37, 38, 39]. For all genes and the LTR region, the Tamura-Nei model and 1,000 bootstrap replications were applied. In this analysis, pol sequences were assigned to 7 BLV genotypes (G1, G2, G3, G4, G6, G9, and G10), while env and LTR sequences were assigned to 10 BLV genotypes (G1, G2, G3, G4, G5, G6, G7, G8, G9, and G10). The phylogeny of the same isolates assigned to particular genotypes by the ML method was confirmed by MrBayes analysis [40, 41, 42] (data not shown). From this analysis, a total of 100 full-length pol, env and LTR sequences were used for multiple-sequence alignment (MSA) with the ClustalW algorithm implemented in MEGA X. For all sequences, nucleotide diversity (π), defined as the average number of nucleotide differences per site between two DNA sequences across all possible pairs in the sample population, was estimated using MEGA X. To measure the relative variation at different positions of the aligned genes and LTR region, Shannon's entropy (a quantitative measure of diversity in the alignment, where H = 0 indicates complete conservation) was estimated using BioEdit v.7.2.5 software. The statistical analyses were performed using DATAtab e.U. (Graz, Austria) and GraphPad Software by Dotmatics (Boston).
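The per-site Shannon entropy used here is H = -Σ p_i log2 p_i over the residue frequencies p_i in each alignment column. A minimal sketch over a toy two-column alignment (not the study's sequences) illustrates the calculation:

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of one alignment column; H = 0 means
    the position is completely conserved."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def per_site_entropy(alignment):
    """Entropy profile across all columns of equal-length sequences."""
    return [column_entropy(col) for col in zip(*alignment)]

# Toy alignment: the first column is conserved (H = 0), the second is
# maximally variable over the four bases (H = 2 bits)
toy = ["AA", "AC", "AG", "AT"]
profile = per_site_entropy(toy)
```

Summing or averaging such a profile over a primer-binding region gives the kind of per-region entropy values reported in the Results.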

Examination of the quality and stability of DNA samples

To test the quality of the DNA samples, the H3F3A copy number of each individual sample was assessed by qPCR at the NVRI. Copy numbers were normalized to DNA mass input, and results were expressed as copy numbers per 300 ng of total DNA. The respective values were tested by Grubbs' test. The results for 43 DNA samples (sample ID 42, containing the BLV genome plasmid, was excluded) followed a normal distribution (Shapiro–Wilk 0.97; P = 0.286), with a mean value of 35,626 copies (95% confidence interval [CI], 33,843 to 37,408 copies), a minimum value of 19,848 copies and a maximum value of 46,951 copies (see Additional file 2). Despite a low value for sample ID 40, no significant outlier was detected in the dataset (P > 0.05). Therefore, it can be assumed that the DNA quality was acceptable for all samples in the panel. Next, DNA stability was assessed by retesting the H3F3A copy numbers in each sample (n = 43) after a combined storage of 10 days at RT and 10 days at +4 °C. A Mann–Whitney U-test was used to compare the median values between fresh and stored samples (time 0 and time 1, respectively), and no significant difference was observed at the 5% level (P = 0.187) (Fig. 1A).

figure 1

Assessment of the stability of DNA samples. A Copy numbers of the H3F3A housekeeping gene in 43 DNA samples that were stored for 10 days at RT and 10 days at +4 °C and tested twice with a 21-day interval. A Mann–Whitney U-test was used to compare the median values between the two groups (P = 0.187); B GQN values (n = 43) tested twice with a 21-day interval: "before freeze-drying" and "after freeze-drying". Mann–Whitney U-test results between the two groups (P = 0.236)

In addition, the quality of the DNA samples after lyophilization was analyzed. DNA from individual samples (n = 43) was assessed with the genomic DNA quality number on the Fragment Analyzer system. The GQN of all lyophilized samples ranged from 4.0 to 9.7, representing undegraded DNA. There was no significant difference in GQN values between the "before freeze-drying" and "after freeze-drying" groups for the corresponding DNA samples (P = 0.236) (Fig. 1B). Altogether, these results suggested that sample storage, lyophilization and shipping had a minimal impact on DNA stability and on further testing during the interlaboratory trial.

Detection of BLV proviral DNA by different qPCR assays

A total of 44 DNA samples, including two positive (ID: 42 and 43) and one negative (ID: 32) controls, were blinded and independently tested by eleven laboratories using their own qPCR methods (Table  2 ). All laboratories measured the concentration of DNA in samples (Additional file 3). BLV provirus copy number was normalized to DNA concentration and expressed per 100 ng of genomic DNA for each test.

Except for the positive (pBLV344 and the FLK cell line) and negative controls, all samples had previously shown detectable levels of BLV-specific antibodies (BLV-Abs) by enzyme-linked immunosorbent assay (ELISA). During the current interlaboratory study, both the positive and negative controls were assessed correctly by all eleven PCR tests. Of the 43 positive samples, 43, 35, 37, 36, 40, 32, 40, 42, 42, 42 and 41 samples were detected as positive by the qPCR1, qPCR2, qPCR3, qPCR4, qPCR5, ddPCR6, qPCR7, qPCR8, qPCR9, qPCR10 and qPCR11 methods, respectively. Based on these observations, the most sensitive method was qPCR1, and the method with the lowest sensitivity was ddPCR6. Twenty-nine out of 44 samples were identified correctly by all qPCRs; the remaining 15 samples gave discordant results. Comparison of the qualitative results (positive versus negative) from all eleven methods revealed 87.33% overall agreement and a kappa value of 0.396 (Cohen's kappa method adapted by Fleiss) [44, 45]. The levels of agreement among the results from the eleven methods are presented in Table 3. The maximum agreement was seen between two methods (qPCR9 and qPCR10; 100% agreement and a Cohen's kappa value of 1.000) that used similar protocols and targeted the same region of BLV pol.
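The multi-rater agreement statistic used here (Cohen's kappa as adapted by Fleiss) can be reproduced with a short function. The count table below is a toy example, not the study's 44-sample by 11-assay result matrix: each row is one sample, each column a verdict category (e.g. positive/negative), and each cell counts how many laboratories gave that verdict.

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a subjects-by-categories count table, where
    table[i][j] is how many raters put subject i in category j and
    every row sums to the same number of raters."""
    n_subjects = len(table)
    n_raters = sum(table[0])
    # Per-subject observed agreement, averaged over subjects
    p_obs = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ) / n_subjects
    # Chance agreement from the marginal category proportions
    totals = [sum(row[j] for row in table) for j in range(len(table[0]))]
    grand = n_subjects * n_raters
    p_exp = sum((t / grand) ** 2 for t in totals)
    return (p_obs - p_exp) / (1 - p_exp)

# Perfect agreement among 3 raters over 3 samples -> kappa = 1
k = fleiss_kappa([[3, 0], [0, 3], [3, 0]])
```

As in the study, a high raw agreement can coexist with a modest kappa when the marginal distribution is skewed (here, nearly all panel samples were positive), because kappa discounts agreement expected by chance.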

Analysis of BLV pol, env and LTR sequences targeted by particular PCR assays

Due to the differences in performance observed among the pol-based qPCR assays (qPCR1, qPCR2, qPCR4, qPCR5 and qPCR7–qPCR11), and considering that the env-based ddPCR6 and LTR-based qPCR3 assays showed the lowest sensitivity and the poorest agreement with the other assays, the degree of sequence variability among the pol and env genes and the LTR region was addressed. From the MSAs for pol, env and LTR, the nucleotide diversity (π) was calculated. The π value for the pol gene was lower than those for the LTR region and the env gene (π_pol, 0.023 [standard deviation {SD}, 0.018]; π_LTR, 0.024 [SD, 0.011]; π_env, 0.037 [SD, 0.013]). From this analysis, pol sequences appeared to be less variable than env and LTR sequences. In addition, we generated a Shannon entropy-based per-site variability profile of the pol, env and LTR sequences used in this study (Fig. 2A-C).

figure 2

Sequence variability measured as per-site entropy. A Multiple alignment of the pol gene showing the locations of the target fragments for qPCR1 (highlighted in pink), qPCR4 (highlighted in yellow) and the qPCR7, qPCR8, qPCR9, qPCR10 and qPCR11 assays (highlighted in orange). B Multiple alignment of the env gene region targeted by ddPCR6 (highlighted by a blue rectangle). C Multiple alignment of the LTR region targeted by qPCR3 (highlighted in mint)

Overall, the entropy plots were homogeneous along the whole sequences. Considering the three targeted regions of the pol gene, the highest entropy (4.67) occurred in the region targeted by the qPCR1 primers, whereas the entropies for the qPCR7–qPCR11 and qPCR4 primers were 1.57 and 0.38, respectively. For the LTR region targeted by the qPCR3 primers and the env gene targeted by ddPCR6, the total entropy was 4.46 and 7.85, respectively. This analysis showed marked regions of variability in the LTR and env fragments. Interestingly, we noted that qPCR7–qPCR11 targeted the most conserved regions of the reverse transcriptase and the qPCR4 primers targeted the most conserved region of the viral integrase (Fig. 2A-C; see also Additional file 7).

Quantitation of BLV proviral DNA by different qPCR/ddPCR assays

To analyze whether the ranges of copy numbers detected by each qPCR were comparable to those of the others, the Kruskal–Wallis one-way analysis of variance (ANOVA) was used. Violin plots were used to visualize the results (Fig. 3A-B).

figure 3

Comparison of the detection of BLV proviral DNA copy numbers by the eleven testing methods. Shown are plots of data from the Kruskal–Wallis ANOVA, a rank-based test. The DNA copy numbers for 41 samples, determined independently by each of the 11 qPCR/ddPCR assays, were used for the variance analysis. The positive controls (sample ID 42 and ID 43) and the negative control (sample ID 32) were excluded. A Violin plot of the ANOVA of proviral copy number values. B Violin plot of the ANOVA with copy number values presented on a logarithmic scale (log1.2) for better illustration of copy number differences between PCR methods

The Kruskal–Wallis test revealed significant differences among the distributions of proviral DNA copy numbers obtained with the various qPCRs (P < 0.001). These results showed that the abilities of the qPCRs/ddPCR to determine the proviral DNA copy number differed. A Dunn-Bonferroni test was used to compare the groups pairwise to find out which were significantly different. The Dunn-Bonferroni test revealed that the pairwise comparisons qPCR2–qPCR4, qPCR3–ddPCR6, qPCR4–qPCR5, qPCR4–ddPCR6, qPCR4–qPCR9, qPCR4–qPCR10, qPCR5–qPCR11, ddPCR6–qPCR11 and qPCR9–qPCR11 had adjusted P values of less than 0.05; thus, it can be assumed that the groups in each of these pairs were significantly different (see Additional file 4). A Pareto chart was used to show the average copy number values of all methods in descending order. The Pareto charts were prepared based on the 80–20 rule, which states that 80% of effects come from 20% of the various causes [46]. The methods that generated the highest copy numbers were qPCR3 and qPCR4; on the other hand, the lowest copy numbers and/or the highest number of negative results were generated by ddPCR6 (Fig. 4).
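The Kruskal–Wallis H statistic underlying this comparison can be sketched in a few lines. This is a minimal version using mid-ranks for ties but omitting the tie-correction factor and the chi-squared P value that the full test would add; the three groups below are toy data, not assay results.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic: rank all observations together
    (average ranks for ties) and compare mean ranks across groups."""
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    # Average rank for each distinct value (handles ties)
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    return 12 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

# Three fully separated groups of three values each
h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```

H is then referred to a chi-squared distribution with (number of groups - 1) degrees of freedom; post-hoc pairwise comparisons such as Dunn-Bonferroni operate on the same pooled ranks.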

figure 4

A Pareto chart with the mean proviral BLV copy values for the eleven PCR assays arranged in descending order. The chart was prepared based on the 80–20 rule, which states that 80% of effects come from 20% of the various causes

The correlations between copy numbers detected by the different qPCR and ddPCR assays were calculated. The Kendall's tau correlation coefficient between each pair of assays is shown in Additional file 5 and in Fig. 5 as a correlation heatmap. The average correlation across all qPCR and ddPCR assays was strong (Kendall's tau = 0.748; P < 0.001).
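Kendall's tau between two assays' copy-number vectors counts concordant versus discordant sample pairs. A minimal tau-a sketch is shown below (the study's heatmap values may use a tie-corrected variant such as tau-b, which matters when many samples share a zero count); the vectors are toy data.

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs.
    Tied pairs contribute zero to the numerator."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = (x[i] > x[j]) - (x[i] < x[j])  # sign of the x difference
            dy = (y[i] > y[j]) - (y[i] < y[j])  # sign of the y difference
            s += dx * dy
    return s / (n * (n - 1) / 2)

# Identical rankings -> tau = 1; fully reversed rankings -> tau = -1
t_same = kendall_tau([1, 2, 3, 4], [1, 2, 3, 4])
t_rev = kendall_tau([1, 2, 3, 4], [4, 3, 2, 1])
```

Because tau depends only on rank order, it is well suited to comparing assays whose absolute copy numbers differ systematically, as observed here.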

figure 5

Heatmap of Kendall's tau correlation coefficients between copy numbers detected by the ten qPCRs and one ddPCR. Despite statistically significant differences in the distribution of copy numbers, moderate, strong and very strong correlations between particular qPCRs/ddPCR were observed. Regarding the strength of the association, for absolute values of r, 0–0.19 is regarded as very weak, 0.2–0.39 as weak, 0.40–0.59 as moderate, 0.6–0.79 as strong and 0.8–1 as very strong correlation

Since the differences between PCR tests may be influenced by the number of BLV proviral copies present in each sample, we compared the average number of BLV copies between a group of genomic DNA samples that gave concordant results (group I [n = 28]) and a group that gave discordant results (group II [n = 15]). The mean number of copies was 73,907 (minimum, 0; maximum, 4,286,730) in group I and 3,479 (minimum, 0; maximum, 218,583) in group II, and this difference was statistically significant (P < 0.001 by a Mann–Whitney U-test) (Fig. 6).

figure 6

Impact of BLV proviral copy numbers on the level of agreement. Violin plot for graphical presentation of Mann–Whitney U test. The test was performed to compare BLV provirus copy number in two groups of samples: 28 samples with fully concordant results from all eleven qPCR/ddPCR assays (left) and 15 samples with discordant results from different qPCR/ddPCR assays (right) ( P  < 0.001). Sample ID 42 was excluded from the statistical analysis

The results show that the concordant results group had considerably higher copy numbers (median, 5,549.0) than the discordant results group (median, 6.3).
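The group comparison above can be reproduced with a compact Mann–Whitney U computation. This sketch returns only the U statistic via the direct pair-count definition (ties count one half); obtaining the P value additionally requires the null distribution or a normal approximation, and the group values below are toy data.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: for group a, count the pairs (x, y)
    with x from a and y from b where x > y, adding 0.5 per tie; the
    reported U is the smaller of the two groups' counts."""
    u_a = sum(
        1.0 if x > y else 0.5 if x == y else 0.0
        for x in a for y in b
    )
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)

# Completely separated groups give U = 0, the strongest possible
# evidence of a shift between the two distributions
u = mann_whitney_u([1, 2, 3], [10, 20, 30])
```

Like Kendall's tau and Kruskal–Wallis, the test uses only ranks, so it is robust to the highly skewed copy-number distributions seen in these data.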

BLV control and eradication programs consist of the correct identification and subsequent segregation/elimination of BLV-infected animals [47]. Detection of BLV-infected cows by testing for BLV-specific antibodies in serum by agar gel immunodiffusion and ELISA is the key step and the standard implemented in EBL eradication programs according to WOAH ( https://www.woah.org/en/disease/enzootic-bovine-leukosis/ ) [9]. Despite the low cost and high throughput of serological tests, there are several scenarios in which highly specific and sensitive molecular assays for the detection of BLV DNA might improve detection and program efficiency.

In this perspective, qPCR assays can detect small quantities of proviral DNA during acute infection, when animals show very low levels of anti-BLV antibodies [43, 48, 49, 50]. qPCR methods can also serve as confirmatory tests to clarify ambiguous and inconsistent serological test results [11]. Such quantitative features of qPCRs are crucial as eradication programs progress and prevalence decreases. Moreover, qPCR allows not only the detection of BLV infection but also estimation of the BLV PVL, which directly correlates with the risk of disease transmission [51, 52]. This feature of qPCR allows for a rational segregation of animals based on the stratified risk of transmission. These considerations allow for greater precision in the management of BLV within large herds with a high prevalence of BLV ELISA-positive animals, to effectively reduce herd prevalence [13, 53]. BLV is a global burden and, as the first interlaboratory trial performed in 2018 showed [15], the lack of technical standardization of molecular detection systems remains a major obstacle to comparing surveillance data globally. In the 2018 study, we observed an adjusted level of agreement of 70% when comparing qualitative qPCR results; however, inconsistencies among methods were larger when low numbers of copies of BLV DNA were compared. Samples with low copies of BLV DNA (< 20 copies per 100 ng) accounted for the higher variability and discrepancies among tests. We concluded from the first interlaboratory trial that standardizing protocols to improve the sensitivity of assays with lower detection rates was necessary.

In this follow-up study, we re-tested the TaqMan BLV qPCR developed and validated by the NVRI (acting as a WOAH reference laboratory) and the assay adapted from this original protocol for use with SYBR Green dye, which allows a significant reduction in costs [11]. Another three laboratories also performed the NVRI's qPCR with slight modifications (e.g., Spain performed a multiplex assay for internal normalization). The remaining six laboratories introduced novel methodologies to the trial, including one ddPCR (UY).

To compare the different qPCR methods, a more comprehensive sample panel, accounting for greater geographical diversification, was used in this trial. The amounts of BLV DNA in these samples were representative of the different BLV proviral loads found in field samples (from 1 to > 10,000 copies of BLV proviral DNA). Of note, 34% of the reference samples had fewer than 100 copies of BLV DNA per 100 ng; samples were lyophilized to ensure better preservation and reduced variability during distribution to participants around the globe.

The panel included a single negative control and two positive controls. Diagnostic sensitivity (DxSn) was estimated for each qPCR. Considering the 43 positive samples, the DxSn values for the different qPCRs were: qPCR1 = 100%, qPCR2 = 82%, qPCR3 = 86%, qPCR4 = 84%, qPCR5 = 93%, ddPCR6 = 74%, qPCR7 = 93%, qPCR8 = 98%, qPCR9 = 98%, qPCR10 = 98% and qPCR11 = 95%. The most sensitive method was qPCR1, and the method with the lowest sensitivity was ddPCR6. Twenty-nine out of 44 samples were identified correctly by all qPCRs; the remaining 15 samples gave discordant results. The comparison of qualitative qPCR results among all raters revealed an overall observed agreement of 87%, indicating strong interrater reliability (Cohen's kappa = 0.396) [54, 55].

Several factors contribute to variability in qPCR results (e.g., number of copies of target input; sample acquisition, processing, storage and shipping; DNA purification; target selection; assay design; calibrator; data analysis). For that reason, and as expected, the level of agreement among the sister qPCRs (qPCR7, qPCR9–qPCR11) sharing similar protocols was higher than with the rest of the assays; this was also true for qPCR8, which targets the same region of the BLV pol gene (sharing the same primers) but has a particular set-up for use with SYBR Green chemistry. Conversely, lower sensitivity and larger discrepancies against the other tests were observed for ddPCR6 and qPCR2–qPCR4.

Based on these observations, we investigated which factors might have accounted for the larger assessment variability among tests. In the first place, we observed that the use of different chemistries was not detrimental to the sensitivity of, and agreement among, tests; similar DxSn and comparable levels of agreement were obtained when comparing TaqMan (qPCR7, 10, 11) and SYBR Green (qPCR8) chemistries targeting the identical BLV sequence and using the same standards. Also, when a multiplex qPCR (TaqMan) targeting the same BLV sequence and using the same standard was compared to the previous ones, agreement remained high, indicating that the lower sensitivity described for some multiplex qPCRs did not occur in this comparison. The use of an international calibrator and the efficiency estimation (standard curve) might inform variability associated with different chemistries. In contrast, another multiplex assay targeting a different region of BLV pol (qPCR2) showed much lower sensitivity and agreement. As qPCR2 is performed as a service by a private company and the oligonucleotide sequences were not available, we were not able to investigate in which proportion each of these two variables contributed to the lower performance of this assay, but we note that the addition of 4 µl of genomic DNA to this assay would have had an impact on the DxSn. In this regard, there is substantial evidence showing that the variability of the target sequence among strains from different geographical areas might affect the sensitivity of BLV qPCRs. Previous studies comparing the pol, gag, tax and env genes reported that the pol gene was the most suitable region to target for diagnostic purposes, since it provided the most sensitive assays [11, 15, 56, 57, 58, 59]. This might be due in part to the higher sequence conservation of pol among strains from different geographical areas.
Supporting this observation, it is noticeable how the JPN qPCR improved its performance in the current trial by targeting pol in place of tax, which it targeted in the previous interlaboratory trial. Since it is a commercial test, we cannot exclude other factors contributing to the performance upgrade observed for this qPCR. In the current study, qPCR3 and ddPCR6, targeting the LTR and env sequences, showed lower performance than the other assays. Standardization of the DNA input into each qPCR would likely have resulted in higher concordance of results. For instance, qPCR1 added 10 µl of genomic DNA per reaction whereas ddPCR6 added 1 µl, impacting the resulting sensitivity differences.

Since the sensitivity of each assay and, consequently, the level of agreement among assays might also be influenced by the number of BLV DNA copies present in each sample [48], we compared the average number of BLV DNA copies between a group of genomic DNA samples that gave concordant results and a group that gave discordant results, and observed that the samples giving discordant results had significantly lower numbers of BLV DNA copies. Related to this point, the degradation of target DNA during lyophilization, shipment and resuspension could have been more significant in low-copy than in high-copy samples. Consequently, the degradation of target DNA in samples with low copies of BLV DNA might have accounted for the greater level of discrepancy within this subset of samples. The rationale for including a large proportion of such samples (34% of samples with fewer than 100 BLV copies per 100 ng of total DNA) was to mimic what is frequently observed in surveillance programs (i.e., hyperacute infection, chronic asymptomatic infection, etc.).

Quantitative methods for the detection of BLV DNA copies are important for segregation programs based on an animal's BLV PVL, as well as for scientific research and the study of BLV dynamics. When the numbers of BLV DNA copies detected by the different assays were compared in the present study, we observed that although the ability to quantify BLV DNA differed among qPCRs/ddPCR, and there were statistically significant differences in the distribution of copy numbers among assays, a strong average correlation was found across the eleven qPCRs/ddPCR. In this regard, the lack of an international calibrator (standard curve) could be a major contributor to the quantitative variation among laboratories. For that reason, plasmid pBLV1, containing a 120 bp pol sequence, was originally constructed for use as a quantification standard and shared with some collaborators (i.e., qPCR7, qPCR8, qPCR9, qPCR10 and qPCR11). Remarkably, the laboratories that used the pBLV1 standard in the current trial obtained the most comparable results, indicating that the use of an international standard may have a significant impact on the convergence of results; such standard reference material should be prepared under identical conditions. To avoid further variability, a detailed protocol for resuspension of the lyophilized DNA samples, quantitation and template input into each qPCR should be shared with all participants.
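Rank correlation of the kind reported here (Kendall's tau; see Additional file 5 for the study's pairwise coefficients) can be sketched as follows. The two assays' copy numbers below are hypothetical:

```python
# Sketch: Kendall's tau-a between BLV copy numbers reported by two assays on
# the same samples. Values are illustrative, not the study's data.
from itertools import combinations

def sign(v):
    return (v > 0) - (v < 0)

def kendall_tau_a(x, y):
    """(concordant pairs - discordant pairs) / total pairs; no tie correction."""
    n = len(x)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i, j in combinations(range(n), 2))
    return s / (n * (n - 1) / 2)

assay_a = [120, 540, 60, 2300, 15, 900]   # copies per 100 ng, assay A
assay_b = [150, 480, 75, 1900, 20, 1100]  # same samples, assay B

tau = kendall_tau_a(assay_a, assay_b)
print(tau)  # 1.0: identical sample ranking despite different absolute counts
```

A rank correlation is well suited to this setting because two assays can rank samples identically (tau = 1) while reporting systematically different absolute copy numbers, which is exactly the pattern an international calibrator is meant to remove.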

Conclusions

BLV DNA was detected with different levels of sensitivity in serologically positive samples of different origins, classified into different BLV genotypes. Overall agreement was high; however, we found significant differences in results for samples with low BLV DNA copy numbers. This second interlaboratory study demonstrated that differences in target sequence, DNA input and calibration-curve standards can increase interlaboratory variability considerably. Next steps should focus on (i) unification of standards (an international gold standard) to estimate individual test efficiency and improve quantitative accuracy among tests; and (ii) building a new panel of samples with low BLV DNA copy numbers to re-evaluate the sensitivity and quantitation of molecular methods. Since no variation was observed among samples from different genotypes, all samples will be collected in Poland to standardize the collection, purification, lyophilization and shipping steps, with precise instructions for resuspension and a constant input volume for the PCR reaction. Finally, we believe that following this standardization approach will improve overall agreement among tests, improving BLV diagnostics around the world.

Availability of data and materials

Not applicable.

Data availability

No datasets were generated or analysed during the current study.

Abbreviations

One-way analysis of variance

Bovine leukemia virus

BLV-specific antibodies

Digital PCR

Diagnostic sensitivity

Enzootic bovine leukosis

Enzyme-linked immunosorbent assays

Real-time fluorescence resonance energy transfer PCR

Genomic quality number

Histone H3 family 3A housekeeping gene

Maximum likelihood phylogenetic tree

Multiple-sequence alignment

Peripheral blood leukocytes

Phosphate-buffered saline

Proviral load

Quantitative real-time PCR

Room temperature

World Organisation for Animal Health

References

Coffin JM, Hughes SH, Varmus HE, editors. Retroviruses. Cold Spring Harbor: Cold Spring Harbor Laboratory Press; 1997. p. 1650–1655.

Ghysdael J, Bruck C, Kettmann R, Burny A. Bovine leukemia virus. Curr Top Microbiol Immunol. 1984;112:1–19.


Ott SL, Johnson R, Wells SJ. Association between bovine-leukosis virus seroprevalence and herd-level productivity on US dairy farms. Prev Vet Med. 2003;61:249–62.


Bartlett PC, et al. Options for the control of bovine leukemia virus in dairy cattle. J Am Vet Med Assoc. 2014;244:914–22.


Kuczewski A, et al. Economic evaluation of 4 bovine leukemia virus control strategies for Alberta dairy farms. J Dairy Sci. 2019;102:2578–92.

Frie MC, Coussens PM. Bovine leukemia virus: a major silent threat to proper immune responses in cattle. Vet Immunol Immunopathol. 2015;163:103–14.

EFSA Panel on Animal Health and Welfare (AHAW). Scientific opinion on enzootic bovine leukosis. EFSA J. 2015;13:4188.


OIE. World Animal Health Information Database, Version 1.4. Paris, France: World Organisation for Animal Health; 2009. Available from: http://www.oie.int. Accessed 16 Aug 2024.

World Organisation for Animal Health. Manual of diagnostic tests and vaccines for terrestrial animals. 2012;12:549–65.

Hutchinson HC, et al. Bovine leukemia virus detection and dynamics following experimental inoculation. Res Vet Sci. 2020;133:269–75.

Rola-Luszczak M, Finnegan C, Olech M, Choudhury B, Kuzmak J. Development of an improved real time PCR for the detection of bovine leukaemia provirus nucleic acid and its use in the clarification of inconclusive serological test results. J Virol Methods. 2013;189:258–64.

Nakada S, Kohara J, Makita K. Estimation of circulating bovine leukemia virus levels using conventional blood cell counts. J Dairy Sci. 2018;101:11229–36.

Ruggiero VJ, Bartlett PC. Control of Bovine Leukemia Virus in Three US Dairy Herds by Culling ELISA-Positive Cows. Vet Med Int. 2019;2019:3202184.


Kobayashi T, et al. Increasing Bovine leukemia virus (BLV) proviral load is a risk factor for progression of Enzootic bovine leucosis: A prospective study in Japan. Prev Vet Med. 2020;178: 104680.


Jaworski JP, Pluta A, Rola-Łuszczak M, McGowan SL, Finnegan C, Heenemann K, Carignano HA, Alvarez I, Murakami K, Willems L, Vahlenkamp TW, Trono KG, Choudhury B, Kuźmak J. Interlaboratory Comparison of Six Real-Time PCR Assays for Detection of Bovine Leukemia Virus Proviral DNA. J Clin Microbiol. 2018;56(7):e00304-18. https://doi.org/10.1128/JCM.00304-18.

Pluta A, Rola-Luszczak M, Douville RN, Kuzmak J. Bovine leukemia virus long terminal repeat variability: identification of single nucleotide polymorphisms in regulatory sequences. Virol J. 2018;15:165.


Pluta A, Willems L, Douville RN, Kuźmak J. Effects of Naturally Occurring Mutations in Bovine Leukemia Virus 5'-LTR and Tax Gene on Viral Transcriptional Activity. Pathogens. 2020;9(10):836. https://doi.org/10.3390/pathogens9100836.

Pluta A, et al. Molecular characterization of bovine leukemia virus from Moldovan dairy cattle. Arch Virol. 2017;162:1563–76.

Rola-Łuszczak M, Sakhawat A, Pluta A, Ryło A, Bomba A, Bibi N, Kuźmak J. Molecular Characterization of the env Gene of Bovine Leukemia Virus in Cattle from Pakistan with NGS-Based Evidence of Virus Heterogeneity. Pathogens (Basel, Switzerland). 2021;10(7):910. https://doi.org/10.3390/pathogens10070910.

Rola-Luszczak M, et al. The molecular characterization of bovine leukaemia virus isolates from Eastern Europe and Siberia and its impact on phylogeny. PLoS ONE. 2013;8: e58705.

Pinheiro de Oliveira TF, et al. Detection of contaminants in cell cultures, sera and trypsin. Biologicals. 2013;41:407–14.

Pluta A, Blazhko NV, Ngirande C, Joris T, Willems L, Kuźmak J. Analysis of Nucleotide Sequence of Tax, miRNA and LTR of Bovine Leukemia Virus in Cattle with Different Levels of Persistent Lymphocytosis in Russia. Pathogens. 2021;10(2):246. https://doi.org/10.3390/pathogens10020246.

Yang Y, et al. Bovine leukemia virus infection in cattle of China: Association with reduced milk production and increased somatic cell score. J Dairy Sci. 2016;99:3688–97.

DeGraves FJ, Gao D, Kaltenboeck B. High-sensitivity quantitative PCR platform. Biotechniques. 2003;34:106–10, 112–5.

Fonseca Junior AA, et al. Evaluation of three different genomic regions for detection of bovine leukemia virus by real-time PCR. Braz J Microbiol. 2021;52:2483–8.

Farias MVN, et al. Toll-like receptors, IFN-gamma and IL-12 expression in bovine leukemia virus-infected animals with low or high proviral load. Res Vet Sci. 2016;107:190–5.

Holland PM, Abramson RD, Watson R, Gelfand DH. Detection of specific polymerase chain reaction product by utilizing the 5’––3’ exonuclease activity of Thermus aquaticus DNA polymerase. Proc Natl Acad Sci U S A. 1991;88:7276–80.

De Brun ML, et al. Development of a droplet digital PCR assay for quantification of the proviral load of bovine leukemia virus. J Vet Diagn Invest. 2022;34:439–47.

Rola-Łuszczak M, Finnegan C, Olech M, Choudhury B, Kuźmak J. Development of an improved real time PCR for the detection of bovine leukaemia provirus nucleic acid and its use in the clarification of inconclusive serological test results. J Virol Methods. 2013;189:258–64.

Petersen MI, Alvarez I, Trono KG, Jaworski JP. Quantification of bovine leukemia virus proviral DNA using a low-cost real-time polymerase chain reaction. J Dairy Sci. 2018;101:6366–74.

Toussaint JF, Sailleau C, Breard E, Zientara S, De Clercq K. Bluetongue virus detection by two real-time RT-qPCRs targeting two different genomic segments. J Virol Methods. 2007;140:115–23.

John EE, et al. Development of a predictive model for bovine leukemia virus proviral load. J Vet Intern Med. 2022;36:1827–36.

Farias MVN, et al. Toll-like receptors, IFN-γ and IL-12 expression in bovine leukemia virus-infected animals with low or high proviral load. Res Vet Sci. 2016;107:190–5.

Yoneyama S, et al. Comparative Evaluation of Three Commercial Quantitative Real-Time PCRs Used in Japan for Bovine Leukemia Virus. Viruses. 2022;14:1182.

Polat M, Takeshima SN, Aida Y. Epidemiology and genetic diversity of bovine leukemia virus. Virol J. 2017;14:209.

Lee E, et al. Molecular epidemiological and serological studies of bovine leukemia virus (BLV) infection in Thailand cattle. Infect Genet Evol. 2016;41:245–54.

Duran-Yelken S, Alkan F. Molecular analysis of the env, LTR, and pX regions of bovine leukemia virus in dairy cattle of Türkiye. Virus Genes. 2024;60:173–85.

Lv G, Wang J, Lian S, Wang H, Wu R. The Global Epidemiology of Bovine Leukemia Virus: Current Trends and Future Implications. Animals. 2024;14(2):297. https://doi.org/10.3390/ani14020297.

Úsuga-Monroy C, Díaz FJ, Echeverri-Zuluaga JJ, González-Herrera LG, López-Herrera A. Presence of bovine leukemia virus genotypes 1 and 3 in Antioquia, Colombia. Revista UDCA Actualidad & Divulgación Científica. 2018;21:119–26.

Úsuga-Monroy C, Díaz FJ, González-Herrera LG, Echeverry-Zuluaga JJ, López-Herrera A. Phylogenetic analysis of the partial sequences of the env and tax BLV genes reveals the presence of genotypes 1 and 3 in dairy herds of Antioquia, Colombia. VirusDisease. 2023;34:483–97.

Martin D, et al. Comparative study of PCR as a direct assay and ELISA and AGID as indirect assays for the detection of bovine leukaemia virus. J Vet Med B Infect Dis Vet Public Health. 2001;48:97–106.

Cohen J. A Coefficient of Agreement for Nominal Scales. Educ Psychol Measur. 1960;20:37–46.

Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43:543–9.

Woolhouse MEJ, et al. Heterogeneities in the transmission of infectious agents: Implications for the design of control programs. Proc Natl Acad Sci. 1997;94:338–42.

Ohshima K, Okada K, Numakunai S, Kayano H, Goto T. An eradication program without economic loss in a herd infected with bovine leukemia virus (BLV). Nihon Juigaku Zasshi. 1988;50:1074–8.

Juliarena MA, Gutierrez SE, Ceriani C. Determination of proviral load in bovine leukemia virus-infected cattle with and without lymphocytosis. Am J Vet Res. 2007;68:1220–5.

Mirsky ML, Olmstead CA, Da Y, Lewin HA. The prevalence of proviral bovine leukemia virus in peripheral blood mononuclear cells at two subclinical stages of infection. J Virol. 1996;70:2178–83.

Eaves FW, Molloy JB, Dimmock CK, Eaves LE. A field evaluation of the polymerase chain reaction procedure for the detection of bovine leukaemia virus proviral DNA in cattle. Vet Microbiol. 1994;39:313–21.

Juliarena MA, Barrios CN, Ceriani MC, Esteban EN. Hot topic: Bovine leukemia virus (BLV)-infected cows with low proviral load are not a source of infection for BLV-free cattle. J Dairy Sci. 2016;99:4586–9.

Yuan Y, et al. Detection of the BLV provirus from nasal secretion and saliva samples using BLV-CoCoMo-qPCR-2: Comparison with blood samples from the same cattle. Virus Res. 2015;210:248–54.

Taxis TM, et al. Reducing bovine leukemia virus prevalence on a large midwestern dairy farm by using lymphocyte counts, ELISA antibody testing, and proviral load. The Bovine Practitioner. 2020;54:136–44.

McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22:276–82.

Cicchetti DV, Feinstein AR. High agreement but low kappa: II. Resolving the paradoxes. J Clin Epidemiol. 1990;43:551–8.

Heenemann K, et al. Development of a Bovine leukemia virus polymerase gene–based real-time polymerase chain reaction and comparison with an envelope gene–based assay. J Vet Diagn Invest. 2012;24:649–55.

Lew AE, et al. Sensitive and specific detection of proviral bovine leukemia virus by 5′ Taq nuclease PCR using a 3′ minor groove binder fluorogenic probe. J Virol Methods. 2004;115:167–75.

Takeshima SN, Sasaki S, Meripet P, Sugimoto Y, Aida Y. Single nucleotide polymorphisms in the bovine MHC region of Japanese Black cattle are associated with bovine leukemia virus proviral load. Retrovirology. 2017;14:24.

Debacq C, et al. Reduced proviral loads during primo-infection of sheep by Bovine Leukemia virus attenuated mutants. Retrovirology. 2004;1:31.

Kuckleburg CJ, et al. Detection of bovine leukemia virus in blood and milk by nested and real-time polymerase chain reactions. J Vet Diagn Invest. 2003;15:72–6.

Dube S, et al. Degenerate and specific PCR assays for the detection of bovine leukaemia virus and primate T cell leukaemia/lymphoma virus pol DNA and RNA: phylogenetic comparisons of amplified sequences from cattle and primates from around the world. J Gen Virol. 1997;78(Pt 6):1389–98.


Acknowledgements

The authors thank Luc Willems (University of Liège, Belgium) for plasmid DNA sample pBLV344; Marlena Smagacz and Eliza Czarnecka (National Veterinary Research Institute, Poland) for lyophilizing DNA samples and DNA analysis, respectively; Ali Sakhawat (Animal Quarantine Department, Pakistan), Vitaliy Bolotin (National Scientific Center IECVM, Ukraine), Frank van der Meer and Sulav Shrestha (University of Calgary, Canada) for sharing material.

The APC was funded by the National Veterinary Research Institute, Puławy, Poland.

Author information

Authors and affiliations

Department of Biochemistry, National Veterinary Research Institute, Puławy, 24-100, Poland

Aneta Pluta & Jacek Kuźmak

Instituto de Virología E Innovaciones Tecnológicas (IVIT), Centro de Investigaciones en Ciencias Veterinarias y Agronómicas (CICVyA), Instituto Nacional de Tecnología Agropecuaria (INTA) - CONICET, Buenos Aires, Argentina

Juan Pablo Jaworski & Vanesa Ruiz

CentralStar Cooperative, 4200 Forest Rd, Lansing, MI, 48910, USA

Casey Droscha & Sophie VanderWeele

Department of Animal Science, College of Agriculture and Natural Resources, Michigan State University, East Lansing, Michigan, 48824, USA

Tasia M. Taxis

Niort Laboratory, Unit Pathology and Welfare of Ruminants, French Agency for Food, Environmental and Occupational Health and Safety (Anses), Ploufragan-Plouzané, Niort, France

Stephen Valas

Croatian Veterinary Institute, Savska Cesta 143, Zagreb, 10000, Croatia

Dragan Brnić & Andreja Jungić

Laboratorio Central de Veterinaria (LCV), Ministry of Agriculture, Fisheries and Food, Carretera M-106 (Km 1,4), Madrid, Algete, 28110, Spain

María José Ruano & Azucena Sánchez

Department of Veterinary Sciences, Faculty of Agriculture, Iwate University, 3-18-8 Ueda, Morioka, 020-8550, Japan

Kenji Murakami & Kurumi Nakamura

Departamento de Patobiología, Facultad de Veterinaria, Unidad de Microbiología, Universidad de La República, Ruta 8, Km 18, Montevideo, 13000, Uruguay

Rodrigo Puentes & M. Laureana De Brun

Laboratorio de Virología, Departamento SAMP, Centro de Investigación Veterinaria de Tandil-CIVETAN (CONICET/UNCPBA/CICPBA), Buenos Aires, Argentina

Marla Eliana Ladera Gómez, Pamela Lendez & Guillermina Dolcini

Laboratório Federal de Defesa Agropecuária de Minas Gerais, Pedro Leopoldo, Brazil

Marcelo Fernandes Camargos & Antônio Fonseca

Department of Pathobiology, College of Veterinary Medicine, Auburn University, Auburn, AL, 36849-5519, USA

Subarna Barua & Chengming Wang

Department of Omics Analyses, National Veterinary Research Institute, 24-100, Puławy, Poland

Aneta Pluta & Aleksandra Giza


Contributions

Proposed the conception and design of the study, A.P.; data curation, A.P., J.P.J., C.D., S.V., D.B., A.S., K.M., R.P., G.D., M.F.C. and CH.W.; investigation, A.P., V.R., S.VW., S.V., A.J., M.J.R., K.N., M.L.B., M.L.G., P.L., A.F., A.G. and S.B.; formal analysis, A.P.; statistical analysis, A.P.; database analysis, A.P.; visualization of the results, A.P.; resources, A.P., T.M.T. and J.K.; writing—original draft preparation, A.P. and J.P.J.; writing—review and editing, A.P., J.P.J., C.D., S.VW., T.M.T. and J.K.; project administration, A.P. All authors read and approved the submitted version.

Corresponding author

Correspondence to Aneta Pluta .

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Veterinary Sciences Animal Care Committee (No. AC21-0210), Canada; the Institutional Animal Care and Use Committee (No. PROTO202000096, valid from 4/13/2020 to 4/14/2023), Michigan State University, United States; and the Ethics Review Board, COMSATS Institute of Information Technology, Islamabad, Pakistan (No. CIIT/Bio/ERB/17/26). Blood samples from Polish, Moldovan and Ukrainian cattle naturally infected with BLV were selected from collections at local diagnostic laboratories as part of the enzootic bovine leukosis (EBL) monitoring program between 2012 and 2018 and sent to the National Veterinary Research Institute (NVRI) in Pulawy for a confirmatory study. Ethics committee approval for the collection of these samples was not required under Polish regulations ("Act on the Protection of Animals Used for Scientific or Educational Purposes", Journal of Laws of 2015). All methods were carried out in accordance with relevant guidelines and regulations. The owners of the cattle herds from which the DNA samples originated, the district veterinarians caring for these farms and the ministries of agriculture were informed of, and consented to, the collection of blood from the animals for scientific purposes and the sending of samples to NVRI.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

12917_2024_4228_MOESM1_ESM.pdf

Additional file 1. Copy of the instructions included with the panel of 44 DNA samples sent to participating laboratories for dilution of the lyophilisates

12917_2024_4228_MOESM2_ESM.png

Additional file 2. Detection of the H3F3A gene copy number in 43 DNA samples; no outlier was found for any sample (P < 0.05) (two-sided).

12917_2024_4228_MOESM3_ESM.docx

Additional file 3. Concentration values of 44 DNA samples measured by the 11 participating laboratories (given in ng per µl)

12917_2024_4228_MOESM4_ESM.pdf

Additional file 4. Post hoc Dunn-Bonferroni tests. The Dunn-Bonferroni test revealed that the pairwise group comparisons qPCR2 - qPCR4, qPCR3 - ddPCR6, qPCR4 - qPCR5, qPCR4 - ddPCR6, qPCR4 - qPCR9, qPCR4 - qPCR10, qPCR5 - qPCR11, ddPCR6 - qPCR11 and qPCR9 - qPCR11 had an adjusted p-value of less than 0.05

12917_2024_4228_MOESM5_ESM.docx

Additional file 5. Kendall's Tau correlation coefficient values measured between each pair of assays. The numbers 1 to 11 in the first column and last row of the table indicate the names of the assays qPCR1-qPCR5, ddPCR6, qPCR7-qPCR11 respectively

12917_2024_4228_MOESM6_ESM.png

Additional file 6. Maximum-likelihood phylogenetic analysis of full-length BLV-pol gene sequences representing 7 BLV genotypes (G1, G2, G3, G4, G6, G9, and G10) (A); (B) env-based sequences assigned to 10 BLV genotypes (G1, G2, G3, G4, G5, G6, G7, G8, G9, and G10); (C) LTR-based sequences representing 10 BLV genotypes (G1-G10). For all genes and LTR region the Tamura-Nei model and Bootstrap replications (1,000) were applied in MEGA X

12917_2024_4228_MOESM7_ESM.pdf

Additional file 7. Multiple sequence alignment of reverse transcriptase, integrase, envelope and LTR sequences in the context of the specific primers used by different qPCR assays. (A) Multiple sequence alignment of reverse transcriptase (pol gene) sequences in the context of qPCR7, qPCR8, qPCR9, qPCR10 and qPCR11 assay primers. (B) Multiple sequence alignment of integrase (pol gene) sequences in the context of qPCR4 assay primers. (C) Multiple sequence alignment of env gene sequences in the context of ddPCR6. (D) Sequence alignment of LTR region sequences in the context of qPCR3 method primers

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Pluta, A., Jaworski, J.P., Droscha, C. et al. Inter-laboratory comparison of eleven quantitative or digital PCR assays for detection of proviral bovine leukemia virus in blood samples. BMC Vet Res 20, 381 (2024). https://doi.org/10.1186/s12917-024-04228-z

Download citation

Received: 24 November 2023

Accepted: 09 August 2024

Published: 26 August 2024

DOI: https://doi.org/10.1186/s12917-024-04228-z


Keywords

  • Bovine leukemia virus (BLV)
  • Quantitative real-time PCR (qPCR)
  • Proviral DNA
  • BLV international network
  • Update on the efforts in qPCR harmonization

BMC Veterinary Research

ISSN: 1746-6148


  27. SAMPLE-CHAPTER-3Grades-ang-DV research

    SAMPLE-CHAPTER-3Grades-ang-DV research - Free download as Word Doc (.doc / .docx), PDF File (.pdf), Text File (.txt) or read online for free.