Academic Thesis Integrity in the Age of Artificial Intelligence: A Practical Threshold Model for Quality Review
- OUS Academy in Switzerland

- Apr 18
This week’s discussions in higher education show a clear and positive direction: universities and quality-focused institutions are improving how they review academic theses in the age of artificial intelligence. The main goal is not to punish technology, but to protect academic honesty, originality, and student learning. From an inspection and certification perspective, this development is important because strong standards help institutions build trust, improve research quality, and support fair evaluation.
This article presents a practical threshold model for thesis review: less than 10% similarity is acceptable, 10–15% needs evaluation, and above 15% is a fail level that requires formal action. The article also explains why plagiarism percentage alone is never enough. A strong review process must combine similarity reports, AI-use disclosure, supervisor judgment, oral defense, and evidence of real student understanding. The paper is written in simple academic English and offers a positive framework suitable for institutions seeking clear, fair, and modern quality assurance practices.
Introduction
Academic integrity has always been a central part of higher education. A thesis is not only a final academic document. It is proof that the student can think, research, analyze, and present knowledge in a responsible way. Today, this responsibility has become more complex because students have easy access to generative AI tools that can support writing, summarizing, translating, and organizing ideas.
This change does not have to be negative. In fact, it gives institutions an opportunity to create better and clearer quality systems. A modern inspection body should not only ask whether a text contains copied content. It should also ask whether the work reflects authentic learning, proper citation, transparent use of tools, and real academic ownership by the student.
For this reason, a simple and practical threshold system is useful. The model presented here is as follows: less than 10% similarity is acceptable, 10–15% needs evaluation, and above 15% is fail. This approach supports consistency while still allowing professional judgment. It is especially suitable for thesis quality review because theses naturally contain references, standard terminology, and technical phrases that may raise similarity scores without indicating misconduct.
Literature Review
Research on plagiarism has shown for many years that academic dishonesty is not only a technical issue but also an educational one. Students may plagiarize because of weak writing skills, poor citation habits, time pressure, or misunderstanding of academic conventions. With AI tools, a new challenge has appeared: text may look original in language form but still fail to reflect the student’s own thinking and authorship.
Recent scholarship on academic integrity increasingly supports a balanced model. This means institutions should not depend only on software. Similarity tools can identify overlap, but they cannot fully explain context. AI-detection tools can flag suspect text, but their results are not fully reliable and can produce false positives. Therefore, the strongest systems combine digital screening with human academic review.
International practice also shows that many universities are moving toward transparent AI policies. In general, they allow limited and declared support from AI tools, but they do not allow hidden use that replaces the student’s own work. This is an important distinction. Ethical use of AI can support learning. Undeclared or excessive use can weaken academic standards.
From a quality assurance viewpoint, this is a healthy development. It means institutions are not closing the door to innovation. Instead, they are building structured controls that protect trust in academic awards and research outputs.
Methodology
This article uses a policy-based analytical method. It examines academic integrity through the lens of inspection, review, and quality control. The proposed threshold model is designed as a practical framework for thesis evaluation in international higher education settings.
The model includes four review layers:
Similarity screening using accepted plagiarism detection tools
AI-use declaration by the student at the time of submission
Human evaluation by academic staff or trained reviewers
Oral defense or verification interview to confirm authorship and understanding
Using these four layers together creates a stronger and fairer process than relying on one number alone. The similarity percentage becomes an entry point for review, not the only decision rule.
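The way the four layers combine can be sketched in code. This is an illustrative sketch only, not institutional software: the field names, labels, and decision order are our own assumptions, chosen to show that the similarity score opens the review while the human layers decide the outcome.

```python
from dataclasses import dataclass

@dataclass
class ThesisReview:
    """One thesis submission passing through the four review layers.
    Field names are illustrative, not a standard schema."""
    similarity_pct: float   # layer 1: similarity screening result
    ai_use_declared: bool   # layer 2: AI-use declaration at submission
    reviewer_approved: bool # layer 3: human evaluation by academic staff
    defense_passed: bool    # layer 4: oral defense / authorship check

def review_outcome(r: ThesisReview) -> str:
    """Combine the layers: the percentage is an entry point for review,
    never the only decision rule."""
    if r.similarity_pct > 15:
        return "formal action"          # fail level under the threshold model
    if not (r.ai_use_declared and r.reviewer_approved and r.defense_passed):
        return "further review"         # any human layer still unresolved
    if r.similarity_pct >= 10:
        return "evaluated and cleared"  # caution zone, cleared by reviewers
    return "accepted"
```

Note that even a thesis below 10% similarity is not accepted in this sketch until the declaration, human review, and defense layers are resolved, which mirrors the article's point that no single number decides the outcome.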
Analysis
The threshold model divides similarity scores into three bands, each with a distinct review response.
Less than 10% = Acceptable
A thesis below 10% similarity usually shows a healthy level of originality. This range may include normal overlap from title pages, references, methodological wording, common academic phrases, or properly quoted material. In this range, the thesis can move forward under normal academic review. From an inspection perspective, this level suggests low risk, though routine checks should still continue.
10–15% = Needs Evaluation
This is the caution zone. A thesis in this range should not automatically fail, because similarity may still come from legitimate sources such as repeated technical language, legal definitions, or correctly cited material. However, it deserves closer examination. Reviewers should look at where the overlap appears, how much is properly referenced, and whether the student can explain the work confidently. This category supports fairness and prevents overreaction.
Above 15% = Fail
When similarity exceeds 15%, the risk becomes too high for automatic acceptance. At this stage, formal investigation is justified. The thesis may contain excessive copied text, poor citation practice, patchwriting, or misuse of AI-generated content presented as original work. A fail decision at this level protects the integrity of the institution and sends a clear message that academic standards matter. Resubmission may be possible under institutional rules, but not silent acceptance.
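The three bands above reduce to a simple classification rule. The sketch below is illustrative; the function name and return labels are our own, and in practice the result would trigger the human review steps described above rather than act as a final verdict.

```python
def classify_similarity(similarity_pct: float) -> str:
    """Map a similarity percentage to a review band under the threshold
    model: <10% acceptable, 10-15% needs evaluation, >15% fail level."""
    if not 0 <= similarity_pct <= 100:
        raise ValueError("similarity must be between 0 and 100")
    if similarity_pct < 10:
        return "acceptable"         # low risk; routine checks continue
    if similarity_pct <= 15:
        return "needs evaluation"   # caution zone; closer human review
    return "fail"                   # formal investigation justified
```

For example, a report of 12% falls into the caution zone and goes to reviewers, while 15% sits at the upper edge of that zone and anything above it triggers formal action.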
This model also works well for international universities because it is simple, transparent, and easy to communicate. In practice, examples from universities across Europe, Asia, the Middle East, and Australia show a common direction: software tools are useful, but final judgment must remain in human hands. This is especially important in thesis examination, where originality is linked not only to words on a page but also to reasoning, design, interpretation, and defense.
Findings
Several important findings emerge from this review.
First, a fixed threshold system helps institutions act consistently. Students, supervisors, and reviewers all understand the standard before submission.
Second, the middle category of 10–15% is essential. It protects honest students from unfair penalty while still allowing serious review.
Third, AI changes the meaning of originality review. A text may pass a simple similarity check and still raise concerns if the student cannot explain key arguments, methods, or conclusions.
Fourth, the best quality systems are preventive, not only disciplinary. Institutions should train students in citation, authorship, note-taking, AI disclosure, and ethical academic writing from the beginning of the program.
Finally, a positive quality culture is possible. Clear standards do not block innovation. They encourage responsible use of technology and stronger student learning.
Conclusion
This week’s academic integrity discussions highlight an encouraging reality: higher education is adapting. Institutions are refining thesis review systems to meet the challenges of plagiarism and AI-assisted writing in a balanced and constructive way. For an inspection body, this is a strong sign of quality development.
The threshold model proposed in this article is practical and effective: less than 10% acceptable, 10–15% needs evaluation, and above 15% fail. However, the most important principle is that percentages must always be supported by human judgment. True academic integrity is not measured by software alone. It is measured by transparency, authorship, understanding, and ethical research behavior.
A modern institution should therefore aim for more than technical compliance. It should build a culture in which originality is respected, AI is used responsibly, and thesis evaluation reflects both fairness and excellence. This is the path toward trust, credibility, and sustainable academic quality.
