Assessing GPT and DeepL for terminology translation in the medical domain: A comparative study on the human phenotype ontology

BMC Med Inform Decis Mak. 2025 Jul 1;25(1):237. doi: 10.1186/s12911-025-03075-8.

Abstract

Background: This paper presents a comparative study of two state-of-the-art machine translation systems, OpenAI's GPT and DeepL, in the context of terminology translation within the medical domain.

Methods: This study was conducted on the human phenotype ontology (HPO), which is used in medical research and diagnosis. Medical experts assessed the performance of both models on a set of 120 translated HPO terms and their 180 synonyms, using a 4-point Likert scale (strongly agree = 1, agree = 2, disagree = 3, strongly disagree = 4). An independent reference translation from the HeTOP database was used to validate translation quality.
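
The abstract does not include the authors' analysis code; the following is a minimal sketch of how per-model mean Likert ratings could be aggregated, assuming expert scores (1-4, lower is better) are stored as plain lists keyed by model. All values shown are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): aggregating expert Likert ratings per model.
from statistics import mean

# Hypothetical example data: one rating per translated term/synonym and expert.
ratings = {
    "GPT-3.5": [1, 1, 2, 1, 2, 1],
    "DeepL":   [1, 2, 2, 1, 1, 2],
}

for model, scores in ratings.items():
    # Lower is better on the 4-point scale (1 = strongly agree with the translation).
    print(f"{model}: mean Likert rating = {mean(scores):.2f} (n = {len(scores)})")
```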

Results: The average Likert rating for the selected HPO terms was 1.29 for GPT-3.5 and 1.37 for DeepL. The quality of the translations was also found to be satisfactory for multi-word terms with greater ontological depth. The comparison with HeTOP revealed a high degree of similarity between the models' translations and the reference translations.

Conclusions: Statistical analysis revealed no significant difference in mean ratings between the two models, indicating comparable translation quality. The study not only illustrates the potential of machine translation but also highlights the incomplete coverage of existing translated medical terminology, underscoring its relevance for cross-lingual medical research. However, the evaluation methods need further refinement, specific translation issues must be addressed, and the sample size should be increased to allow more generalizable conclusions.
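
The abstract does not state which statistical test was applied, so the sketch below assumes a Mann-Whitney U test, a common choice for comparing two samples of ordinal Likert ratings; the rating lists are hypothetical.

```python
# Illustrative sketch only: the test choice (Mann-Whitney U) and data are assumptions,
# not taken from the paper.
from scipy.stats import mannwhitneyu

gpt_ratings   = [1, 1, 2, 1, 2, 1, 1, 2]  # hypothetical expert ratings for GPT-3.5
deepl_ratings = [1, 2, 2, 1, 1, 2, 1, 1]  # hypothetical expert ratings for DeepL

stat, p_value = mannwhitneyu(gpt_ratings, deepl_ratings, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g. 0.05) would indicate no significant
# difference between the two models' ratings, consistent with the conclusion above.
```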

Keywords: Artificial intelligence; Controlled vocabulary; GPT; Translations.

Publication types

  • Comparative Study

MeSH terms

  • Biological Ontologies*
  • Humans
  • Phenotype*
  • Terminology as Topic*
  • Translating*