Domain-Specific Language Models in Healthcare: A Framework for Trust, Governance, and Safe Deployment

Authors

  • Preeti Tupsakhare, Engineer Lead - Medical Benefit Management Information Technology, Elevance Health, USA

Keywords:

Domain-specific language models, AI, artificial intelligence, customized healthcare LMs

Abstract

General-purpose large language models (LLMs) have demonstrated significant potential in healthcare applications but remain ill-suited for high-stakes clinical environments due to limited domain awareness, lack of explainability, and opaque reasoning. This white paper introduces Domain-Specific Language Models (DSLMs) as a purpose-built alternative designed to meet the trust, safety, and regulatory demands of healthcare systems. It presents a comprehensive framework for the development, validation, and deployment of DSLMs, emphasizing data governance, model transparency, bias mitigation, and regulatory alignment. The paper outlines practical strategies for integrating DSLMs across clinical and administrative workflows while maintaining human oversight and patient safety. Taken together, the framework provides healthcare organizations with a scalable and responsible pathway for adopting AI in mission-critical settings.
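
For illustration only, and not an implementation from the paper, the following minimal Python sketch shows the kind of in-domain continued pretraining that DSLMs build on, in the spirit of the domain-specific pretraining described by Gu et al. (2021). The model checkpoint, corpus file, and hyperparameters are assumptions chosen for the example; a real deployment would additionally require the data-governance, validation, and oversight controls discussed in the framework.

    # Minimal sketch: continued masked-language-model pretraining on
    # de-identified, in-domain clinical text. All names are illustrative.
    from datasets import load_dataset
    from transformers import (
        AutoModelForMaskedLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    # Assumed biomedical starting checkpoint (any suitable base model works).
    base = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract"
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForMaskedLM.from_pretrained(base)

    # Assumed local corpus: one de-identified clinical note per line.
    corpus = load_dataset("text", data_files={"train": "deidentified_notes.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="dslm-clinical",          # illustrative output path
            num_train_epochs=1,
            per_device_train_batch_size=8,
            logging_steps=100,
        ),
        train_dataset=tokenized["train"],
        data_collator=collator,
    )
    trainer.train()

Outputs of such a model would still pass through the human-oversight and validation gates described in the framework before any clinical or administrative use.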

References

Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing. ACM Trans. Comput. Healthcare 3, 1, Article 2 (January 2022), 23 pages. https://doi.org/10.1145/3458754

Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, and Jie Fu. 2023. Pre-trained Language Models in Biomedical Domain: A Systematic Survey. ACM Comput. Surv. 56, 3, Article 55 (March 2024), 52 pages. https://doi.org/10.1145/3611651

M. Firdaus and K.-H. Rhee, "Towards Trustworthy Collaborative Healthcare Data Sharing," 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Istanbul, Türkiye, 2023, pp. 4059-4064, doi: 10.1109/BIBM58861.2023.10385319.

Yacine Jernite, Huu Nguyen, Stella Biderman, Anna Rogers, Maraim Masoud, Valentin Danchev, Samson Tan, Alexandra Sasha Luccioni, Nishant Subramani, Isaac Johnson, Gerard Dupont, Jesse Dodge, Kyle Lo, Zeerak Talat, Dragomir Radev, Aaron Gokaslan, Somaieh Nikpoor, Peter Henderson, Rishi Bommasani, and Margaret Mitchell. 2022. Data Governance in the Age of Large-Scale Data-Driven Language Technology. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 2206–2222. https://doi.org/10.1145/3531146.3534637

Kosar, T., Martínez López, P.E., Barrientos, P.A. and Mernik, M., 2008. A preliminary study on various implementation approaches of domain-specific language. Information and Software Technology, 50(5), pp.390-405.

Nazi ZA, Peng W. Large Language Models in Healthcare and Medical Domain: A Review. Informatics. 2024; 11(3):57. https://doi.org/10.3390/informatics11030057

A. Wagh and A. Mavle, "Review of Language Models in the Financial Domain," 2024 Intelligent Systems and Machine Learning Conference (ISML), Hyderabad, India, 2024, pp. 140-145, doi: 10.1109/ISML60050.2024.11007354.

Bloom, B. (2023). How Technology Can Empower Diversity in Law Firms. Solic. J., 166, 62.

Li, Y., Gao, C., Song, X., Wang, X., Xu, Y., & Han, S. (2023). DrugGPT: A GPT-based strategy for designing potential ligands targeting specific proteins. bioRxiv, 2023-06.

Luukkonen, R., Komulainen, V., Luoma, J., Eskelinen, A., Kanerva, J., Kupari, H. M., ... & Pyysalo, S. (2023, December). FinGPT: Large generative models for a small language. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 2710-2726).

How to Cite

Preeti Tupsakhare. (2026). Domain-Specific Language Models in Healthcare: A Framework for Trust, Governance, and Safe Deployment. JOURNAL OF RECENT TRENDS IN COMPUTER SCIENCE AND ENGINEERING (JRTCSE), 14(1), 7-15. https://jrtcse.com/index.php/home/article/view/JRTCSE.2026.14.1.2