CLOP-hERG: The Contrastive Learning Optimized Pre-Trained Model for Representation Learning in Predicting Drug-Induced hERG Channel Blockers
Keywords: hERG, contrastive learning, RoBERTa, representation learning
During drug development, ensuring that drug molecules do not block the hERG (human Ether-à-go-go-Related Gene) channel is critical: blockade of this channel can lead to serious cardiovascular disorders. However, traditional experimental detection methods are expensive and time-consuming. In this work, we propose CLOP-hERG, a novel deep learning framework that combines contrastive learning with the RoBERTa pre-trained model to predict whether drug molecules will block the hERG channel. We employed data augmentation techniques on molecular structures so that the model can capture multifaceted information about the molecules. In addition, we used a contrastive learning strategy to enable the model to learn meaningful molecular features from large unlabeled datasets. The RoBERTa pre-trained model played a pivotal role in this process, endowing the model with robust representation learning capability. The model obtained through contrastive learning was then fine-tuned to achieve high-precision prediction of hERG blockers. Through a series of experiments, we demonstrate the effectiveness of CLOP-hERG. This work provides a novel and effective strategy for predicting hERG blockers and offers insights for similar pharmaceutical tasks.
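The contrastive pre-training described above pulls embeddings of augmented views of the same molecule together while pushing apart embeddings of different molecules. The paper does not specify its exact loss in this abstract, so as an illustration only, the sketch below implements the widely used NT-Xent (normalized temperature-scaled cross-entropy) objective in NumPy; the function name, temperature value, and toy embeddings are hypothetical, not taken from CLOP-hERG.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss.

    z1, z2: (N, d) arrays of embeddings for two augmented views of the
    same N molecules; row i of z1 and row i of z2 form a positive pair,
    and all other rows in the batch serve as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity

    n = z1.shape[0]
    # Index of each row's positive partner: i <-> i + n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Numerically stable log-softmax over each row.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Average negative log-probability assigned to the positive pairs.
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views of each molecule embed close together, the loss is low; when positives are no more similar than random negatives, the loss is high, which is what drives the encoder toward augmentation-invariant molecular representations.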
Received: 13 November 2023 | Revised: 21 December 2023 | Accepted: 29 December 2023
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
Data Availability Statement
The data that support the findings of this study are publicly available: Therapeutics Data Commons: https://tdcommons.ai/; GitHub: https://github.com/GIST-CSBL/BayeshERG; PubChem: https://pubchem.ncbi.nlm.nih.gov/.
Copyright (c) 2024 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.