The u:cris detail view:
Learning on compressed molecular representations
- Author(s)
- Jan Weinreich, Daniel Probst
- Abstract
Last year, a preprint gained notoriety, proposing that a k-nearest neighbour classifier is able to outperform large language models using compressed text as input and normalised compression distance (NCD) as a metric. In chemistry and biochemistry, molecules are often represented as strings, such as SMILES for small molecules or single-letter amino acid sequences for proteins. Here, we extend the previously introduced approach with support for regression and multitask classification and subsequently apply it to the prediction of molecular properties and protein-ligand binding affinities. We further propose converting numerical descriptors into string representations, enabling the integration of text input with domain-informed numerical descriptors. Finally, we show that the method can achieve performance competitive with chemical fingerprint- and GNN-based methodologies in general, and perform better than comparable methods on quantum chemistry and protein-ligand binding affinity prediction tasks. (See the NCD sketch after the record details below.)
- Organisation(s)
- Computergestützte Materialphysik
- External organisation(s)
- École polytechnique fédérale de Lausanne
- Journal
- Digital Discovery
- Volume
- 4
- Pages
- 84-92
- Number of pages
- 9
- ISSN
- 2635-098X
- DOI
- https://doi.org/10.1039/d4dd00162a
- Publication date
- 11-2024
- Peer-reviewed
- Yes
- ÖFOS 2012
- 102019 Machine Learning, 104027 Computational Chemistry
- ASJC Scopus subject areas
- Chemistry (miscellaneous)
- Link to the portal
- https://ucrisportal.univie.ac.at/de/publications/a22f6fbe-e3f3-4365-8384-c21453621846
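The abstract describes a k-nearest-neighbour approach that measures similarity between string representations (e.g. SMILES) with the normalised compression distance, NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length. The snippet below is a minimal sketch of that idea, not the authors' implementation: the choice of zlib as the compressor, the helper names (`ncd`, `knn_regress`), the value of k, and the toy SMILES/property pairs are all illustrative assumptions.

```python
import zlib


def compressed_len(s: str) -> int:
    """Length in bytes of the zlib-compressed string (zlib chosen for illustration)."""
    return len(zlib.compress(s.encode("utf-8")))


def ncd(x: str, y: str) -> float:
    """Normalised compression distance between two strings."""
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


def knn_regress(query: str, train: list[tuple[str, float]], k: int = 3) -> float:
    """k-nearest-neighbour regression: average the labels of the k training
    strings closest to the query under NCD."""
    neighbours = sorted(train, key=lambda item: ncd(query, item[0]))[:k]
    return sum(label for _, label in neighbours) / len(neighbours)


if __name__ == "__main__":
    # Toy training set: SMILES strings paired with made-up property values.
    train = [
        ("CCO", 0.5),        # ethanol
        ("CCCO", 0.6),       # 1-propanol
        ("c1ccccc1", 1.2),   # benzene
        ("c1ccccc1O", 1.1),  # phenol
    ]
    print(knn_regress("CCCCO", train, k=2))
```

Averaging neighbour labels is the simplest way to turn the original compression-based classification scheme into a regressor; the paper's extension to regression and multitask classification, and its string encoding of numerical descriptors, are not reproduced here.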