https://www.science.org/content/blog-post/thinking-about-covalent-drug-reactivity
Covalent drug discovery has made a comeback so many times over the years that the comebacks are all starting to blur together. But by now I think we can all agree that it’s yet another perfectly acceptable tool in the kit, given the number of approved drugs that have been specifically designed with such properties (and given the number of legacy drugs that also exhibit covalent behavior!)
For those outside the business, what we’re talking about here contrasts with the “traditional” sort of mechanism where your drug binds to its protein target by reversible means. These include hydrogen bonding, pi-electron interactions, dispersion/hydrophobic interactions, what have you. But no permanent bond is formed, and there is an equilibrium between the bound and unbound state. A covalent drug, on the other hand, makes a real chemical bond to the protein it’s targeting, generally by having some fairly reactive chemical group in its structure. And while there are some reversible-covalent mechanisms, in many cases the reaction is effectively irreversible: you have modified your protein target in vivo with a chemical reagent. If this happens at the active site of an enzyme, the near-invariable result is the inactivation of that particular enzyme molecule (the so-called “suicide inhibitor” technique). At other sites on a protein, you can modify its activity, properties, or interactions with other proteins in all sorts of ways.
But one thing you’ll want to watch out for is the possibility that the new modified protein you’re creating turns out to be immunogenic. Keep in mind that the active compound in poison ivy (to pick one example) causes its trouble in humans by covalently modifying proteins into species that then set off an immune response (redness, swelling, itching). And in general you also don’t want a covalent “warhead” that is so reactive that it hits a lot of proteins other than your target - that increases the possibility of that immune system trouble, and it certainly increases the chances of unwanted toxicity and side effects. But done properly, covalent drugs can be very effective indeed.
The thing about covalent drug discovery is that it’s been pretty empirical, and that’s even as contrasted to traditional drug discovery, which is not exactly a domain ruled by cool rational calculations all the time itself. “Try it and see” is almost always sound advice in the business, and thus the adage to never talk yourself out of an easy experiment (or an easy analog compound). There are some recent efforts to prepare libraries of covalent drug-like molecules de novo and screen these across a variety of targets, but the most common way that covalent drug candidates have been developed is the other way around: you find (by conventional means) a small molecule that binds into a protein site that has a nearby residue that might be a partner for covalent modification. Then you add a covalently reactive warhead to your scaffold, using whatever structural information you can get to try to point it in the right direction to pick up your desired residue on the protein itself. Repeat as necessary!
One of the impressions many people have is that the molecules in these situations need to be optimized for strong binding before making that covalent jump - that’s supposed to give you better selectivity, and also allow for using a more weakly reactive covalent group in general. That also is supposed to cut down on unwanted side reactions, and you can get away with the less reactive group because ideally it’s going to be stuck in such close proximity to your desired residue and will have time to do its thing. But this paper is a useful call to rethink some of these assumptions.
The author, Bharath Srinivasan, is also reminding everyone of some of the fundamental facts about enzymes. First off, a lot of interactions between an enzyme and its natural substrate are simply not productive and don’t lead to the catalytic step that the enzyme performs - one estimate is that perhaps only one out of every ten thousand such interactions leads to a reaction (!) This means that enzymes that have higher affinity for their substrate are almost surely going to show higher rates of catalysis: the substrate is spending more time “in the zone”, and it needs all the time it can get. This takes us back to Michaelis-Menten enzyme kinetics - recall that Km is the substrate concentration at which an enzyme runs at half of its maximum rate as you increase substrate concentration. But keep in mind that it doesn’t work for the substrate affinity to get too high! Enzymes work by lowering the energy of the transition state and speeding up the reaction, which means that what really counts is their affinity for the transition state (and that’s much higher than their affinity for the substrate - or certainly for the product, which has to get the heck out of the catalytic site for the next reaction, anyway). All this means that the best inhibitor for an enzyme is a molecule that most closely mimics the structure of the transition state, and that’s a time-honored principle of drug design.
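That Km relationship is easy to see in a few lines of code. Here’s a minimal sketch of the Michaelis-Menten rate equation, v = Vmax·[S]/(Km + [S]), with made-up numbers (none of these values come from the paper) - note that plugging in [S] = Km gives exactly half of Vmax:

```python
# Michaelis-Menten sketch with illustrative, made-up parameters.
# v = Vmax * [S] / (Km + [S]); at [S] = Km the rate is exactly Vmax / 2.

def mm_rate(s, vmax, km):
    """Reaction velocity at substrate concentration s (same units as km)."""
    return vmax * s / (km + s)

vmax = 100.0   # arbitrary rate units, hypothetical
km = 50.0      # micromolar, hypothetical

print(mm_rate(50.0, vmax, km))    # [S] = Km → 50.0, half of Vmax
print(mm_rate(5000.0, vmax, km))  # [S] >> Km → approaches Vmax
```

The hyperbolic shape is the whole story: below Km the rate climbs nearly linearly with substrate, and far above it the enzyme saturates.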
Second, there’s an upper bound to just how efficient that enzyme catalysis can get, and that’s when it gets up near the “diffusion limit”, which is the rate at which molecules can physically move into and out of position. Determining that rate is not trivial, because it can (and does) vary according to the molecule and the medium. The standard number is 10^9 per molar per second, but protons in water can move a hundred times faster than that, while other larger molecules in more viscous conditions can easily be much slower. As the paper notes, a value of about 10^6 or 10^7 M^-1 s^-1 is probably realistic under cellular conditions - i.e., well below the ideal values. Now there are certainly enzymes that have rates faster than that, but on closer inspection these seem to be either extracellular (like acetylcholinesterase) or part of multiprotein complexes where things are handed around outside of the bulk solvent world. The paper notes, though, that you should never compare the catalysis rate of an enzyme (kcat) directly with the diffusion limit (the units are different, for one thing). But a large comparison of enzymes plotted as kcat/Km shows a Gaussian distribution with a median around 10^5 or so, which probably does really reflect the limits of real-world diffusion.
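The units point is worth making concrete: kcat is in s^-1 while the diffusion limit is in M^-1 s^-1, so the apples-to-apples comparison is kcat/Km. Here’s a quick sketch with rough textbook-style numbers (illustrative only, not taken from the paper):

```python
# Catalytic efficiency kcat/Km (M^-1 s^-1) is what you compare with the
# diffusion limit; kcat alone (s^-1) has the wrong units for that.
# All values below are rough, illustrative textbook-style numbers.

DIFFUSION_LIMIT = 1e9   # M^-1 s^-1, idealized aqueous value
CELLULAR_LIMIT = 1e7    # M^-1 s^-1, more realistic in the crowded cell

enzymes = {
    # name: (kcat in s^-1, Km in M) - hypothetical/approximate values
    "acetylcholinesterase": (1.4e4, 9e-5),  # famously near-diffusion-limited
    "typical enzyme":       (1.0e1, 1e-4),
}

for name, (kcat, km) in enzymes.items():
    eff = kcat / km
    print(f"{name}: kcat/Km = {eff:.1e} M^-1 s^-1 "
          f"({eff / CELLULAR_LIMIT:.2f}x the cellular ceiling)")
```

With these numbers acetylcholinesterase comes out above 10^8 M^-1 s^-1 - consistent with it operating outside the crowded intracellular environment, as the paper points out - while the “typical” enzyme sits right around that 10^5 median.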
The paper makes an explicit analogy between the relationship of kcat and Km and the relationship between kinact and KI for a covalent inhibitor. They’re quite similar indeed, with the biggest difference being that the covalent situation gradually decreases the concentration of active enzyme as things proceed! And in the same way that kcat and Km have a reciprocal relationship in classic enzyme kinetics, when you get up to the diffusion limit in a covalent setup, any attempts to increase inhibition by optimizing kinact are going to end up decreasing the noncovalent affinity as a consequence. They have to - mathematically there’s nowhere else to turn. So if you’re concentrating on increasing affinity (for example), you can probably get that into the hundreds-of-nanomolar range (more or less) without messing with the rate of inactivation. But the limits of the rate of diffusion won’t let you push it much more. The reactivity of your covalent compound is going to have to decrease as the affinity gets higher: you can’t have it all.
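The analogy can be sketched numerically. The observed inactivation rate for a covalent inhibitor follows the same hyperbolic form as Michaelis-Menten, kobs = kinact·[I]/(KI + [I]), and active enzyme then decays exponentially. The two hypothetical inhibitors below (all parameters invented for illustration) have the same overall efficiency kinact/KI but trade affinity against warhead reactivity - and at inhibitor concentrations well below both KI values, they inactivate the enzyme at essentially the same rate:

```python
import math

# Covalent inhibition sketch: kobs = kinact * [I] / (KI + [I]), and the
# fraction of active enzyme decays as exp(-kobs * t).
# All parameter values are hypothetical, chosen for illustration.

def k_obs(i, kinact, ki):
    """Pseudo-first-order inactivation rate (s^-1) at inhibitor conc i (M)."""
    return kinact * i / (ki + i)

def fraction_active(t, i, kinact, ki):
    """Fraction of enzyme still active after t seconds of exposure."""
    return math.exp(-k_obs(i, kinact, ki) * t)

# Same efficiency kinact/KI = 1e4 M^-1 s^-1, opposite trade-offs:
tight_slow = dict(kinact=1e-3, ki=1e-7)   # 100 nM binder, sluggish warhead
loose_fast = dict(kinact=1e-1, ki=1e-5)   # 10 uM binder, hot warhead

# At [I] = 10 nM (below both KI values), only kinact/KI matters:
for label, p in [("tight/slow", tight_slow), ("loose/fast", loose_fast)]:
    print(label, fraction_active(600, 1e-8, **p))  # 10 min of exposure
```

That’s the trade-off in miniature: once kinact/KI is pinned near the diffusion ceiling, cranking up one parameter necessarily comes at the expense of the other.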
So optimizing a covalent inhibitor needs to be done by paying attention to both the binding affinity and the rate of inactivation at the same time - in fact, the paper recommends that for cases with rather flat, featureless binding sites (as with many “hard-to-drug” targets where people turn to covalent ideas in general!), you may well end up driving selectivity mostly by kinact, because you’re going to be hard-pressed to get the intrinsic affinity numbers up high due to the suboptimal binding sites.
Put another way: a good enzyme substrate has been evolutionarily optimized to strike a balance between binding and turnover. If the binding is too low, there won’t be enough enzyme/substrate complexes formed, and if the binding is too high, they’ll form readily but they’ll be too stable to go further! Evolution will have selected for an optimal kcat/Km. So when we’re stepping in to engineer covalent inhibitors, we should likewise never optimize just for binding or just for reactivity, because we too are looking for the optimum balance. And focusing on just one of those parameters with a promise that you’ll go back later and fix the other is a real mistake that the mathematics of enzyme kinetics will not take kindly to!
The paper goes into several real-world examples of these effects, and is highly recommended reading (and not just for covalent drug discovery folks, although they’ll definitely want to make sure that they’re thinking the right way about their work). There’s a lot more about benchmarking with so-called “standard” nucleophiles like glutathione that’s worth a post of its own, too (see here), but in general you shouldn’t be making too many assumptions about the reactivity of your warheads. Try them and see! It all comes down to that, once again. . .