Implementing drug therapy in a safe and rational manner requires an understanding of the factors that cause this variability. One of the most important factors is the concentration of drug achieved at the site of action.
Any discussion of pharmacokinetics presumes that drug concentrations can be determined with a high degree of accuracy and precision. One of the most frequent causes of high variability in pharmacokinetic parameters is poor data resulting from imprecise analytical procedures. Evaluation of pharmacokinetic data in the literature must therefore begin with an assessment of the validity of the assay under the conditions in which the study was conducted. An assay must be tested for specificity, sensitivity, reproducibility, stability, and accuracy. Because drug metabolites are frequently present in the fluid being measured and are similar in structure to the parent compound, the assay must be shown to differentiate the parent drug from any putative metabolites.
The quantitative response to a drug depends strongly on the concentration of drug at the site of action. In most situations, drug concentration cannot be quantified at the actual site of action; rather, it is measured in an easily accessible site believed to be in equilibrium with the site of action (e.g., blood or one of its components).
Blood (or its components, plasma or serum) is the most frequently sampled fluid used to characterize the pharmacokinetics of drugs. Drug concentration in the blood reflects the net result of several simultaneous processes: absorption, distribution, metabolism, and excretion.
An initial, visual characterization of the processes controlling the concentration of drug in the blood can be made by constructing a drug concentration versus time profile (i.e., a plot of drug concentration in the blood against time).
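Such a profile is built from serial blood samples. The following is a minimal sketch in Python (using matplotlib, with entirely hypothetical data); concentrations are conventionally plotted on a semilogarithmic axis so that a first-order decline appears as a straight line.

    import matplotlib.pyplot as plt

    # Hypothetical concentration-time data following an IV bolus dose
    times_h = [0.5, 1, 2, 4, 6, 8, 12]
    conc_mg_per_l = [11.4, 10.9, 9.8, 8.0, 6.6, 5.4, 3.6]

    # Semilog plot: first-order elimination appears linear on a log axis
    plt.semilogy(times_h, conc_mg_per_l, "o-")
    plt.xlabel("Time (h)")
    plt.ylabel("Plasma drug concentration (mg/L)")
    plt.title("Drug concentration versus time profile")
    plt.show()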
One of the primary objectives of pharmacokinetic modeling is to provide a quantitative description of drug concentration, or amount of drug in the body, as a function of time. The complexity of the pharmacokinetic model will vary with the route of administration, the extent and duration of distribution into various body fluids and tissues, the processes of elimination, and the intended application of the model. Pharmacokinetic models have a wide variety of potential uses, including:
1. Prediction of drug concentration in blood/plasma or tissue.
2. Calculation of a dosage regimen (a minimal sketch of this calculation follows the list).
3. Quantitative assessment of the effect of disease on drug disposition.
4. Elucidation of the mechanism of disease-induced alterations in drug disposition.
5. Determination of the mechanism for drug-drug interactions.
6. Prediction of drug concentration versus effect relationships.
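As an illustration of use 2, the sketch below shows a steady-state maintenance-dose calculation in Python. It assumes the standard average steady-state relationship Css,avg = F x Dose / (CL x tau); the function name and all parameter values are hypothetical.

    def maintenance_dose(cl_l_per_h, css_avg_mg_per_l, tau_h, f_bioavail):
        """Dose per interval needed to sustain a target average
        steady-state concentration: Dose = CL * Css,avg * tau / F."""
        return cl_l_per_h * css_avg_mg_per_l * tau_h / f_bioavail

    # Hypothetical values: CL = 5 L/h, target Css,avg = 10 mg/L,
    # dosing interval tau = 12 h, bioavailability F = 0.8
    dose_mg = maintenance_dose(5.0, 10.0, 12.0, 0.8)
    print(f"Maintenance dose: {dose_mg:.0f} mg every 12 h")  # 750 mg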
If a drug is administered intravenously as a single, rapid (bolus) injection, the process of absorption is bypassed. The duration of the injection is typically so short relative to the other pharmacokinetic processes that it can be ignored. As previously described for a one-compartment model, peak plasma concentration and distribution equilibrium are achieved essentially instantaneously.
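Under these assumptions, the one-compartment bolus model reduces to C(t) = (Dose/V)e^(-kt), where V is the volume of distribution and k the first-order elimination rate constant. A minimal sketch in Python, with hypothetical parameter values:

    import math

    def conc_iv_bolus(dose_mg, v_l, k_per_h, t_h):
        """One-compartment IV bolus: C(t) = (Dose/V) * exp(-k*t).
        Peak concentration C0 = Dose/V occurs at t = 0."""
        return (dose_mg / v_l) * math.exp(-k_per_h * t_h)

    # Hypothetical values: 500 mg bolus, V = 40 L, k = 0.1 h^-1
    for t in (0, 2, 4, 8):
        print(f"t = {t} h: C = {conc_iv_bolus(500, 40, 0.1, t):.2f} mg/L")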
Drug absorption depends upon both the physicochemical properties of the drug (pKa, dosage form, partition coefficient) and the physiology of the site of absorption (surface area, blood flow). Most drugs are absorbed by simple diffusion, and the kinetics are first-order. Zero-order absorption occurs for some processes that are saturable and for sustained-release dosage forms. Absorption and elimination of a drug occur in sequence, and the rate of change of the amount of drug in the body is the difference between the rate of uptake (absorption) and the rate of efflux (elimination).
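For first-order absorption followed by first-order elimination, this difference integrates to the familiar Bateman equation, C(t) = (F x Dose x ka) / [V x (ka - k)] x (e^(-kt) - e^(-ka*t)), valid when ka differs from k. A minimal sketch in Python, with hypothetical parameter values:

    import math

    def conc_oral_first_order(f, dose_mg, v_l, ka, k, t_h):
        """First-order absorption and elimination in series (Bateman equation):
        C(t) = (F*Dose*ka) / (V*(ka - k)) * (exp(-k*t) - exp(-ka*t)).
        Assumes ka != k."""
        coeff = (f * dose_mg * ka) / (v_l * (ka - k))
        return coeff * (math.exp(-k * t_h) - math.exp(-ka * t_h))

    # Hypothetical values: F = 0.9, 500 mg oral dose, V = 40 L,
    # ka = 1.0 h^-1 (absorption), k = 0.1 h^-1 (elimination)
    for t in (1, 2, 4, 8, 12):
        c = conc_oral_first_order(0.9, 500, 40, 1.0, 0.1, t)
        print(f"t = {t:2d} h: C = {c:.2f} mg/L")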
The total clearance of a drug from the body almost always involves more than one organ of elimination. The anatomy of the human body is such that clearance by the individual clearing organs occurs in parallel and is, therefore, additive.
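A minimal sketch of this additivity in Python (the organ clearance values are hypothetical):

    def total_clearance(*organ_clearances_l_per_h):
        """Organ clearances operating in parallel are additive:
        CL_total = CL_renal + CL_hepatic + CL_other + ..."""
        return sum(organ_clearances_l_per_h)

    # Hypothetical values: renal 3 L/h, hepatic 9 L/h
    print(total_clearance(3.0, 9.0))  # 12.0 L/h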
Physiologists studied the renal clearance of endogenous and exogenous substances long before the use of clearance concepts became popular in pharmacokinetics. Indeed, the basis for the understanding of drug clearance in pharmacokinetics has its roots in the decades of work by renal physiologists.
Moreover, there are significant differences in the complexity of the processes involved in the hepatic and renal handling of drugs. In the kidney there are three primary processes (and one minor process, metabolism) responsible for the renal elimination of drugs: filtration, secretion, and reabsorption. Each of the major processes is governed by determinants that are partly shared and partly unique to that process.
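One common way to combine the three major processes, shown below as a Python sketch with hypothetical values, is CLR = (fu x GFR + CLsecretion) x (1 - FR), where fu is the unbound fraction of drug in plasma and FR is the fraction reabsorbed.

    def renal_clearance(fu, gfr, cl_secretion, fraction_reabsorbed):
        """Net renal clearance from the three primary processes:
        CL_R = (fu * GFR + CL_secretion) * (1 - FR),
        where FR is the fraction of excreted drug reabsorbed."""
        return (fu * gfr + cl_secretion) * (1 - fraction_reabsorbed)

    # Hypothetical values: fu = 0.5, GFR = 120 mL/min,
    # secretion clearance = 60 mL/min, 25% reabsorbed
    print(renal_clearance(0.5, 120, 60, 0.25))  # 90.0 mL/min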
The renal clearance of a drug that is eliminated only by filtration can be estimated if the glomerular filtration rate (GFR) and the free (unbound) fraction of the drug are known. Two substances are commonly used to estimate the GFR: creatinine, an endogenous by-product of muscle metabolism, and inulin, a polysaccharide. Both are essentially 100% eliminated in the urine by filtration, and thus the total clearance (CLT) of these two substances can be used as a reasonable estimate of the GFR.
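For such a drug the estimate is simply CLR = fu x GFR, since only unbound drug is filtered at the glomerulus. A minimal Python sketch with hypothetical values:

    def filtration_clearance(fu, gfr_ml_per_min):
        """Renal clearance of a drug eliminated only by filtration:
        CL_R = fu * GFR (only unbound drug is filtered)."""
        return fu * gfr_ml_per_min

    # Hypothetical values: 30% unbound, GFR = 120 mL/min
    print(filtration_clearance(0.3, 120))  # 36.0 mL/min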
Another primary process involved in the renal elimination of drugs is active tubular secretion (ATS). There are several active transport systems in the proximal renal tubule that are capable of excreting drugs from the blood into the urine.
Renal clearance is the volume of plasma from which drug is completely removed by the kidney per unit time. Units for renal clearance are volume per time (e.g., milliliters per minute or liters per hour). Renal clearance may be measured by dividing the urinary excretion rate of the drug by the plasma drug concentration.
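A minimal Python sketch of this measurement (the values are hypothetical):

    def renal_clearance_measured(excretion_rate_mg_per_min, plasma_conc_mg_per_ml):
        """CL_R = urinary excretion rate / plasma concentration.
        Units: (mg/min) / (mg/mL) = mL/min."""
        return excretion_rate_mg_per_min / plasma_conc_mg_per_ml

    # Hypothetical values: 0.12 mg/min excreted at a plasma
    # concentration of 0.001 mg/mL
    print(renal_clearance_measured(0.12, 0.001))  # 120.0 mL/min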