Alternatives assessment frameworks
A big part of implementing green chemistry in industry is the task of identifying and selecting product or process chemistries that are safer, less resource-intensive, and also functionally better than those we currently use. That involves complex, multidimensional judgments and comparisons. Figuring out how to structure such comparisons to support scientifically informed judgments is the domain of alternatives assessment (AA). Anyone involved in green chemistry should be familiar with this idea.
Now is a good time to delve into this topic, because there’s a new review by Molly Jacobs and colleagues in the current issue of EHP, which takes a deep and thoughtful look at the current state of AA methodology.
Alternatives assessment—also known as substitution analysis, technology options analysis, and a number of similar terms—is a process of evaluating and comparing technologies with the goal of identifying and implementing the preferable ones. In other words, finding safer alternatives to chemicals of concern. But AA is not yet a singular, uniform, or fully established practice: there are many ways to do it. It’s usually done within a framework that serves to structure decision-making and to enable systematic consideration of the key factors.
Analyzing alternatives assessment frameworks
The authors of the review set out to analyze twenty AA frameworks (pretty much everything there is) by breaking them down into what they see as the six major components of an AA: hazard, exposure, life cycle impacts, technical feasibility, economic feasibility, and overall decision-making strategy. The authors then looked closely at all twenty frameworks, using those six components as the dimensions of their comparison.
Some of the findings are surprising. For example, “no single hazard endpoint is consistently addressed across all frameworks reviewed.” The way that, say, terrestrial ecotoxicity is evaluated in one framework might be very different from how it’s done in another framework, or the endpoint might be completely left out. Differences in how frameworks deal with hazard include the kinds of endpoints addressed, the data sources referenced, and the metrics, criteria, and scales used for characterizing levels of hazard.
Even so, hazard assessment is generally the most well-developed component in AA frameworks, and there is broad conceptual agreement on hazard and its significance. Consideration of exposure, on the other hand, is less consistent. Frameworks differ in the role that exposure plays (when and where should it be considered, if at all?) and in how exposure is evaluated (what measures can be used to estimate, quantify, or characterize exposure?). There are conceptual challenges with other components too—like technical and economic feasibility—which relate to the contextual and situated nature of these factors (e.g., feasible for whom? under what market conditions?) and make it quite difficult for AA frameworks to provide generic methodologies.
All of this being said, AA is a rapidly developing field, likely to attract increasing attention, expertise, and resources focused on solving these methodological challenges.
Perhaps the most bedeviling component in AA is the final, integrative step where practitioners need to compare multiple technological options across multiple complex criteria and come up with some concrete insight to inform decisions (if not a winning ‘safer’ substitute). Jacobs et al. helpfully and subtly break down what exactly AA frameworks can provide here. One element, for instance, is the decision approach—a prescribed method for differentiating options by filtering, ranking, etc. Some frameworks provide no decision approach at all, others offer formal logics or informal approaches, and some even provide a suite of decision approaches to choose from.
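To make the filtering-and-ranking idea concrete, here is a toy sketch of what a prescribed decision approach might look like in code. This is not any particular framework’s method; the chemicals, scores (1 = best, 5 = worst), weights, and hazard cutoff are all hypothetical illustrations.

```python
# Toy decision approach: sequential filtering (drop options failing a
# hazard cutoff), then weighted ranking of the survivors.
# All names, scores, weights, and the cutoff below are invented.

alternatives = {
    "Solvent A": {"hazard": 4, "exposure": 2, "cost": 1},
    "Solvent B": {"hazard": 2, "exposure": 3, "cost": 2},
    "Solvent C": {"hazard": 1, "exposure": 2, "cost": 4},
}

HAZARD_CUTOFF = 3  # filter step: exclude high-hazard options outright
WEIGHTS = {"hazard": 0.5, "exposure": 0.3, "cost": 0.2}  # ranking step

def assess(options, cutoff, weights):
    # Step 1: filter out any option whose hazard score exceeds the cutoff.
    survivors = {name: scores for name, scores in options.items()
                 if scores["hazard"] <= cutoff}
    # Step 2: rank survivors by weighted sum (lower total = preferable).
    return sorted(survivors,
                  key=lambda name: sum(weights[c] * survivors[name][c]
                                       for c in weights))

print(assess(alternatives, HAZARD_CUTOFF, WEIGHTS))
```

Even this toy version shows where judgment enters: the cutoff and the weights are choices, not measurements, and different choices yield different “winners.”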
Ultimately, though, the expertise and judgment of a practitioner is indispensable for resolving the ambiguities and trade-offs that will arise when implementing any AA framework.
A science policy field
One point made in the review stands out to me as very significant and interesting. Alternatives assessment frameworks have a range of different goals and intended contexts of application. For example, worker protection, state policy, consumer product safety, and so on. These goals shape the technical features of the frameworks, for example, the status of exposure, the boundaries of life cycle assessment, or the relative weighting of endpoints in the hazard assessment.
This in turn raises the question: how do AA frameworks themselves influence the outcomes of alternatives assessments? Jacobs and colleagues pose this question, and note that there are not yet any studies or data available to explore it.
It’s clear that AA frameworks are not crystal balls into which one can gaze to receive predictions of safer technological options. But they’re also not routine scientific resources—not recipes or necessarily reproducible procedures. They incorporate and depend on a variety of elements from the spheres of professional practice and know-how, business, and policy. Alternatives assessment may be (and hopefully will be) a good example of the productive and positive interaction of these spheres.
Jacobs et al. join a number of other commentators in describing AA as a ‘science policy field,’ similar to the way risk assessment is characterized. Just as risk assessment became the dominant field of expertise in the science and politics of chemical regulation in the late 20th century, it appears that alternatives assessment is becoming a practice of major importance in 21st-century efforts to shift to green chemistry in industry, research, and policy.
And that’s why I say green chemists should know about it.
Jacobs, M. M., Malloy, T. F., Tickner, J. A., & Edwards, S. (2015). Alternatives assessment frameworks: Research needs for the informed substitution of hazardous chemicals. Environmental Health Perspectives, 124(3). doi:10.1289/ehp.1409581