Public understanding of climate change today is shaped less by direct observation and more by the outputs of computer simulation. In the age of simulation, modern climate models confer legitimacy upon a widely held belief that humanity is ravaging the planet’s climate. These models are often presented as a triumph of collective intelligence—a global effort to quantify planetary dynamics through data, simulation, and consensus. Together with their interpretive software, they form the machinery of a modern zeitgeist—an emergent cultural mind that uses a system of flawed reasoning to scatter falsehoods across dashboards, headlines, and classrooms in the service of sustaining this belief.
As the above figure shows, to simulate Earth’s climate, modern computer models divide the planet into a three-dimensional grid—horizontal slices across latitude and longitude, and vertical columns that extend through the atmosphere and into the ocean. Each grid cell represents a localized volume of space, within which physical equations govern the exchange of energy, momentum, and mass. These equations simulate processes such as radiation, convection, evaporation, and cloud formation, producing synthetic outputs like temperature, pressure, and humidity at discrete points in space and time. The resolution of these grids—the level of detail at which the model simulates physical processes—varies, but even the most advanced models rely on approximations and parameterizations to represent phenomena occurring at scales too fine for individual grid cells to resolve. What emerges, therefore, is not a direct measurement of the climate system, but a computational synthesis—an internally consistent simulation shaped by embedded assumptions and constrained by resolution, theory, and available data.
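For readers who want a concrete picture of this gridded structure, the sketch below builds a toy three-dimensional array of cells and updates each cell from its neighbors once per time step. Everything here is a simplified stand-in: the grid size, the starting values, and the single diffusion-like rule are illustrative assumptions, not the coupled equations any real model solves.

```python
# A minimal sketch (not any production model) of the gridded structure described
# above: a 3D array of cells indexed by latitude, longitude, and vertical level,
# with one illustrative update rule per time step. All names and constants are
# hypothetical placeholders.
import numpy as np

N_LAT, N_LON, N_LEV = 36, 72, 10                      # 5-degree cells, 10 levels
rng = np.random.default_rng(0)
temperature = np.full((N_LAT, N_LON, N_LEV), 288.0)   # kelvin, uniform start
temperature += rng.normal(0.0, 1.0, temperature.shape)  # toy perturbation

def step(T, dt=1.0, kappa=0.1):
    """One toy time step: nudge each cell toward the mean of its neighbors.

    Real models solve coupled equations for radiation, convection, moisture,
    and momentum in each cell; this stand-in only illustrates that every cell
    is updated from its neighbors according to a local rule.
    """
    neighbors = (
        np.roll(T, 1, axis=1) + np.roll(T, -1, axis=1)    # east/west (periodic)
        + np.roll(T, 1, axis=0) + np.roll(T, -1, axis=0)  # north/south (crude)
        + np.roll(T, 1, axis=2) + np.roll(T, -1, axis=2)  # up/down (crude)
    ) / 6.0
    return T + kappa * dt * (neighbors - T)

for _ in range(100):
    temperature = step(temperature)
```

A real model applies dozens of such update rules per cell per step; the point here is only the shape of the computation: a lattice of numbers advanced by local rules.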
While grounded in physical laws, climate models also rely on these assumptions to determine how energy flows and how feedbacks unfold in each grid cell. The embedded assumptions echo the flawed reasoning of greenhouse theory, and they are accepted as necessary concessions to the actual complexity of Earth's climate. Even though greenhouse theory is not written explicitly into the model code, the key assumptions arising from that theory are treated as settled truths rather than as provisional elements of the modeling process. From radiative transfer schemes to cloud parameterizations, the model's internal logic performs coherently, exactly as it is designed to do.
But logical coherence in a model’s operation does not guarantee that the elements being manipulated are themselves logically sound or physically real. Consider the sentence, "Frog ears glow convincingly when dragons eat unicorns." The sentence flows with perfect grammar and internal logic, yet nothing about its content is real—frog ears don't glow, “convincingly” doesn’t meaningfully modify “glow,” and dragons and unicorns don't exist. In a similar way, while not as fantastical, climate models synthesize data from which the illusion of predictability is conjured. Even though this data emerges from perfectly logical operations, how it evolves from input to output is insulated from public understanding by a set of technical terms that sound precise but often obscure the interpretive choices behind them.
Words like "parameterization", "normalizing", and "tuning" carry an aura of scientific authority, yet they frequently mask the underlying assumptions and interpretive choices that hold a climate simulation together. Let's take a closer look at what these words mean and how the processes that they describe shape the eventual synthetic data used to project future climate scenarios:
Parameterization
What it claims: A method for representing complex physical processes that can't be directly resolved by the model.
What it often hides:
• The process being “represented” may be poorly understood or highly variable in reality.
• The chosen parameters are often based on empirical fits, not first principles.
• Different models use different parameterizations for the same phenomenon, leading to divergent outputs.
• It’s a way of saying: "We can’t simulate this directly, so we’ll insert a rule that behaves plausibly."
Why it matters:
Parameterization is where the model stops being a simulation of nature and starts being a simulation of belief—about what matters, what can be ignored, and what can be approximated without consequence.
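A minimal sketch of what a parameterization looks like in practice may help. The scheme below is hypothetical: the functional form, the threshold rh_crit, and the exponent are invented for illustration, not drawn from any real model. It shows the pattern the term describes: a sub-grid process (here, cloud cover) replaced by an empirical rule with adjustable parameters rather than resolved physics.

```python
# A minimal sketch of a "parameterization": a sub-grid quantity (cloud fraction)
# computed from a grid-cell variable by an empirical rule with tunable knobs.
import numpy as np

def cloud_fraction(relative_humidity, rh_crit=0.8, exponent=2.0):
    """Toy scheme: cloud fraction rises from 0 at rh_crit to 1 at saturation.

    rh_crit and exponent are the "parameters": they are not derived from first
    principles here, and different choices produce different model behavior.
    """
    rh = np.clip(relative_humidity, 0.0, 1.0)
    excess = np.clip((rh - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)
    return excess ** exponent

print(cloud_fraction(np.array([0.5, 0.85, 0.95, 1.0])))  # -> [0. 0.0625 0.5625 1.]
```

Change rh_crit or the exponent and the simulated cloud field, and everything built on top of it, changes with it.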
Normalizing
What it claims: A statistical adjustment to make data comparable or interpretable.
What it often hides:
• The choice of baseline or reference period is arbitrary and shaped by cultural or scientific assumptions about what counts as “normal.”
• Normalization can erase meaningful variation or exaggerate trends depending on framing.
• It implies objectivity, but it’s often a rhetorical move: “Let’s make this look more dramatic—or more stable—depending on the story we want to tell.”
Why it matters:
Normalizing is a tool of narrative control. It’s not just about cleaning data—it’s about shaping perception.
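The sketch below makes the point concrete. The temperature series is synthetic and both reference periods are arbitrary choices made for illustration; the only thing being demonstrated is that the same numbers read differently once a particular window has been declared "normal".

```python
# A minimal sketch, using invented data, of how the choice of reference period
# shifts an entire "normalized" series up or down without changing its shape.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2021)
series = 14.0 + 0.005 * (years - 1880) + rng.normal(0.0, 0.1, years.size)

def anomalies(values, yrs, ref_start, ref_end):
    """Subtract the mean of the chosen reference period from every value."""
    baseline = values[(yrs >= ref_start) & (yrs <= ref_end)].mean()
    return values - baseline

early_base = anomalies(series, years, 1880, 1909)  # early window declared "normal"
late_base = anomalies(series, years, 1981, 2010)   # late window declared "normal"

# Identical curves, different zero lines: the final-year value looks far larger
# against the early baseline than against the late one.
print(round(early_base[-1], 2), round(late_base[-1], 2))
```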
Tuning
What it claims: Adjusting model parameters to improve performance or match observations.
What it often hides:
• Tuning is often done post hoc—after seeing what the model gets wrong.
• It can involve subjective decisions about which outputs matter most.
• It risks circularity: adjusting the model to fit the data it’s supposed to predict.
• It’s rarely disclosed in full detail and often treated as a minor technical step.
Why it matters:
Tuning is where the model becomes a mirror—not of nature, but of the expectations built into its architecture. It’s not just calibration—it’s curation.
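In code, tuning can be as simple as the loop sketched below. The toy model, the invented "observed" record, and the single sensitivity parameter are all hypothetical; the pattern being illustrated is the post hoc sweep over parameter values until the output lines up with the data it is later said to reproduce.

```python
# A minimal sketch of post hoc tuning: sweep a free parameter of a toy model and
# keep whichever value best matches a target record. All quantities are invented.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2021)
observed = 0.02 * (years - 2000) + rng.normal(0.0, 0.02, years.size)  # target record

def toy_model(sensitivity):
    """Warming trend produced by the toy model for a given sensitivity parameter."""
    return sensitivity * (years - 2000)

best_s, best_err = None, np.inf
for s in np.linspace(0.0, 0.05, 501):            # candidate parameter values
    err = np.mean((toy_model(s) - observed) ** 2)
    if err < best_err:
        best_s, best_err = s, err

print(f"tuned sensitivity = {best_s:.4f}, mean squared error = {best_err:.5f}")
```

Nothing in the loop distinguishes genuine physical insight from curve fitting; the result simply reflects the target it was adjusted against.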
These three terms discourage deeper scrutiny by dressing projections derived from computer outputs in the vocabulary of expertise. If people don't understand the language, then they cannot question the logic. Left unchallenged, confidence in human control thus remains intact.
Once the processes described by these terms have done their work, the raw outputs are ready to be stylized, dramatized, and moralized. To do this, scientists use a suite of specialized software tools to extract, process, and visualize climate model outputs, including surface air temperatures that are then assembled into global temperature trend graphs. These tools are distinct from the climate models themselves and are designed for post-processing and data analysis. The following table outlines some of the most widely used tools:
During the post-processing phase, localized outputs are aggregated, normalized, and visualized in ways that amplify symbolic meaning while obscuring physical nuance. Spatially averaged surface temperatures are treated as diagnostic truths, despite their intensive nature and lack of thermodynamic coherence. Smoothing algorithms, anomaly baselining, and selective framing dramatize change, converting statistical artifacts into moral narratives. Trend lines acquire emotional weight; color gradients imply urgency; and scenario ranges are reinterpreted as predictive certainties. In this phase, the illusion gains momentum through the choreography of interpretation. The data is no longer just simulated—it is ready to be presented as evidence of planetary distress.
To be perfectly clear, this evidence doesn't arise fully formed inside the model—it evolves outside, after the simulation is run. In other words, the evidence the public sees is the culmination of a multi-phase workflow that relies on not one but three kinds of tools:
Climate models generate raw data—gridded fields of temperature, pressure, radiation, etc.—typically in formats like NetCDF or GRIB.
Post-processing tools extract surface air temperature fields, compute spatial averages, and generate time series.
Visualization tools then plot these averages over time to produce the familiar global temperature trend graphs.
The workflow itself unfolds in four phases:
Simulation: Researchers run the model to generate synthetic data across space and time.
Extraction: Analysts extract surface air temperatures from each grid cell.
Aggregation: Modelers or data specialists average the extracted temperatures—typically using area-weighted methods—to generate a global mean.
Visualization: Technicians or communicators plot the global mean over time to illustrate warming trends.
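The sketch below walks through these four phases end to end using synthetic data in place of real model output. Reading NetCDF or GRIB files is omitted, and the grid, variable names, and imposed drift are illustrative assumptions rather than output from any particular model.

```python
# A minimal sketch of the simulation -> extraction -> aggregation -> visualization
# workflow, with synthetic data standing in for real model output.
import numpy as np
import matplotlib.pyplot as plt

# Phase 1 (simulation, faked here): monthly surface air temperature on a lat-lon grid.
rng = np.random.default_rng(2)
lats = np.linspace(-87.5, 87.5, 36)
lons = np.linspace(-177.5, 177.5, 72)
months = np.arange(12 * 50)                                    # 50 years of months
tas = 288.0 + rng.normal(0.0, 2.0, (months.size, lats.size, lons.size))
tas += 0.001 * months[:, None, None]                           # small imposed drift

# Phase 2 (extraction): here the surface field is already the array we need.

# Phase 3 (aggregation): area-weighted spatial mean, weights proportional to cos(lat).
weights = np.cos(np.deg2rad(lats))[None, :, None]
global_mean = (tas * weights).sum(axis=(1, 2)) / (weights.sum() * lons.size)

# Phase 4 (visualization): annual means plotted as the familiar trend line.
annual = global_mean.reshape(-1, 12).mean(axis=1)
plt.plot(1970 + np.arange(annual.size), annual)   # year labels are arbitrary here
plt.ylabel("global mean surface air temperature (K)")
plt.show()
```

Weighting by the cosine of latitude is a common way spatial averages of this kind are computed on regular grids; everything upstream and downstream of that single line is where the interpretive choices discussed above enter.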
Visualization is arguably the most powerful element of the workflow, because it is the stage where the following techniques come into play:
Smoothing algorithms
These are mathematical techniques used to reduce noise or short-term fluctuations in data. In climate graphs, they often involve moving averages or spline fits that make jagged lines appear more fluid.
Why it matters:
Smoothing can visually exaggerate long-term trends or suppress variability, subtly guiding emotional interpretation—e.g., turning a bumpy dataset into a clean upward slope that implies inevitable planetary overheating.
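A centered moving average, sketched below on an invented jagged series, is the simplest version of this. The window lengths are arbitrary choices; different windows yield visibly different "trends" from the same numbers.

```python
# A minimal sketch of smoothing: a centered moving average applied to a noisy
# synthetic series. The series and window lengths are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
noisy = np.cumsum(rng.normal(0.0, 0.1, 150))     # jagged random-walk series

def moving_average(x, window):
    """Centered moving average; output is shorter than input by window - 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

smooth_short = moving_average(noisy, 11)   # light smoothing, wiggles survive
smooth_long = moving_average(noisy, 31)    # heavy smoothing, a cleaner "trend" emerges
```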
Anomaly baselining
This refers to the practice of expressing temperature changes relative to a chosen reference period (e.g., 1951–1980). Instead of showing absolute temperatures, graphs display deviations from this “normal.”
Why it matters:
The choice of baseline is arbitrary—it frames what counts as “normal.” Depending on the reference period, it can make warming appear more or less dramatic. It’s a rhetorical device disguised as a statistical convention.
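In symbols (notation introduced here only for illustration), write the averaged series at time t as T̄(t) and the chosen reference period as R; an anomaly is then nothing more than a difference from the reference-period mean:

```latex
A(t) \;=\; \bar{T}(t) \;-\; \frac{1}{|R|} \sum_{t' \in R} \bar{T}(t')
```

Changing R shifts every plotted value by a constant, which is exactly how the same record can be made to look more or less dramatic.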
Selective framing
This involves choosing which data to include, exclude, or emphasize—such as highlighting certain regions, time spans, or scenarios. It also includes visual choices like color gradients, axis scaling, and annotation. With axis scaling, even though the numerical data is technically correct—in terms of smaller values preceding larger values—the visual spacing between those values is deliberately expanded. This is graphic-arts deception: a design maneuver that amplifies fractional changes, converting subtle shifts into steep slopes that imply urgency, certainty, and moral consequence. The following IPCC temperature graph illustrates the technique:
Why it matters:
Framing shapes perception. A graph that zooms in on recent decades, uses fiery reds, or omits uncertainty bands tells a very different story than one that includes full historical context and cooler tones. It’s data designed to sell an idea.
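The sketch below shows the effect with invented numbers: the same anomaly series is drawn twice, once over a recent window with axes left to hug the data, once over the full span on a wide vertical range. Only the axis and window choices differ.

```python
# A minimal sketch of framing by axis scaling: one invented anomaly series drawn
# twice with different windows and vertical ranges. The data never change.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
years = np.arange(1880, 2021)
anomaly = 0.007 * (years - 1880) + rng.normal(0.0, 0.08, years.size)

fig, (ax_dramatic, ax_flat) = plt.subplots(1, 2, figsize=(10, 3))

# Recent window, axes auto-scaled tightly around the data: a steep climb.
recent = years >= 1980
ax_dramatic.plot(years[recent], anomaly[recent], color="red")
ax_dramatic.set_title("recent window, tight axis")

# Full record on a wide vertical range: the same numbers, visually unremarkable.
ax_flat.plot(years, anomaly, color="gray")
ax_flat.set_ylim(-5, 5)
ax_flat.set_title("full record, wide axis")

plt.show()
```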
It’s critically important to understand that the quantity plotted on the y-axis of the above IPCC graph—Global Mean Surface Temperature (GMST)—is not a real temperature. It is a temperature-like abstraction, four layers removed from physical meaning.
First, GMST is derived by spatially averaging temperature readings across regions so unalike that there is no basis for comparison. This violates basic thermodynamic principles, because temperature is an intensive quantity—it cannot be meaningfully added, subtracted, or averaged across an area as vast and disconnected as the whole planet. Foundational thermodynamics textbooks clearly explain this:
Çengel, Y. A., & Boles, M. A. (2015). Thermodynamics: An Engineering Approach (8th ed.). McGraw-Hill.
Intensive properties are independent of the mass of a system... These properties cannot be added directly when combining systems.
Sonntag, R. E., Borgnakke, C., & Van Wylen, G. J. (2003). Fundamentals of Thermodynamics (6th ed.). Wiley.
Temperature is an intensive property. It does not depend on the amount of substance and cannot be added or averaged without reference to energy exchange or equilibrium.
Incropera, F. P., & DeWitt, D. P. (2002). Introduction to Heat Transfer (5th ed.). Wiley.
Adding temperatures from different systems is physically meaningless unless the systems are in thermal equilibrium and the operation reflects an energy balance.
Consequently, the GMST is not a temperature in any physical sense—it is a statistical artifact.
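A short worked example, under the idealized assumption of two bodies exchanging heat with no losses, makes the textbooks' point explicit: the temperature the pair actually reaches comes from an energy balance weighted by mass and specific heat, not from an arithmetic average of the two readings. Writing m for mass, c for specific heat, T1 and T2 for the initial temperatures, and Tf for the final equilibrium temperature:

```latex
m_1 c_1 (T_f - T_1) + m_2 c_2 (T_f - T_2) = 0
\quad\Longrightarrow\quad
T_f = \frac{m_1 c_1 T_1 + m_2 c_2 T_2}{m_1 c_1 + m_2 c_2}
```

The simple average (T1 + T2)/2 is recovered only in the special case where m1 c1 equals m2 c2; in general, averaging temperatures without the accompanying energy terms has no direct physical interpretation as an equilibrium temperature.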
Second, the graph does not even show pseudo-averages directly. It shows anomalies—deviations from a chosen reference period. This is a second-order abstraction: a difference from a constructed norm, not a direct measurement.
Third, the reference period itself (e.g., 1850–1900) is a third abstraction—constructed through a recursive averaging process that compounds the original thermodynamic error. To build this baseline, climate analysts first compute a spatial average of temperature readings across the globe for each year in the reference period, repeating the violation described above. Then they take those yearly spatial averages—each one already a statistical artifact—and average them again across time to produce a single reference value. This second averaging embeds the original error inside a new one: a temporal average of spatial averages of an intensive quantity. The result is a baseline that appears precise but is conceptually incoherent. It is not a physical norm—it is a meta-average of pseudo-temperatures, constructed through a recursive misuse of statistical operations that have no thermodynamic legitimacy.
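In compact notation (again introduced only for illustration), let T_i(y) be the grid-cell values for year y, w_i the area weights, and R the reference period. The construction just described is a temporal average of spatial averages:

```latex
\bar{T}(y) \;=\; \sum_{i} w_i \, T_i(y),
\qquad
B \;=\; \frac{1}{|R|} \sum_{y \in R} \bar{T}(y)
```

Every anomaly plotted downstream is then measured against this B.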
Fourth, the anomaly values are plotted on an axis labeled as a temperature, implying that they are real thermodynamic quantities. But they are not. They are differences from a statistical baseline, rendered as if they were absolute temperatures. This final abstraction assigns physical meaning to a quantity that has none—not in any thermodynamic sense, and certainly not in the way humans experience temperature. Yet it appeals to the human sensation of warmth and cold by using a familiar word—temperature—whose definition has been quietly transformed through the previous three abstractions.
In this way, the graph presents a “temperature” that the globe feels, but humans cannot. It is not sensed, not measured, not lived. It is performed—a rhetorical device masquerading as physical truth. What began as a local, physical property is no longer a measure of heat—it is a symptom of planetary distress.
In short, the GMST y-axis is a quadruple abstraction:
1. A spatial average of an intensive quantity.
2. Temporally smoothed and averaged.
3. Defined by a selectively chosen baseline.
4. Presented as an anomaly from that baseline as if it were a real temperature.
What the public sees as “global temperature” is not a measurement, but an artifact of a recursive erroneous process—a non-permissible average of a non-permissible average of a non-permissible average. First, local temperatures are extracted from climate model outputs and averaged across a grid of thermodynamically disconnected regions. Then those spatial averages are averaged again over time—monthly, yearly, multi-decade—each layer compounding the abstraction and erasing physical meaning. A baseline is constructed from these already-averaged quantities, embedding the original violation in a meta-average. Finally, deviations from this baseline are plotted as anomalies and rebranded as degrees Celsius. What emerges is not temperature, but a symbolic construct engineered to dramatize change—a statistical echo mistaken for thermodynamic truth.
This artifact of recursive error does not remain confined to technical reports—it is promoted, visualized, moralized, and monetized across institutional domains. Agencies publish it as climate truth, academia embeds it as a foundational metric, and media amplify it into planetary crisis. Each domain reinforces the illusion through a shared convention of treating symbolic temperature as if it were thermodynamic fact. The result is a self-reinforcing system in which layered, self-validating errors become the dominant standard for representing what humans claim to know about the world.
Governmental and intergovernmental agencies—NOAA, NASA, IPCC—play a central role in shaping public understanding of climate change. They publish symbolic temperature as climate fact, embedding it in executive summaries, policy briefs, and public-facing dashboards. The recursive averaging process behind the global mean is rarely disclosed, and the symbolic nature of the construct is obscured by authoritative presentation. Through repetition and institutional endorsement, symbolic temperature becomes a default truth—legitimized not by thermodynamic coherence, but by bureaucratic consensus.
Simply put, these agencies curate belief. By presenting synthetic averages as empirical reality, they convert simulation into an illusion of certainty. They sustain the illusion by convention—a grand deception rooted in inherited frameworks where symbolic temperature stands unquestioned as an indicator of planetary health, despite its lack of physical validity.
Once symbolic temperature is constructed, it is staged within visual artifacts designed to dramatize change. Graphs, heat maps, and animated timelines do not merely represent data—they perform it. Color gradients intensify emotional impact, truncated axes exaggerate trends, and baseline choices amplify deviation. These design decisions are intentional aesthetic interventions that shape perception.
The result is a spectacle parading as precision, where synthetic averages play the starring role of planetary truth, complete with decimal points and smooth curves. The illusion of continuity and coherence gains reinforcement through visual polish that masks the recursive error beneath. What the public sees is a curated drama—engineered to persuade. In this theater of alarm, symbolic temperature becomes more of a mood than a metric. Consider the four examples in the following illustration:
Top Left Panel — NASA Global Anomaly Map
A world map titled Annual 2024 L-OTI(°C) Anomaly vs 1951–1980 uses emotionally charged colors to evoke alarm over deviations from a statistical baseline. Though derived from temperature readings at specific sensor locations, the construct being displayed is neither temperature nor temperature anomaly in any physical sense. It is a processed abstraction—an averaged departure from a chosen reference period (1951–1980), which itself is arbitrary, yet treated as normal. It is interpolated across regions where no measurements exist. The result is a global field of symbolic temperature, visually implying thermodynamic coherence where none physically exists.
Top Right Panel — Modernized Hockey Stick Graph
This updated version of the iconic “hockey stick” graph plots the same pseudo-anomalies described earlier alongside rising CO₂ concentrations, spanning the years 1880 to 2000. A shaded uncertainty band and dual-axis design lend an aura of precision, while the graph’s upward sweep dramatizes correlation—without clarifying causation. Crucially, the graphic-space proportioning exaggerates the visual steepness of the curve: two tenths of a unit of the temperature-like abstraction occupies nearly half of the plot's vertical space, while two hundred years are compressed along the x-axis. This distortion gives the appearance of a sharp rise over time, despite the underlying change amounting to less than one degree of a construct that is not, in fact, a temperature. The visual grammar suggests the inevitability of planetary crisis, reinforcing urgency through the use of an aesthetically pleasing trend graph rather than through the coherent logic of proper thermodynamic reasoning.
Bottom Left Panel — Lipponen Country-Level Anomaly Chart
Adapted from Lipponen’s work, this panel uses circular icons to represent pseudo-temperature anomalies by country for year 2017. Despite its detailed appearance, the chart's uniform color gradient homogenizes vastly different regions of the world into a single false visual language, concealing the incompatible methods, measurement contexts, and data resolutions behind each national icon.
Bottom Right Panel — NOAA Bar Graph of Global Surface Temperature
This bar graph plots annual pseudo-temperature anomalies from 1880 to 2000, with red bars above and blue bars below a statistically constructed baseline, chosen arbitrarily. The visual rhythm implies a coherent global temperature signal, though the underlying data are stitched together from dissimilar sources—each with its own methods, contexts, and uncertainties. The baseline itself is a statistical artifact, yet it anchors the illusion of planetary fever.
Within academic institutions, symbolic temperature is universally accepted, but even more, it is moralized. It becomes a pennant of planetary harm, a metric of ethical urgency, and a justification for intervention. Researchers cite it as evidence of crisis, educators embed it in curricula, and grant proposals treat it as foundational. The recursive error behind the metric is rarely questioned; instead, it is shielded by moral framing. To question symbolic temperature is to risk being cast as indifferent to suffering or complicit in delaying progress.
This moralization emerges out of habit. The metric is passed down from generation to generation as a tradition governed by disciplinary norms, institutional incentives, and the pressure to act. Symbolic temperature becomes an ethical shorthand: a number that signals care, concern, and commitment. But in doing so, it displaces inquiry with allegiance. The illusion of global temperature and its steady rise is sustained by a moral imperative—one in which critique becomes taboo, tantamount to denial, and fidelity to the metric becomes a litmus test for virtue.
Once symbolic temperature is staged in official diagrams, it enters the domain of finance, where the climate-change crisis becomes currency. Here, climate models feed into risk assessments by way of temperature anomalies derived from their post-processed outputs, which become inputs for valuation models and investment screening. Diagrams designed to convey urgency also drive capital toward climate-linked financial instruments—these include green bonds, carbon offset portfolios, ESG-indexed funds, and catastrophe-linked derivatives. Thus, the diagrams that once showcased official belief now serve as data entries for pricing algorithms, emissions accounting, and portfolio construction.
The global temperature that official agencies visualize and dramatize drives a climate-change economics built on carbon fear. These agencies—along with academia and the media—use a minuscule change in a temperature-like abstraction, entirely removed from human sensation, to portray fossil fuel use as planetary harm. A culture of climate finance arises from this anthropogenic portrayal, determining who receives funding, who gains policy support, and which projects earn the designation of serving the public good. Financial instruments tied to this non-temperature “temperature” favor projects that treat climate change, framed as human fossil-fuel abuse, as a solvable technical problem—something to be quantified, modeled, and managed. Projects that align with modeled temperature reductions—renewable energy, carbon capture, reforestation—are funded not because they resolve or even engage with any real crisis, but because they conform to the metrics used in official climate models. In this way, climate finance rewards projects that offer solutions to fabricated crises, while deflecting attention from the actual conditions that degrade human life: overcrowded cities, fragmented food systems, and the infrastructural clutter that diminishes air, space, and life.
In an age when information technology has collapsed the space and time between communications, the sheer speed of idea propagation now exceeds that of critical evaluation. Foundational principles—once slow to form and slower to revise—are now bypassed by ideas that gain wide acceptance through repetition, not rigor. A favored point of view, once subject to detailed scrutiny, can now achieve dominance without ever passing through deep analysis. Such has been the case with the global mean surface temperature (GMST). The individuals responsible for conceiving this metric and endorsing it either knowingly avoided, or never fully understood, the thermodynamic grounding that invalidates its construction. As a result, their expert status has enabled it to be absorbed by institutions and subsequently extended into public knowledge, official policy, and hard infrastructure, resulting in a modern collective consciousness shaped—indeed distorted—by carbon fear.
When a flawed idea like this—built on erroneous premises and abstractions—gains institutional momentum, it can propagate with unprecedented force into the structure of everyday life. The consequences of such propagation, therefore, are not confined to theoretical discussions. Instead, policymakers, government officials, and academic leaders squander vast resources that could be directed toward real civilizational improvements. Cities grow denser, more cluttered, and more disconnected from their food sources, because symbolic metrics have displaced spatial reasoning. Air pollution—a real problem—becomes falsely conflated with CO₂ levels that supposedly drive the crisis effect symbolized by the global non-temperature “temperature.” Consequently, air quality is addressed through carbon-emissions accounting rather than urban redesign. Similarly, food systems are restructured to satisfy carbon targets rather than nutritional integrity or humane farming practices. So-called green-energy infrastructure expands to comply with modeled efficiency guidelines that bear no relation to lived experience, thus failing to serve actual human needs and, worse, creating more problems than intelligent solutions. The result is a world increasingly governed by abstractions, where numbers posing as measures of care drive pretend industries that replace the practical mechanics of true progress.
This is not the failure of science, but its misdirection. Minds trained in technical precision have fallen into the service of a narrative that mistakes simulation for insight and consensus for truth. The climate crisis, as popularly understood, is not a conscientious awakening—it is a fabrication. Its diagrams and its urgency are products of a modeling regime that has mistaken a non-temperature for a measure of planetary health. The damage already done, along with the harm still to come, is not merely environmental—it is intellectual, cultural, and ethical.
The danger of scientists disconnected from foundational axioms lies in the capacity of their meticulous minds to convincingly imitate true understanding. When their expertise in climate modeling diverges from their responsibility to work with real physical measurements, and when the political momentum of their collective fellowship overrides fidelity to scientific rigor, they become experts in building systems that distort reality. A civilization that rewards such precision without wisdom will not collapse from ignorance, but from the force of its own misguided brilliance.
# # #
[Written with assistance from Microsoft Copilot AI]