The Great Science Breakdown

Musings on the state of modern science

Early explorers believed the world had boundaries beyond which there were dragons. In some sense, this mythical warning served to moderate the expectations of those who ventured too far afield from the known world. Today we admire and praise the explorers who challenged this superstitious view of discovery. Modern science has taught us to believe in a practically boundless universe in which there are no limits to human knowledge. By dispelling myths of dragons and demons beyond the limits of what we know, the modern experience of science has been both liberating and exhilarating. But this sense of freedom and achievement is tempered by a growing discomfort about how we can continue to sustain and build on the past 400 years of scientific achievement and technological prowess without courting disaster. This unease is expressed in an emerging crisis of confidence in whether science can serve as a foundation for our future survival and success.

Thirty years ago last summer I accepted a job as a research scientist. This move was a turning point in my life. It was a huge risk for me because I would be an outsider of sorts given my background. I had originally studied architecture and co-founded a successful software start-up that was part of the larger architecture, engineering, and construction revolution in computer-aided design. But I was bored by the technology I was developing, disillusioned by the business world, and looking for something more meaningful to do.

My job was in a laboratory that is part of an important complex of scientific research institutions. It was in this rarefied atmosphere that I learned to apply my skill at translating abstract scientific ideas into practical engineering solutions. Although I was not originally trained in the sciences, I took to my new trade with the deep passion and naive enthusiasm that most young people have. I thought that if I applied myself, I could leave the world a better place than I had found it. I set about learning how to develop novel methods and techniques for evaluating whether a new technology really works as intended at large scales, and I learned how to market and sell science itself as a solution to the many emerging global problems.

For example, although it wasn’t widely accepted at the time, climate change would become an important problem for us to work on. In the 1990s there was very little discussion of this problem among my laboratory’s leadership. The lack of direct evidence of a problem led far too many scientists to be skeptical of climate science, or at the very least extremely hesitant to lend their support to immediate action on climate change. It was my first direct exposure to the often baffling but well-founded principles that underpin the scientific method. In this particular case I experienced how the absence of evidence is not evidence of absence, and how changing human behavior can require nearly twice as much positive input as negative input to achieve the same outcome.

As an engineer turning scientist with an interest in energy efficiency in buildings, I grew increasingly aware of the challenges of climate change. My research remained mostly focused on developing solutions, engineering tools, and building the toolkit that would help make buildings more energy efficient and responsive to occupants’ needs and comfort. While my work focused on using the latest computing technology to simulate very large, complex systems like power grids, my scientific re-education focused on discerning which problems really existed, and whether extant solutions really worked.

In many ways, I was following a path blazed by other non-engineering fields in the previous decades. Although we were working with systems at different scales than most other fields of research, we often dealt with similar challenges: very large, very complex, and very diverse interconnected and interdependent systems with inputs that defy prediction, behaviors that emerge from interactions, and outcomes that are path dependent. As I learned how other scientists dealt with such systems, I grew more comfortable and adept with the toolkit of the scientist. I became proficient enough in the scientific approach to problem solving to change my approach to engineering. Instead of the top-down design approach preferred in engineering, I learned how to design systems that work from the bottom up, using complex behavior to accomplish what is done in so many other fields.

The change from top-down to bottom-up thinking has important consequences for how one approaches problem solving. One goes about finding and using evidence differently to show that a solution really works in the real world. The scientist’s approach is data-driven, relying on hypothesis testing and proof by failure. The engineer relies more on requirements-driven design, modularity and component reuse, and proof by success. The most important lesson is that each approach has its place, and one should be very careful not to mistakenly apply the wrong one to a particular problem. The scientist may be unable to solve an engineering problem because they cannot use the scientific method to prove the solution works, and the engineer may erroneously claim a solution works because they start from a premise that cannot be subjected to rigorous falsifiability tests.

But after years of working with this new more integrated and flexible approach to scientific engineering, I realized that all was not well in the sciences. (That is not to say that all is well in engineering, but that is the subject of another diatribe.) Looking back over the past thirty years I now can see a number of significant and deeply disturbing trends in the sciences. These observations have culminated in a personal crisis of confidence and a deep sense of despair that we can no longer expect science to save us from ourselves unless things change dramatically and quickly.

Scientists are not as respected as I once thought they were. In recent decades the incidence of scientific error and fraud, as well as our ability to detect them and our willingness to call them out, have grown significantly. This has led to a significant loss of credibility and a real struggle by scientists to remain relevant. In addition, there has been an erosion of academic freedom in important scientific institutions. For example, unlike when I started out, most national laboratories now have clauses in their contracts that give the government the ability to block publication of results that conflict with the current administration’s policy objectives. Even when there are no external pressures to conform, most universities and corporations place their brand names above the need to ensure scientific freedom and integrity. Moreover, the scientific process has been contaminated by more popular and approachable pseudo-sciences, such as astrology and quick remedies for aging, that satisfy public and commercial demands for mass entertainment and for products appealing to more mundane needs and desires. Finally, science has a history of significant errors and missteps that have led to horrifying experiments and technology-enhanced atrocities, which scientists still struggle to live down and of which they are warned in their human-subjects training and oversight.

Many scientists also do not respond appropriately to the pressure to produce results. The principal investigator’s role in securing and administering funding for research is critical to how modern scientific results are achieved. They lead teams of graduate students, postdocs, and fellow scientists by setting expectations, generating new ideas, and guiding them around obstacles. But the drive to achieve positive results can give rise to tension and stress in teams when the data fails to support the need to sustain ongoing funding. When principal investigators push scientific teams too hard, the temptation to make the data support the goals can become too great for some researchers. Researchers can be driven to make mistakes, cut corners, and falsify results. Moreover, funding agencies and investors do not understand how to review proposals or how to evaluate risks. They often defer to expert insiders who are too conflicted, competing for the same pot of money, too busy, or simply unwilling to point out when a proposal is flawed. This creates the conditions for, and even incentivizes, laziness, error, and fraud.

In recent years, commercial interests in scientific publishing have strayed from the original purpose and methods of academic journals. Modern academic publishers have an overweening profit motive. Leading academic journals earn revenues generally far higher than those of other periodicals, and none of their profits are returned to those who produce the content. Moreover, by charging for limited access to papers, publishers drain resources from the academic community and restrict access to the most valuable commodity in science: knowledge. Too many important journals are motivated predominantly by metrics that measure raw citation counts rather than the fraction of papers that are highly cited. The leading journals choose the performance metrics we use so as to keep potential competitors from entering specific academic domains. Open-access models have been introduced to break the stranglehold of the leading publishers, but they have given rise to even more insidious pay-to-publish and archaic page-charge regimes. In both regimes, an implicit collusion is emerging between reviewers and authors, facilitated by subtle signals such as citations and jargon, as well as by technically prohibited back-channel discussions. Finally, there is a lack of humility among editors, who are often themselves rising academics trying to secure their standing within their home institutions and the wider field, and who do not want to offend or confront collaborators and reviewers by rejecting their papers or by accepting papers that challenge a potential future counterparty’s work.

The scientific method is no longer being applied as often, as strictly, or as diligently as it should be. It is not unusual to read academic papers that present hypotheses which are not falsifiable, or that fail to present evidence of attempted falsification. Some scientists conduct research in a biased way, looking for evidence to support preconceived beliefs, ignoring evidence that contradicts them, or failing to report contradictory evidence. Other scientists accept funding from sources that seek to selectively report their findings, retain results for strictly private commercial use, present only results that support their objectives, or suppress results that run counter to their agenda. In addition, far too many papers present results that cannot be replicated or are based on analysis methods that cannot support the claims. Some of these papers have given rise to myths that pervade popular culture despite being unproven or demonstrably false. These problems stem not only from flaws in research methods, but also from failures of the review process. Reviewers sometimes fail to identify critical errors and biases in the application of the scientific method, and when they do, editors sometimes do not follow the reviewers’ recommendations. Even when errors are detected after publication, journals are hesitant to seek or publish retractions, or even to acknowledge the role of the reviewers and editors in enabling erroneous or fraudulent results. Many journals protect themselves by favoring articles from major institutions, in spite of evidence that errors and fraud occur even in the most exalted halls of academia.

Science has also enjoyed the benefits of a cult of personality and seems to thrive on cultural icons. Do we not revel in a classic Einstein T-shirt, relish movies about famous scientists like Marie Curie, and recognize well-known depictions of fictional scientists such as Dr. Emmett Brown in Back to the Future, Dr. Ian Malcolm in Jurassic Park, or Prof. Henry Jones in Indiana Jones? The success of science during the industrial age gave rise to the scientist persona, which seems increasingly irrelevant and arguably counterproductive in the information age. No longer does a modern-day Einstein or Edison seem possible, or even desirable. Scientific individualism and exceptionalism are dishonest or unhelpful means of attracting young talent to the sciences when the rising generation of researchers seems more concerned with addressing major issues and challenges than with being famous and popular. Today’s icons of science and technology are more often very wealthy and inevitably flawed individuals whose personal predilections and media-centric behavior divert attention from the important scientific research and technology development with which they associate themselves. Rather than investigating the methods and results of the work they lead, we pay more attention to their social media coverage and political activities. When scientific leaders are not icons in full public view, they are often beneficiaries of privileged immunity from criticism because of the success they have enjoyed under what is arguably a flawed regime. The scientific personality as a cultural icon does a disservice to science. And the concept of scientific infallibility and its countervailing trope of scientific failure leading to disaster form a false dichotomy that undermines our ability to remedy the flaws that do exist in the sciences and prevents us from benefiting from the strengths that science has to offer.

Unfortunately, science can also be very profitable and has been placed in the service of institutions, businesses, and governments. Academics are afraid to criticize their own institutions from within when those institutions fail to meet standards of scientific conduct or suffer from maladministration. These researchers’ advancement is always contingent on being a well-liked teacher and team player in their department, rather than on being a good and honest scientist, even when that means challenging their students, confronting their colleagues, and most especially confronting the leadership of their own institutions. In industry, where science enables significant innovation, particularly in chemical, electrical, and aeronautical engineering, science is rightly viewed as a critical economic prime mover with tremendous impact on the wealth and welfare of nations. However, this places scientists in an unfamiliar and somewhat conflicted role as promoters of national prosperity instead of gatekeepers of the truth. Scientists are not trained for this dual role, nor are scientific leaders identified or chosen for their ability to perform the former effectively without forgoing fidelity to the latter. As various social groups realize the power of science either to achieve their aims or to confound those of their opponents, political parties seek either to bring scientists to heel or to discredit them, neither of which is conducive to good science.

As with any breakdown, a solution may exist, or in this case, solutions may exist. First, we must hold scientists accountable for violations of trust. At present, scientists can face legal and professional penalties for intentionally misrepresenting research findings and using dishonest research methods, including fabricating data, manipulating results, or plagiarizing others’ work. But instances of real accountability remain rare, and many cases of scientific fraud go unreported and unpunished. Not only do we need to hold scientists more swiftly and publicly accountable for violations of trust, we also need accountability from the organizations and institutions that promote a culture of unscientific behavior, ignore misconduct, and cover up incidents reported by their researchers.

Second, we must embed a robust ethical and rigorous moral foundation in scientific education and training. Including more lessons and exposure to the ethical and moral implications of scientific research throughout science curricula can help students understand the potential consequences of their work. Through more thoughtful and integrated training, young researchers can learn the importance of conducting research and applying methods in a responsible and ethical manner. We can encourage students to think critically and question their teachers and supervisors on the ethical and moral aspects of scientific research. This will develop the skills they need to make thoughtful and ethical decisions about their work, without falling into the overly simplistic and intellectually lazy philosophical traps of convenient and superficial thinking about science and its role in society.

Third, we must change the structure of academic publishing and the culture of paper reviews. None of the various review strategies in vogue have successfully protected us from fraudulent publications. At present, reviewers are not accountable for their reviews, whether negative or positive. Why are none of the reviewers who supported a fraudulent paper asked to account for the failure? We must remove the incentives for journals to protect their publication decisions by declining to publish contradictory papers and retractions in the face of overwhelming evidence. Without a metric for retracted and contradicted papers, journals will continue to prefer the more profitable publishing of quantity over the more valuable publishing of quality. One important review practice that served this purpose but was abandoned decades ago is the recommendation review, in which reviewers would publish, with attribution, their reviews in support of (or conceivably in opposition to) a paper as an addendum to it. This practice meant that reviewers were willing to be accountable for their role in assuring the quality and accuracy of the results presented in the paper.

Fourth, we must disentangle science funding from business and political interests. Corporate and policy-motivated science is arguably not science insofar as its motives and goals are predisposed to particular outcomes rather than the simple truth of a matter. Academic institutions must decline all research funding that comes with strings attached, is contingent on certain results, or is subject to pre-publication review or approval by the funding source.

Fifth, we must end the cult of personality as a tool for popularizing science, and promoting public interest in science. Early in my career I was told that successful scientists are so still when in the laboratory that they can hear the universe whispering its secrets in their ear. I now know that my mentor was not referring to what I say, but to my demeanor and humility when practicing science. Science is not about the people who conduct research, it’s about the process of discovery itself, the self-discipline and the hard work that it requires, and the self-effacing demands it places on the people who commit their lives to it. Science has no place for self-aggrandizement, egos, and popularity contests.

Sixth, we must end the system of prizes and awards for scientific achievements. For one thing, they place too high a value on things that are not important in science. Scientific progress is not measured in prizes, plaques, and titles, and the public should not be given the idea that it is. The current culture of rewards and recognition supports cults of personality, enables an insidious pursuit of recognition by elite institutions, and may run counter to the idea that science is something researchers should do for its own sake. Instead, we must promote a culture of personal humility, instill disdain for the limelight, and teach researchers to refuse to work for those who seek to use science for purposes that run afoul of basic moral and ethical safeguards. Finally, we must discard the notion that big science is the only way we can solve big problems. Some of the greatest achievements in science were born of serendipity, grown in anonymous laboratories, and not funded by anyone who asked for them in the first place.

Finally, we must commit to the history books the myth of scientific infallibility and the countervailing trope of scientific failure leading to global cataclysm. Both of these cultural touchstones have done tremendous harm to the cause of science and its ability to serve humanity. Science is no less fallible than the people who conduct it, nor more likely to cause disasters than the people who use its results. The scientific process is simply a tool for finding the truth. What truth one seeks and how one uses it is another matter, and not really within the purview of researchers or those who fund them. Society as a whole must change the culture of how science is perceived and understand that when something bad happens, it is not the scientific process but the people who use it who have failed and must be held accountable. That we learn we can do something does not mean we should do it, and science is not to blame for making it possible. Attacking or undermining science for our individual and societal shortcomings is lazy and simplistic, and will strip us of our ability to improve ourselves, solve the problems we face, and better conditions for all humanity. We must learn to support science unconditionally and, in the same breath, hold people accountable for how they use it.

My criticism of science in no way absolves other fields of responsibility for their challenges and accountability for their failings. In my own training, I found that the god complex in engineering not only leads engineers to solve problems preferentially from the top down rather than from the bottom up as nature does; it also leads them to an overblown sense of power over everything they do and touch. Clearly engineering hubris has consequences, as the many epic engineering failures of recent history demonstrate so well. But I think one can reasonably argue that engineering failures tend to be found and addressed quickly and directly, especially when their impacts are dramatic, costly, embarrassing, and frightening. The same can be said for almost every field that depends on science in some way. We can all learn from the mistakes of others, and hope that our careers will serve as more than just a cautionary tale. Science must be given the opportunity to redeem itself without the encumbrance of political, economic, and social pressures that have nothing to do with the root causes of its problems.

In contrast with other fields, scientific failures are often much more insidious, longer lasting, and more difficult to correct. It is for this reason that I think we are obligated to place a higher burden of integrity and trust on the scientific process than on any specific field of science. Science is the mother of nearly all other fields of importance in modern society (except perhaps mathematics, which seems to exist on an untouchable plane far above the sciences that depend on it). If scientists cannot be trusted, then science cannot be trusted, and we undermine the foundation of nearly every important field of study and its practice. Science must regain the public trust if we are to avoid calling all medicine, chemistry, biology, economics, physics, engineering, sociology, politics, and government into question.

Thar be dragons.

Written on December 11, 2022