Negotiate with journalists all you like; the editor has the last word
On January 29 I received the following invitation from Quanta journalist Jordana Cepelewicz:
Hope all is well! I'm a journalist at Quanta Magazine, and I'm writing an article about whether (/ how) AI might change the aesthetic nature of the mathematical enterprise -- about how creative and abstract aspects of math could be fair game to automate, and how that might affect what mathematicians consider "beautiful" or "elegant" or "natural." (The historical example that comes to mind is that at one point, doing something like approximating pi was considered beautiful math; now most people would just consider it computation.)
Mindful of my most recent experience with Quanta — which led directly to the creation of this newsletter — I replied that I would be interested, but only under certain conditions:
I'm happy to talk with you in two weeks or so, but I want to make sure I'm not cast as the token skeptic, as has happened more than once in the past. If you need suggestions for other colleagues, philosophers as well as mathematicians, who have thought carefully about just such questions, I'll be happy to provide some names.
The journalist readily agreed:
Thanks for your quick reply, and for alerting me to your concern -- I'll be careful not to frame things that way.
And yes, thank you, I'd love to hear your suggestions for other mathematicians, philosophers, and historians who have thought about this.
And thus it came to pass that, on February 24, 2025, I shared my thoughts with the journalist on a Zoom call lasting well over an hour, starting at 2:00 PM Eastern Time. To help Quanta avoid confining me to the role of token skeptic, I offered to find a few mathematicians who would confirm my contention that indifference is the attitude of the vast majority of my colleagues toward the drumbeat of media predictions that AI will transform our practice. This contention had been confirmed by my experience at the Seattle Joint Mathematics Meetings in early January, where several mathematicians, most of whom I did not know, came up to thank me for continuing to write this newsletter — for my willingness to express how they felt about all the AI propaganda. I was grateful for their reaction, but I told them that, rather than thanking me, they should make their opinions known. And now Quanta had accepted my offer to give some of them this opportunity.
It seemed to me that the conversation went well, that the journalist took my concerns seriously and respected my request not to serve as balance in yet another piece promoting the inevitability of AI transforming mathematics. And I have to admit that Quanta kept their side of the bargain, but not at all in the way I had anticipated.
About two months after the interview a fact-checker contacted me: Jordana Cepelewicz’s article was about to appear, and he wanted to confirm a quotation that was going to run in the article. The quotation went something like this:
Michael Harris, of Columbia University, thinks that AI might transform combinatorics, the mathematical study of counting.
Do you think I actually said that? It was certainly not the point I was trying to get across during my 60+ minutes on Zoom with the journalist, nor is it the sort of idea that I would spontaneously entertain under any circumstances, not least because I rarely have any reason to think about the future of combinatorics at all.
My best guess is that, toward the end of our conversation, Cepelewicz asked me to speculate on how AI might make a difference in mathematics, and — because I felt that my opinion had been heard, and because I am the sort of person who generally wants to be helpful — I grudgingly mentioned that I had seen evidence that specialists in combinatorics were making productive use of AI, but added that Quanta should confirm this with those specialists, since it’s an area in which I am by no means an expert.
In this way I was transformed from token skeptic to one more promoter of the AI narrative! This was undoubtedly the work of Quanta’s editors — I certainly don’t blame the journalist — and it’s consistent with practically all the media accounts I have read about AI and mathematics. At my insistence the hapless fact-checker agreed to convey my objections to Quanta and the quotation was deleted from the published article. Or more precisely, the quotation remained, but my name, as you can check for yourselves, was replaced by “Some mathematicians.”
Not even a token skeptic
When the article appeared I easily confirmed that the editors did include a quotation that misrepresented the opinion of at least one additional person, namely the anthropologist Rodrigo Ochigame. The option of indifference, whose inclusion was the precondition for my agreeing to be interviewed, was not represented at all. But that option had been foreclosed in advance by two energetic quotations from Andrew Granville, placed prominently near the beginning of the article. Granville is quoted as saying
The cat’s out of the bag.
and
[most mathematicians] have their heads buried firmly in the sand.
With the question framed in this way, the editors were discouraging dissent, thus protecting a hypothetical dissenter’s dignity; to admit indifference to the technology would have been tantamount to inviting Quanta’s photographers to the sandbox to film them immersing their heads in the sand.1
I don’t believe that Granville’s two quotations adequately represent his thoughts on the subject. You will look in vain for cats, bags, and sand in his careful article in the AMS Bulletin. And even the Quanta article quoted him rejecting the model of a future of “outsourcing more rigorous aspects of mathematics to AI”:
“I feel that my own understanding is not from the bigger picture,” he said. “It’s from getting your hands dirty.”
I have my own ideas about the importance of keeping my hands clean, or dirty, in my own search for understanding, but I will keep them to myself. This is a question best addressed by anthropologists who are attentive to the ritual aspects of our profession, its fear of — or revelling in — symbolic pollution, its “[p]lugging and chugging,” its “crafting and pounding,” its “nitty-gritty,” and its “boring and rote parts.” Philosophers of mathematics would also do well to pay attention to these matters. If (philosopher) Justin Clarke-Doane and I learned anything during the semester we jointly taught a course entitled “Mathematics and the Humanities” but subtitled “Mathematics and Philosophy,” it’s that attempts to isolate conceptually the core or the essence or the creative parts of mathematics from those aspects that appear better suited for machines that plug and chug quickly run up against the kinds of philosophical conundrums that are as intractable now as they were for Plato and Aristotle.
A different ritualized aversion to pollution was on display in the whole Quanta “Special Feature,” devoted to Science, Promise and Peril in the Age of AI, in which the Cepelewicz article appeared. It was expressed in the (ritual) avoidance of any reference to the fact that AI emerges not by spontaneous generation but rather as the result of specific decisions, by easily identifiable individuals and corporations, that have an outsized influence on the political decisions that guide our lives. The only reference2 I found in the “Special Feature” to the tech industry was the mention that transformers were “invented by Google,” a synecdoche that not only inadvertently illustrates the practice by which a multi-trillion-dollar corporation can take credit for the work of its engineers3 but also appears as the last of 19 entries in a “simple primer” that manages to omit any direct reference to the fact that artificial intelligence has something to do with machines. Even GPU doesn’t figure among the 19 “most essential terms in AI,” nor (as far as I can tell) anywhere else in the Special Feature.
But there is an indirect allusion to AI’s rootedness in the material world in a different entry on the same page:
Matrix Multiplication / The basic arithmetic that powers modern AI — and a major source of its massive energy demands. Most of the computations happening within neural networks involve huge tables of numbers, known as matrices, being multiplied together countless times. Researchers have been trying to optimize this process for decades, including using AI to do so. Removing it altogether could radically improve AI’s energy efficiency.
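To give a concrete sense of why this one operation dominates both the computation and the energy bill, here is a minimal back-of-envelope sketch of my own (it is an illustration, not something from the Quanta primer): a single fully connected layer in NumPy, together with a count of the multiply-accumulate operations it performs.

```python
# Illustrative sketch, not from the Quanta article: one fully connected layer,
# showing how quickly the multiply-accumulate count grows with layer size.
import numpy as np

def dense_layer_ops(batch, n_in, n_out):
    # A (batch x n_in) @ (n_in x n_out) product needs, for each of the
    # batch * n_out output entries, n_in multiplications and n_in - 1 additions.
    return batch * n_out * (2 * n_in - 1)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 1024))    # a batch of 32 inputs
W = rng.standard_normal((1024, 4096))  # one modestly sized weight matrix
y = x @ W                              # the matrix multiplication itself

print(y.shape)                          # (32, 4096)
print(dense_layer_ops(32, 1024, 4096))  # 268,304,384 operations for this one small layer
```

Multiply a figure like that by hundreds of layers, trillions of training tokens, and repeated training runs, and the “massive energy demands” mentioned in the primer’s entry are no longer an abstraction.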

Another article in the Special Feature raises the same issue but stops short of asking whether the trendline showing that “the exponential growth of AI could consume nearly all the world’s energy production by 2050”4 might have something to do with the “Peril” in the Special Feature’s title:
Despite the nuances, the energy use of artificial neural networks concerns many people. The current trend toward scaling — making the networks ever bigger to increase their computational power — will continue to accelerate their energy demand until more efficient chips and processes are invented. Adding too much biological detail into algorithms comes with a significant cost in computing power, energy and other resources. “We need to identify a sweet spot where we think that this level of detail is actually useful,” Ramaswamy said.
Quanta lists no mathematicians among the “many people” with “concerns,” whether about energy use or about anything else beyond the hypothetical effects on the narrow community of mathematical researchers. Had you read the article by Michael Moyer that concluded the Special Feature, entitled
Where Do Scientists Think This Is All Going?
and had you done so within a day or so of its publication, you would have seen my name in one of the bubbles, under a quotation that said, approximately,
AI poses a danger to the autonomy of mathematics.
This is something I actually said; but taken out of context, the quotation suggests that the autonomy of mathematicians would be superseded by the autonomy of machines. That is not what I meant at all! As I wrote to Jordana Cepelewicz immediately after I saw that bubble:
The quotation was taken wildly out of context. I remind you that the context was the misalignment of the priorities of the tech industry with those of mathematicians. So I am asking you to remove this quotation.
Once again, Quanta graciously acceded to my request. My perspective was not represented anywhere in the Special Feature, but at least it was not misrepresented. But that also means that the opinions of the colleagues who thanked me in Seattle for this newsletter, and of others who have thanked me since then, are not represented in Quanta. In fact, Quanta has never found space for these opinions, although, if my impressions are correct, they are widely shared among mathematicians.
So what can be done? If you agree with me that Quanta’s coverage of AI in mathematics has been one-sided — especially, but not solely, if you are concerned about the concentration of power in the tech industry and what this implies for mathematics — you can act on the advice I have taken to giving colleagues who thank me: you can speak up. Quanta leaves space for comments at the bottom of its articles. There is already a handful of comments on Jordana Cepelewicz’s article. David Bevan’s comment includes this sentence:
It doesn't reflect well on Quanta when exaggerated claims are repeated uncritically.
Bevan is referring to claims about DeepMind’s silver-medal-level performance at the International Mathematical Olympiad, but his sentence could serve as a summary of Quanta’s consistent approach to the implications of AI for mathematics. Do you find this approach problematic? Then tell Quanta what you think!
For those who find this image puzzling, I have already posted an AI-generated picture of such mathematicians.
Maybe one of two references. IBM Research is mentioned in the article on “The strange physics that gave birth to AI” in which, to be fair, some attention is given to the fact that the existence of AI is dependent on discoveries in condensed matter physics.
If you click on “Invented by Google” on the page where it is mentioned in Quanta you are taken to the original 2017 arXiv page of the article “Attention is all you need,” where you can read the names of the eight human beings who actually invented transformers. To quote David Noble’s America by Design, “the history of modern technology in America is of a piece with that of the rise of corporate capitalism.”
By clicking on “massive energy demands” in the above quotation, I found this link to the article with the ominous title “AI goes nuclear” in Bulletin of the Atomic Scientists. So a vigilant reader of the Special Feature may actually find reasons to be “concerned” about the unchecked development of this technology. Optimism about technology appears to be compulsory in Quanta, however; note that the passages quoted above end by anticipating “radical improvement” in an imagined future where “more efficient chips and processes are invented.”
In less than 4 years we will see Fields Medal quality work from completely autonomous models. Quanta is just trying to help prepare everyone for the reality that at each cognitive task they will have a brilliant AI assistant.
ps: I see you quote David Noble saying “the history of modern technology in America is of a piece with that of the rise of corporate capitalism.” He was a good friend back when he worked at Pomona. I suppose you know he got fired from MIT and effectively banned from US universities for pushing this bolshie line. That's why he ended up in Canada - no US university would employ him. I assume you have tenure, MH, so can't be threatened with that fate.