According to the psychologist Peter Gray, children today are more depressed than they were during the Great Depression and more anxious than they were at the height of the Cold War. A 2019 study published in the Journal of Abnormal Psychology found that between 2009 and 2017, rates of depression rose by more than 60 percent among those ages 14 to 17, and 47 percent among those ages 12 to 13. This isn’t just a matter of increased diagnoses. The number of children and teenagers who were seen in emergency rooms with suicidal thoughts or having attempted suicide doubled between 2007 and 2015.
To put it simply, our kids are not O.K.
For a long time, as a mother and as a writer, I searched for a single culprit. Was it the screens? The food? The lack of fresh air and free time, the rise of the overscheduled, overprotected child, the overarching culture of anxiety and fear?
Those things might all contribute. But I’ve come to believe that the problems with children’s mental and emotional health are caused not by any single change in kids’ environment but by a fundamental shift in the way we view children and child-rearing, and the way this shift has transformed our schools, our neighborhoods and our relationships to one another and our communities.
The work of raising children, once seen as socially necessary labor benefiting the common good, is now an isolated endeavor for all but the most well-off parents. Parents are entirely on their own when it comes to their offspring’s well-being. Many have had to prioritize physical safety and adult supervision over healthy emotional and social development.
No longer able to rely on communal structures for child care or allow children time alone, parents who need to work are forced to warehouse their youngsters for long stretches of time. School days are longer and more regimented. Kindergarten, which used to be focused on play, is now an academic training ground for the first grade. Young children are assigned homework even though numerous studies have found it harmful. STEM, standardized testing and active-shooter drills have largely replaced recess, leisurely lunches, art and music.
The role of school stress in mental distress is backed up by data on the timing of child suicide. “The suicide rate for children is twice as high during months when school is in session as when it’s not in session,” according to Dr. Gray. “That’s true for suicide completion, suicide attempts and suicidal ideation, whereas for adults, it’s higher in the summer.” But the problems with kids’ mental and emotional health are not only caused by what goes on in the classroom. They also reflect what’s happening in our communities. The scarcity of resources of every kind, including but not limited to access to mental health services, health care, affordable housing and higher education, means that many parents are working longer and harder than ever. At the same time that more is demanded of parents, childhood free time and self-directed activities have become taboo.
And so for many children, when the school day is over, it hardly matters; the hours outside school are more like school than ever. Children spend afternoons, weekends and summers in aftercare and camps while their parents work. The areas where children once congregated for unstructured, unsupervised play are now often off limits. And so those who can afford it drive their children from one structured activity to another. Those who can’t keep them inside. Free play and childhood independence have become relics, insurance risks, at times criminal offenses.
Tali Raviv, the associate director of the Center for Childhood Resilience, says many children today are suffering a social-skills deficit. She told me kids today “have fewer opportunities to practice social-emotional skills, whether it’s because they live in a violent community where they can’t go outside, or whether it’s because there’s overprotection of kids and they don’t get the independence to walk down to the corner store.” They don’t learn “how to start a friendship, how to start a relationship, what to do when someone’s bothering you, how to solve a problem.”
Many parents and pediatricians speculate about the role that screen time and social media might play in this social deficit. But it’s important to acknowledge that simply taking away or limiting screens is not enough. Children turn to screens because opportunities for real-life human interaction have vanished; the public places and spaces where kids used to learn to be people have been decimated or deemed too dangerous for those under 18.
And so for many Americans, the nuclear family has become a lonely institution — and childhood, one long unpaid internship meant to secure a spot in a dwindling middle class.
Something has to change, says Denise Pope, a co-founder of Challenge Success, an organization based in Palo Alto, Calif., that helps schools make research-backed changes to improve children’s mental health. Kids need recess. They need longer lunches. They need free play, family time, meal time. They need less homework, fewer tests, a greater emphasis on social-emotional learning.
Challenge Success also works with parents, encouraging them to get together with their neighbors and organize things like extracurricular-free days when kids can simply play, and teaching them how not to intervene in normal peer conflict so that children can build problem-solving skills themselves. A similar organization, Let Grow, helps schools set up unstructured free play before and after the school day.
Dr. Gray told me it’s no surprise that the program, which he consults for, has been well received. “Children are willing to get up an hour early to have free play, one hour a week,” he said. “It’s like a drop of water if you’ve been in the desert.”
These groups are doing important work, but if that kind of desperation is any indication, we shouldn’t be surprised that so many kids are so unhappy. Investing in a segment of the population means finding a way to make them both safe and free. When it comes to kids, we too often fall short. It’s no wonder so many are succumbing to despair. In many ways, America has given up on childhood, and on children.
Kim Brooks is the author of “Small Animals: Parenthood in the Age of Fear.”
OK, that was the nutshell version. If that answers your question, that's great.
The more detailed answer is "No, I won't mentor you, but in this blog entry I will tell you what to do instead, to get where you want to go". And I can reply with the URL to this post the next time someone requests mentoring.
I once wrote a comment on Hacker News about what *I* learned about ending up with awesome mentors. Here it is, slightly edited so it reads a little better.
(The OP asked) Recently I have tried approaching a few good developers through their blogs about various matters, including advice on how to go about some projects I'm undertaking, but I am surprised at the unfriendly responses I have received. Maybe I have been going about it the wrong way, but it got me thinking: Shouldn't the guys whose work we look up to be keen on what some of us young aspiring developers have to contribute to the community? I mean, sure, we don't have the experience or skills some of these guys have (yet), but we still have some ideas that are viable with the right technical skills to back them. If any of them want to reach out and help nurture some potential talent, it may very well benefit all of them in the end, whether financially or in terms of new ideas and experiences.
I commented thus:
I have some experience in this, so let me try to explain a couple of things that I learned in the "school of hard knocks".
Once upon a time I was in a situation where I thought I could contribute to something one of the best programmers in the world was working on, so I sent an email (I got the address from his webpage) and said something to the effect of "You say on this webpage you need this code, and I have been working on something similar in my spare time. I could write the rest for you over the next few months because I am interested in what you are doing", and I got a 2-line reply which said (paraphrased) "A lot of people write to me saying they'll do this, but I've never seen any code yet, so I am a little skeptical. Don't take it personally. Thanks. Bye."
So in the next email (sent a minute after I received his reply) I sent him a zipped file of code with an explanation that "this is what I've done so far, which is about 70% of what you want", and he immediately replied saying "Whoa, you are serious. That is refreshing...", and opened up completely, giving me a lot of useful feedback and very specific advice. He is a (very valued) mentor to this day.
Another time, I was reading a paper from a (very famous) professor at Stanford, and I thought I could fill in some gaps in that paper, so I wrote a "You know, your paper on X could be expanded to give results Y and Z. I could use the resulting code in my present project. Would you be interested in seeing the expanded results or code?" email, and I got a very dismissive one-line reply along the lines of "That is an old paper and incomplete in certain respects. Thanks."
So a few days later, I sent along a detailed algorithm that expanded his idea, with a formal proof of correctness and a code implementation and he suddenly switched to a more expansive mode, sending friendly emails with long and detailed corrections and ideas for me to explore.
Now I am not in the league of the above two gentlemen, but perhaps because I work in AI and Robotics in India, which isn't too common, I receive frequent emails to the effect of "please mentor me", often from students. I receive too many of these emails to answer any in detail, but if I ever got an email saying "I am interested in AI/Robotics. This is what I've done so far. Here is the code. I am stuck at point X. I tried A, B, C; nothing worked. What you wrote at [url] suggests you may be the right person to ask. Can you help?", I would pay much more attention than to a "please mentor me" email.
In other words, when you ask for a busy person's time for "mentorship" or "advice" or whatever, show (a) that you are serious and have gone as far as you can by yourself, (b) that you have taken concrete steps to address whatever your needs are, and (optionally, but especially with code-related efforts) (c) how helping you could benefit them or their project.
Good developers are very busy and have so much stuff happening in their lives and more work than they could ever hope to complete that they really don't have any time to answer vague emails from someone they've never heard of before.
As an (exaggerated) analogy, think of writing an email to a famous director or movie star or rock star, saying "I have these cool ideas about directing/acting/ music. Can you mentor me/give me advice?"
I am replacing the words "app" and "technical" in your sentence below with "film" and "film making".
"If I have an idea for a film that I want to develop, but my film-making skills limit me, it would be nice to have people to bounce the idea off and have it implemented." (So... please mentor me/give me advice/make this film for me.)
Do you think a top grade director (say Spielberg) would respond to this?
The fact that you at least got a 2 line response shows that the developers you wrote to are much nicer than you may think. They care enough not to completely dismiss your email, though they receive dozens of similar emails a week.
As someone else advised you on this thread, just roll up your sleeves and get to work. If your work is good enough, you'll get all the "mentoring" you'll need. "Mentoring" from the best people in your field is a very rare and precious resource and like anything else in life that is precious, should be earned.
My 2 cents. Fwiw. YMMV.
That says most of what I want to say.
Some minor notes now, addressing points raised in the latest emails.
If you claim to be "very passionate about X" but have never done anything concrete in X, I find it difficult to take you seriously. People who are really passionate about anything don't wait for "leaders" or "mentors" before doing *concrete* work in the area of their passion, however limited. Specifically, with regard to programming/machine learning etc., in the days of the internet, and with sites like Amazon or MIT OCW, you have no limits except those you impose on yourself.
I hate to sound all zen master-ey but in my experience, it is doing the work that teaches you what you need to do next. Walking the path reveals more of the map. All the mentoring a truly devoted student needs is an occasional nudge here or an occasional brief warning there. Working with uncertainty is part of the learning. Waiting for mentorship/leadership/"community"/ whatever to start working is a flaw that guarantees you will never achieve anything worthwhile.
Ok, pseudo-zen-master mode off. More prosaic version: "shut up and code". Or make a movie on your webcam, or write that novel. Whatever. Your *work* will, in time, bring you all the mentoring and community or whatever else you need.
As always My 2 cents. Fwiw. YMMV. Have a nice day.
For some reason Bangalore is crawling with people who first want to form a community and then start learning/working/whatever. These efforts almost invariably peter out uselessly. First do the work. Then, if you feel like "communing", talk to others who are also working hard. Please read this, sent to me by my friend Prakash Swaminathan.
So begins the tale of a human who literally stuck a giant television on their head so they could bring joy to the world in the form of unwanted TV sets. And just as it did for the person who brought it to our attention, it has totally made my day.
We’re talking big, heavy old-school CRTs. More than 50 of them, according to the report. Left right on people’s porches. Just get a load of that swagger as he or she strides up on camera:
And how do the homeowners repay this generosity? By letting our local news camera operator get a delightful shot of the cops hauling a truck full of old TVs...
In addition to the numerous pioneering works of science fiction by which he made his name, H. G. Wells also published a steady stream of non-fiction meditations, mainly focused on themes salient to his stories: the effects of technology, human folly, and the idea of progress. As Peter J. Bowler explores, for Wells the notion of a better future was riddled with complexities.
H. G. Wells worried constantly about the future of humanity. While he hoped for progress in human affairs, he was only too well aware that it was not inevitable and might not be sustained. Throughout his career he celebrated the technological developments that were revolutionizing life but feared they might lead to eventual degeneration or, as came to pass in 1914, a catastrophic war. He was also aware that there were disagreements over what would actually count as progress. Providing everyone with the benefits of modern industry might not be enough, especially as continued technological innovation would require the constant remodeling of society. Progressive steps introducing entirely new functions were episodic, open-ended and unpredictable, in both biological and social evolution. These uncertainties were compounded by a realization that, where technological innovation was concerned, it was virtually impossible to predict future inventions or what their long-term consequences might be. Even if progress continued, it would be much more open-ended than advocates of the traditional idea of progress had imagined.1
For Wells the most basic level of uncertainty arose from the fear that the human race might not sustain its current rate of development. In his 1895 story “The Time Machine” he imagined his time traveler projected through eras of future progress: “I saw great and splendid architecture rising about me, more massive than any buildings of our time, and yet, as it seemed, built of glimmer and mist.”2 But the time traveler ends up in a world brought down by social division and degeneration. The brutal Morlocks are the descendants of the industrial workers, while the childlike Eloi are the remnants of the leisured upper classes. This prediction was based on his zoologist friend E. Ray Lankester’s extension of the Darwinian theory. Lankester argued that because evolution works by adapting populations to their environment, progress is not inevitable and any species that adapts itself to a less active and hence less challenging way of life will degenerate.3 Here was the model for a more complex vision of progress in which any advance would depend on the circumstances of the time and could not be predicted on the basis of previous trends.
The Darwinian viewpoint is more clearly visible in Wells’ hugely successful non-fiction work The Outline of History, originally published in fortnightly parts in 1920. The survey starts from the development of life on earth and the evolution of the human species. Progress had certainly happened both in evolution and in human history from the Stone Age onward, but Wells shows that there was no predetermined upward trend. His exposure to the Darwinian vision of biological evolution (which continued in his collaboration with Julian Huxley to produce The Science of Life some years later) showed him that there were multiple ways of achieving a more complex biological structure — or a more complex society. Truly progressive steps in both areas were sporadic, unpredictable, and open-ended. When progress did occur in human society, Wells was certain that the driving force was rational thinking, science, and technological innovation. Yet history showed how all too often the benefits of creativity had been undermined by conservatism and social tensions, culminating in the disaster of the Great War.4
Wells was elaborating a new and less deterministic version of the idea of progress. Nineteenth-century society’s faith in the inevitability of progress had been misplaced, not just because it underestimated obstacles, but because it had assumed an oversimplified model of how development must take place. Whatever their differing views of the goal to be achieved, the thinkers of the previous generation — including the Marxists, who Wells admired to some extent — had all visualized history as the ascent of a ladder of developmental stages leading to a final utopia. Darwinism showed that the history of life was best represented by a branching tree, not a ladder, and Wells now saw that human history too led to many differing forms of complex society. And, just as the great “breakthroughs” in animal evolution had often come from insignificant beginnings, the most important advances in human history were not best characterized as continuations of previous trends. Wells takes the modern synthesis of science and technology, which he sees as emerging primarily in Europe, as a case in point. For most of its history, Europe had not been at the forefront of progress, yet its development of modern science and industry had catapulted it into world dominance. Wells openly compared this to the evolution of the originally insignificant mammals during the age of the dinosaurs.5
This modern breakthrough had been achieved in only one branch of the divergent tree of cultural evolution, a branch that had not been in the mainstream and was by no means the most advanced at the time. Wells was not the only thinker at this time to argue that the emergence of science in Europe could not have been predicted on the basis of previous historical trends. Alfred North Whitehead made the same point, suggesting that without this unlikely breakthrough, humanity might have remained stagnant for untold ages to come.6 Whitehead saw the rise of modern science as a philosophical development that did not become associated with technological invention until the nineteenth century. Wells argued that the underlying cause of Europe’s rise to world dominance was its isolated geographical position, which had encouraged the age of maritime exploration. Unlike the great empires of the past, Europe faced the unusual challenge of a geography dictated not by the land but by the sea — it faced outward to the Atlantic and beyond. The result was a culture that eventually promoted not just an industrial revolution but what Wells called a “mechanical revolution” — especially the invention of new sources of power including steam and electricity. For Wells it heralded “a new thing in human experience … such a change in human life as to constitute a fresh phase of history.”7
This kind of development, however, brought problems with it. The outburst of scientific and technological innovation was taking place in a society that had still not transcended the limitations of traditional culture and politics. Technology was misused for military purposes, and the Great War illustrated its potentially catastrophic consequences. In the years before war broke out, Wells had been one of the first to realize that new technologies such as aviation would make future conflicts even more devastating. This was the theme of his novel The War in the Air of 1908, while The World Set Free of 1914 predicted not only a new source of power derived from the latest discoveries of atomic physics, but also an atomic bomb. In the post-war era Wells was one of many worrying that the next war might destroy civilization altogether. His futuristic 1933 novel The Shape of Things to Come described the outbreak of a war that reduces most of the world to savagery. Yet Wells retained the hope that a small coterie of technocrats led by the aviation experts would survive and ultimately recreate society along more rational lines, ushering in the age of true progress. Humanity finally escapes the shackles imposed by the old cultural values.8
As to what form the hoped-for future society would take, Wells had very definite plans. The book promoted his long-standing campaign for a rationally ordered World State that would ensure the fruits of technological innovation were fairly distributed. He was no democrat, however, and saw this being driven by the activities of an elite group, the “scientific samurai”, who appear as the aviators who transform the world in The Shape of Things to Come. He appreciated that simply giving everyone material plenty might not be enough to satisfy their emotional needs. At first he seems to have thought along quasi-religious lines, imagining humanity achieving an almost spiritual unity. But in his screenplay for the Alexander Korda film Things to Come, based loosely on the book, he adds a concluding episode in which the prospect of spreading a transformed human race out into the cosmos by space travel offers a materialistic equivalent of religion, something that will give our lives an ultimate purpose. Even here, though, there is a threat that conservative thinkers will not approve of this disturbance to their predictable lives. In the final scenes a mob tries to destroy the giant gun that is about to fire the young cosmonauts into space (here Wells pays homage to Jules Verne). The leader of the “samurai” gestures to the heavens and offers us a choice: “All the universe or nothing … Which shall it be?” and the scene fades out to the caption “WHITHER MANKIND?”9
The suggestion that the World State will want to expand its activities into space points to another important component of Wells’ new vision of progress. He realized that once technological innovation becomes the driving force, there can be no static future utopia as previous manifestations of the idea of progress had imagined. Invention would continue and society would have to keep adjusting in response. Now that the genie of science-driven technology was out of the bottle, Wells became acutely aware that it would be hard to predict future inventions, and hard to foresee the consequences of those that succeeded. A truly rational society would need to take this into account and plan accordingly.
The element of unpredictability had become obvious in the early years of the century. Wells realized that the military applications of aviation might checkmate the hopes of optimists that rapid global transport would encourage world unity. This was the theme of The War in the Air, which, somewhat curiously, opens with the depiction of a world in which surface transport has already been transformed by the gyroscopic monorail invented by Louis Brennan. The monorail can cross chasms and seas on a single cable. Wells predicted its success, but in the real world the invention, although tested, never came into use.
Wells was also confronted with the difficulty of predicting the effects of new technologies in other areas. In another novel, The Sleeper Awakes, he drew on the American experience with skyscrapers to imply that soon we would all be living in giant mega-cities roofed over against the elements.10 But only a year later his Anticipations, a more serious effort at prediction, suggested that the invention of the electric train and the automobile would allow for “The Diffusion of the Great Cities” as the population moves out into suburbs. For Wells, we cannot predict the new technologies that will emerge from scientific discoveries, and from the ever-widening plethora of new inventions we cannot be sure which will actually be successful in the marketplace. Rival technologies will pull society in different directions, and it is hard to be sure which will triumph in the industrial struggle for existence. The speed of change is also hard to predict. In a later edition of Anticipations, Wells confessed that his original suggestion that aviation would not become commonplace until 1950 had turned out to be hopelessly pessimistic.11
Even when a new technology starts to catch on, it may be hard to imagine what the consequences of its success will be. In a 1932 radio talk, Wells used the example of the growing chaos on the roads to point out how difficult it had been to foresee the consequences of making motor cars available to a wider public when they were first introduced. It was now obvious that the road network would have to be redesigned to cope with the increased traffic. He called for the universities to have “Professors of Foresight” to grapple with the unintended consequences posed by future inventions.12
The historian Philip Blom calls the early twentieth century the “vertigo years”, when everyday life was transformed by a bewildering array of new technologies.13 Wells realized that this state of uncertainty would continue indefinitely, making it virtually impossible even for the enthusiasts to predict what would emerge. The technophiles hail their innovations as the driving force of progress, but they do not always foresee what will be invented — or what the ultimate effects on society will be. This is a situation we are acutely aware of today: few, if any, could have anticipated the impact of computers and the digital revolution, and we are only gradually becoming aware that these innovations have not brought us unalloyed benefits. The range of technologies that have turned out to have harmful side effects is now legion, a situation that Wells himself anticipated.
Peter J. Bowler is Professor emeritus of the History of Science at Queen’s University, Belfast. In addition to a number of books on the history of biology — including Fossils and Progress (Science History Publications, 1976) and The Eclipse of Darwinism (Johns Hopkins University Press, 1983) — most recently he has published A History of the Future: Prophets of Progress from H. G. Wells to Isaac Asimov (Cambridge University Press, 2017).
1. There are many studies of Wells’ life and work; those that deal especially with his predictions include Rosslyn D. Haynes, H. G. Wells: Discoverer of the Future (London: Macmillan, 1980); John Huntingdon, The Logic of Fantasy: H. G. Wells on Science Fiction (New York: Columbia University Press, 1982); and Patrick Parrinder, Shadows of the Future: H. G. Wells, Science Fiction and Prophecy (Liverpool: Liverpool University Press, 1995). More generally, see Peter J. Bowler, A History of the Future: Prophets of Progress from H. G. Wells to Isaac Asimov (Cambridge: Cambridge University Press, 2017).
2. H. G. Wells, “The Time Machine”, in The Short Stories of H. G. Wells (London: Benn, 1926), 27.
3. E. Ray Lankester, Degeneration: A Chapter in Darwinism (London: Macmillan, 1880). On Lankester’s influence on Wells, see Joe Lester, E. Ray Lankester and the Making of Modern British Biology, ed. Peter J. Bowler (Faringdon: British Society for the History of Science monographs, 1995), 178 and 198–202.
4. H. G. Wells, “The Making of our World”, in The Outline of History: Being a Plain History of Life and Mankind (London: Newnes, 1920), Vol. 1. See also H. G. Wells, Julian S. Huxley, and G. P. Wells, The Science of Life (London: Cassell, 1938). On changes in the concept of biological progress at the time, see Peter J. Bowler, Life’s Splendid Drama: Evolutionary Biology and the Reconstruction of Life’s Ancestry, 1860–1940 (Chicago: University of Chicago Press, 1996), especially chapters 7 and 9.
5. Wells, The Outline of History, 2:492.
6. Alfred North Whitehead, Science and the Modern World (Cambridge: Cambridge University Press, 1926), 136.
7. Wells, The Outline of History, 2:643–4.
8. H. G. Wells, The War in the Air (London: G. Bell, 1908); The World Set Free: A Story of Mankind (London: Macmillan, 1914); The Shape of Things to Come: The Ultimate Revolution (London: Hutchinson, 1933). The World Set Free is dedicated to the physicist Frederick Soddy.
9. H. G. Wells, Things to Come: A Critical Text of the 1936 London First Edition, ed. Leon Stover (Jefferson, N.C.: McFarland, 2007), 197–204. See also Leon Stover, The Prophetic Soul: A Reading of H. G. Wells’ Things to Come Together with the Film Treatment Whither Mankind and the Post-Production Script (Jefferson, N.C.: McFarland, 1987).
10. This was originally published as When the Sleeper Awakes in 1899; see H. G. Wells, The Sleeper Awakes, ed. Patrick Parrinder (London: Penguin Classics, 2005). The same scenario forms the backdrop to “A Story of the Days to Come”, in The Short Stories of H. G. Wells (London: Benn, 1926), 796–897.
11. H. G. Wells, Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought: new edition with the author’s specially written introduction (London: Chapman and Hall, 1914). This was originally published in 1900. “The Diffusion of the Great Cities” is the title of chapter 2; for the prediction about air travel see page 191, and on its timidity see the introduction, page ix.
12. H. G. Wells, “Wanted: Professors of Foresight”, The Listener 8 (November 23, 1932): 729–30 (broadcast November 19, 1932).
13. Philip Blom, The Vertigo Years: Change and Culture in the West, 1900–1914 (London: Weidenfeld and Nicolson, 2008).
TWO YEARS ago British chocoholics felt the pinch from the decision to leave the European Union. As sterling tumbled, global firms selling to the British market faced the same production costs as before, but got less money for each sweet sold. Rather than raise the price per chocolate, some chose to shrink the chocolate per price. The famous peaks on a bar of Toblerone grew conspicuously less numerous (though Mondelez, the bar’s maker, said Brexit was not the cause). Other products suffered the same “shrinkflation”: toilet rolls and toothpaste tubes became smaller. The threat of Brexit made the phenomenon more visible, but it is surprisingly common. Statisticians and policymakers need to take note.
Every first-year economics student quickly becomes familiar with charts of supply and demand, which place price on one axis and quantity on the other. Given a drop in demand, the charts show, firms can either sell fewer items at the prevailing price or cut prices to prop up sales. But online retailing, which makes it easier to collect fine-grained price data, reveals how poorly textbook models reflect real-world market dynamics. The prices of consumer goods, it turns out, behave oddly.
A forthcoming paper by Diego Aparicio and Roberto Rigobon of the Massachusetts Institute of Technology helps make the point. Firms that sell thousands of different items do not offer them at thousands of different prices, but rather slot them into a dozen or two price points. Visit the website for H&M, a fashion retailer, and you will find a staggering array of items for £9.99: hats, scarves, jewellery, belts, bags, herringbone braces, satin neckties, patterned shirts for dogs and much more. Another vast collection of items costs £6.99, and another, £12.99. When sellers change an item’s price, they tend not to nudge it a little, but rather to re-slot it into one of the pre-existing price categories. The authors dub this phenomenon “quantum pricing” (quantum mechanics grew from the observation that the properties of subatomic particles do not vary along a continuum, but rather fall into discrete states).
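Re-slotting a price rather than nudging it can be sketched in a few lines. This is a toy illustration of the idea, not the authors' model; the first three price points are those mentioned above, the fourth is invented:

```python
# A minimal sketch of "quantum pricing": rather than nudging a price by
# a little, a seller snaps it to the nearest of a few fixed price points.
# The £16.99 point is an assumption added for illustration.

PRICE_POINTS = [6.99, 9.99, 12.99, 16.99]

def requote(target: float) -> float:
    """Snap a desired price to the closest existing price point."""
    return min(PRICE_POINTS, key=lambda p: abs(p - target))

# A cost shock implies a "true" desired price of £11.80, but the item
# is listed at the nearest point instead:
print(requote(11.80))  # -> 12.99
```

Small shocks leave the listed price unchanged entirely; only a shock big enough to push the desired price past the midpoint between two price points moves the quote at all.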
Just as surprising as the quantum way in which prices adjust is how rarely they move at all. Retailers, Messrs Aparicio and Rigobon suggest, seem to design products to fit their preferred price points. Given a big enough shift in market conditions, such as an increase in labour costs, firms often redesign a product to fit the price rather than tweak the price. They may make a production process less labour-intensive—or shave a bit off a chocolate bar.
Central banks are starting to see the consequences. Inflation does not respond to economic conditions as much as it used to. (To take one example, deflation during the Great Recession was surprisingly mild and short-lived, and after nearly three years of unemployment below 5%, American inflation still trundles along below the Federal Reserve’s target rate of 2%.) In its recently published annual report the Bank for International Settlements, a club of central banks, mused that quantum pricing and related phenomena help account for such trends.
But firms’ aversion to increasing prices may be as much a consequence of limp inflation as a contributor to it. When the price of everything rises a lot year after year, as in the 1970s and 1980s, firms can easily adjust the real, inflation-adjusted cost of their wares without putting off shoppers. A 5.5% jump in the cost of a pint after years of 5% increases does not send beer drinkers searching for other pubs in the way that a 0.5% hike after years of no change might. Thus falling inflation can make prices “stickier”. To compensate, firms instead find other ways to impose costs on buyers—such as making products smaller or lower-quality.
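The pub example can be made concrete. In real, inflation-adjusted terms the two price rises are almost identical; what differs is the nominal jump drinkers actually notice (a minimal sketch, using the percentages from the text):

```python
# The pint example in numbers: the real (inflation-adjusted) price rise
# is roughly the same in both worlds, but the visible nominal rise is not.

def real_change(nominal_rise: float, inflation: float) -> float:
    """Real price change given a nominal rise and background inflation."""
    return (1 + nominal_rise) / (1 + inflation) - 1

high_inflation = real_change(0.055, 0.05)  # 5.5% rise after years of 5%
low_inflation = real_change(0.005, 0.00)   # 0.5% rise after years of 0%

print(f"real rise, high-inflation world: {high_inflation:.2%}")  # ~0.48%
print(f"real rise, low-inflation world:  {low_inflation:.2%}")   # 0.50%
```

Both worlds deliver a real rise of about half a percent, but only in the low-inflation world does the increase stand out against a backdrop of unchanged prices.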
Labour markets are affected, too. Wages are notoriously sticky, especially downwards. In a world of low inflation, the ability to trim pay by raising wages less than inflation is lost to firms, with serious macroeconomic consequences. Economists blame sticky wages for causing unemployment during recessions. Facing reduced demand, firms that cannot cut pay cannot cut prices without squeezing margins, so they reduce output instead—and sack workers.
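The trim-by-inflation mechanism is worth spelling out. With moderate inflation, simply freezing nominal pay delivers a real pay cut that no worker has to sign off on; with zero inflation, the same real cut requires an outright nominal reduction. A sketch, using an assumed 3% inflation rate and an assumed salary:

```python
# Assumed figures: a £30,000 salary and 3% inflation. Freezing nominal
# pay while prices rise 3% trims real pay by about 2.9% with no explicit
# cut; at zero inflation the same trim needs a visible nominal cut.

def real_wage(nominal: float, price_level: float) -> float:
    return nominal / price_level

w0 = real_wage(30_000, 1.00)  # real pay today
w1 = real_wage(30_000, 1.03)  # nominal freeze, prices up 3%

print(f"real pay change under a nominal freeze: {w1 / w0 - 1:.2%}")
```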
But nimble firms have other options: the employment version of shaving a bit of chocolate from the bar. Some cut costs by boosting output per worker, often by driving workers harder. Tellingly, growth in output per worker now tends to fall in booms and rise during busts, precisely the opposite of the pattern 40 years ago, when inflation was high. Firms can respond to market pressures by reducing the benefits available to workers; Asda, a supermarket, recently announced plans to slash British workers’ holiday allowances. Or they can offer workers more tortuous schedules. Research published in 2017 suggests that being able to vary workers’ hours from week to week is worth at least 20% of their wages. On the flipside, during good times firms often opt to reward workers with office perks and one-off bonuses, rather than pay rises that cannot easily be clawed back during downturns.
The uncertainty principle

If it happens on a sufficiently large scale, the practice of tweaking quality in lieu of price could play havoc with essential economic data. Statistical agencies do their best to account for changing product quality, but if adjustments are unexpectedly common or subtle then muted inflation figures could easily be concealing a more turbulent economic picture. Central banks watching for big swings in inflation or wage growth as a sign of trouble could be reacting to figures that bear far less relation to business conditions than they used to.
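A stylised example (with invented numbers) shows how unnoticed quality tweaks can hide inflation from the statistics: if a pack shrinks each year while its sticker price holds steady, measured inflation is zero unless the statisticians catch the change, yet the quality-adjusted price is climbing:

```python
# Invented figures: a product whose sticker price is flat for three
# years while the pack quietly shrinks. Sticker-based inflation reads
# zero; the quality-adjusted (per-gram) price tells another story.

posted_price = [1.00, 1.00, 1.00]    # sticker price each year
grams = [200, 190, 180]              # pack size each year

sticker_inflation = posted_price[-1] / posted_price[0] - 1
per_gram = [p / g for p, g in zip(posted_price, grams)]
adjusted_inflation = per_gram[-1] / per_gram[0] - 1

print(f"sticker inflation over the period:          {sticker_inflation:.1%}")
print(f"quality-adjusted inflation over the period: {adjusted_inflation:.1%}")
```

Here the unadjusted series shows no inflation at all, while the per-gram price has risen by about 11%; the gap is exactly what would mislead a central bank watching only the headline series.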
What’s more, the substitution of quality for price as firms’ main way of responding to changing market conditions weakens the case for keeping inflation low and stable. Inflation makes relative prices less informative, economists reckon, making it harder to decide what to buy and how to spend. Rather than clarity, low inflation has brought a different sort of confusion: one of shrinking chocolate bars and lost holidays.