
The Computer Genius the Communists Couldn't Stand (2017)


A million per second

An advertising leaflet for the K-202 computer, photo: promo materials

The K-202 computer made its debut in 1971 at the Poznań International Fair. It was small enough to fit into a briefcase and could conduct a million operations per second – many more than the PCs that conquered the world a decade later. In addition, this revolutionary Polish computer cost around $5,000 – not at all pricey given its unique features. On the contrary, it was much cheaper than its main Polish competitor, the Odra – a slower and much bigger device which, like many other computers around the globe at the time, was around the size of a cupboard.

Despite all of this, two years later, the ingenious constructor of the K-202, Polish inventor Jacek Karpiński, was escorted out of his factory by guards armed with rifles and all of the K-202s in production were thrown out. On top of that, the authorities of the communist regime banned him from creating any other devices.

Why would such a fate befall the would-be Polish Bill Gates or Steve Jobs (as he’s dubbed in today’s Poland)? Why on earth would any country deprive itself of potentially becoming a leader in a vital and cutting-edge field of technology? This article looks into these very questions and takes a closer look at the man behind the machine: Jacek Karpiński.

On the roof of Europe

The first days of August 1944, photo: Stefan ‘Kubuś’ Bałuk

Jacek Karpiński was to be born on the roof of Europe, at least that’s what his parents had planned for him. In a 2009 film about him, made toward the end of his life by Polish Television, he says the following:

I was to be born on Mont Blanc, there’s a little hut over there, called maybe Courmayeur. Just below the summit, that’s where I was supposed to come into this world – a completely crazy idea.

Both of his parents were mountaineers, hence this ‘crazy idea’, which they eventually dropped because the hut was somewhat Spartan. Jacek was ultimately born in Turin on 9th April 1927. Nevertheless, the anecdote illustrates the ‘sky-is-the-limit’ mindset that ran in the Karpiński family and would in time become one of Jacek’s personality traits.

His father, Adam, who was killed by an avalanche while climbing in the Himalayas in 1939, was an aircraft designer. It was he who proposed the construction of a low-wing plane years before that kind of aircraft was first introduced – unfortunately, the idea wasn’t approved by his bosses. Jacek’s mother, on the other hand, was a professor who specialised in rehabilitation. She was decorated with one of Poland’s most important awards, the Virtuti Militari, for serving as a liaison officer during the Polish-Bolshevik War.

Jacek went to war himself as well. When he was only 14 years old (claiming he was older), he joined the Polish resistance in World War II. He participated in reconnaissance missions and served alongside Krzysztof Kamil Baczyński, the noted Polish poet who perished in the Warsaw Uprising. Karpiński, who also fought in the Uprising, was shot in the spine, paralysed, and transported out of Warsaw – and he survived. Thanks to his and his mother’s efforts he regained mobility, although he was left with a permanent limp. He never had the bullet removed either; it remained in his back for the rest of his days.

You’re fired

The AKAT-1 computer, photo: Wikipedia

After the war, Karpiński completed high school and in 1951 he graduated from the Warsaw University of Technology. He had considered becoming a composer, as he loved music dearly, but in the end he chose to pursue electronics. Even though this may seem unthinkable, as a former freedom fighter he had trouble finding employment upon graduating from university.

After World War II, the communist regime in Poland considered members of the resistance a threat to its existence, convinced that people who had risked their lives to free Poland of its Nazi oppressors could also act to undermine the new Soviet-backed regime. Eventually though, after being thrown out of several workplaces because of his wartime past, Karpiński was hired at an electronics plant, where he constructed a shortwave radio transmitter that proved good enough to be used by the Ministry of Foreign Affairs.

By 1955 Karpiński was working with the Polish Academy of Sciences, where he created, among other things, the AAH mathematical machine, which increased the accuracy of weather forecasts by 10%. In 1959, under the auspices of the academy, he also constructed the AKAT-1, the world’s first transistor-based analogue computer for analysing differential equations. The device, stylishly designed at the Warsaw Academy of Fine Arts, was the size of a small desk and showed the results of its work on a built-in screen.

The construction of this computer prompted the Academy of Sciences to register Karpiński for a global technological talent competition organised by UNESCO in 1960. Of course, he won. As a result, he had the opportunity to go to the United States for two years and further his education at Harvard and MIT.

High toxicity

This is how Karpiński described his trip to America in an interview he gave CRN magazine in 2007:

I was treated like a king, which by the way made me feel quite uncomfortable. I was only in my early thirties. After I finished studying, I asked if I could visit a whole list of companies and schools. UNESCO agreed. At Caltech I was greeted by the rector and all the deans, in Dallas – by the city’s mayor. Everybody wanted me to work for them, from IBM to the University [of California] in Berkeley.

Karpiński was even let into what he described later as a ‘top-secret military and government research facility’ where he was able to familiarise himself with American work on artificial intelligence. This all goes to show how seriously he was treated by the Americans, who badly wanted him to stay in their country. Karpiński, however, decided to return to his homeland, hoping that one day the communist regime would collapse and that his inventions would be able to serve a free Poland, not a foreign country.

In 1962, he returned to Poland and resumed work at the Polish Academy of Sciences. Two years later he presented his Perceptron, a transistor-based neural network hooked up to a camera that could identify shapes shown to it (e.g., a triangle drawn on a piece of paper) and learn by itself. This was only the second such device in the world (the other was in the USA), but rather than earning a promotion for his accomplishment, Karpiński had to leave the academy – his superiors grew so envious of his invention that he decided to find another place of work, one where the atmosphere was less toxic.

It’s impossible

Jacek Karpiński at the KAR-65 computer, photo: public domain

Karpiński ended up at Warsaw University’s Institute of Experimental Physics. At the time, the institute was receiving far more data from the famous CERN laboratory than it could process. To solve this issue, Karpiński, together with a small team, constructed a computer that would analyse the data concerning collisions of elementary particles. It was ready in 1968, after three years of work.

Called KAR-65, the transistor-based machine was the size of two cupboards and was controlled via a console as big as a desk. It could conduct 100,000 operations per second and served the institute for the next twenty years. Even though this was yet another big success of his, Karpiński had no intention of taking it easy. When working on the KAR-65, he was already thinking about his next project: a computer that could fit into a briefcase. At a time when computers could take up entire rooms this must’ve seemed like ‘a completely crazy idea’. However, unlike his father’s idea to build a low-wing aeroplane, this one was to be realised, even if the road leading to it was to be a winding one.

What Karpiński had envisioned this time was of a much bigger scope than his earlier technological inventions. He was looking to make a versatile micro-computer (a term used back then to describe computers comparable in size to today’s PCs) not limited to a specific scientific use. At the time a device like that would’ve been at the forefront of international technological development. The Institute of Experimental Physics couldn’t come up with the funds required to back such an ambitious project, so Karpiński decided to go to the army with his idea. Even though there was some initial interest in his micro-computer, the final decision was a no-go. The conclusion was based on the findings of a special committee appointed to review Karpiński’s idea, which argued that the project was impossible to realise, because… if it were possible it would’ve already been done by the Americans.

A little help from my friends

Photo on an advertising leaflet handed out in 1972 at the Poznań International Fair

Karpiński, however, wasn’t going to give up on his project that easily. With the help of a well-connected British friend, he managed to present his idea to computer specialists in England. Unlike the committee back in Poland, they were enthralled by it, recognising it for what it was – a brilliant design. Karpiński could’ve set up shop on the Isles, as the Britons were eager to manufacture his product, but he decided to go back to Poland. Acting on the same principles that made him return from his educational trip to the USA, he wanted to take one more shot at persuading the communist authorities to give his project the go-ahead.

Eventually, thanks to the British specialists’ seal of approval and the help of another friend, a journalist by the name of Stefan Bratkowski who opened some important doors for him, Karpiński was given the green light. Thus in 1970, the Microcomputers Plant was established. Located in Warsaw, it employed Polish workers but used British components and financing – the required parts weren’t available in Poland and the communists weren’t at all eager to throw money at the project.

A printer, camera, radar

Jacek Karpiński, creator of the K-202 minicomputer, at the Poznań International Fair, photo: Aleksander Jalosiński / Forum

Within a year Karpiński delivered on his idea. He and a team of engineers including Zbysław Szwaj, Elżbieta Jezierska, and Krzysztof Jarosławski, had worked day and night, convinced that they were doing something remarkable – Karpiński’s enthusiasm proved infectious. The result of their efforts was the famous K-202 computer, at the time of its creation an absolutely exceptional piece of hardware.

The K-202 could conduct a million operations per second – many more than the PCs that became popular a decade later. It was designed to be modular, which meant you could connect or disconnect various components of it: memory blocks, ports, etc. Today this may seem obvious but back then it was a revolutionary solution. The 16-bit machine also made use of paging, which according to the Collins English Dictionary is ‘the transfer of pages of data between the main memory of a computer and its auxiliary memory’. Thanks to Karpiński’s skilful implementation of this method of increasing memory, the K-202 could have up to 8 MB, whereas other micro-computers at the time had no more than 64 KB.

It’s believed that Karpiński paved the way for today’s common use of paging in computer memory systems. On top of all this the K-202, running on Karpiński’s original operating system, could have various peripheral devices connected to it: a camera, a printer, even radar. The computer was a multi-purpose device – it could’ve been used in an office or for engineering work. It was also very reasonably priced but, most importantly, its system unit, the case containing the system’s bowels that was connected to an external monitor and keyboard, could fit into a briefcase, just as Karpiński had promised.
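The basic trick paging plays can be sketched in a few lines of Python. This is a toy model of address translation only, not the K-202’s actual circuitry; the 4 KB page size and the example page table are assumptions for illustration:

```python
# Toy model of paged memory addressing (illustrative only – not the
# K-202's actual design). A 16-bit machine can name only 64 KB of
# addresses directly, but a page table can map each small virtual
# page to a frame anywhere in a much larger physical memory.

PAGE_SIZE = 4096  # assumed page size in bytes, for illustration

def translate(virtual_addr, page_table):
    """Map a 16-bit virtual address to a physical address."""
    page = virtual_addr // PAGE_SIZE    # which virtual page
    offset = virtual_addr % PAGE_SIZE   # position inside that page
    frame = page_table[page]            # physical frame holding the page
    return frame * PAGE_SIZE + offset

# Sixteen 4 KB pages span the 64 KB virtual space, while the frames
# can live anywhere in an 8 MB physical memory (2,048 frames of 4 KB).
page_table = {0: 1500, 1: 7, 2: 903}    # sparse example mapping
print(translate(0x1004, page_table))    # page 1, offset 4 -> 7*4096 + 4 = 28676
```

Remapping page-table entries is what lets a small address space reach far more physical memory than it can name at any one moment.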

Pour tea on it & throw it off the table

The Wrocław Elwro Electronics Plant existed in the years 1959-1993 and produced e.g., calculators as well as Odra and RIAD computers. In the image you can see the mounting of components. Photo: Stanisław Kokurewicz / Forum

One would expect that for such an achievement in the greatly important field of computer science Karpiński would get some sort of recognition, a bonus at least… After all, thanks to him, Poland under the communist regime had the opportunity to become a world leader in technology. Unfortunately, that wasn’t the way things worked back then. There were powerful forces at play trying to ruin Karpiński’s success.

When Karpiński showed the K-202 at the Poznań International Fair in 1971 it drew far more of the authorities’ attention than its main Polish competitor, the slow and bulky Odra. The press was excited; here’s what the weekly Perspektywy wrote about it:

A micro-computer based on fourth-generation electronic components has been made; it is the most universal machine of its kind in the world. It computes with a speed of a million operations per second, a result that can be matched only by the American minicomputer Super Nova and the English Modular One.

The manufacturer of the Odra, the Elwro company, was, however, better connected with the regime than Karpiński. Instead of improving its product to catch up with the competition, Elwro began to subvert Karpiński’s position in any way it could, not shying away from slander. Moreover, the Soviets wanted to introduce a single computer to the entire Eastern Bloc, a crude rip-off of a by then already outdated IBM machine. Called RIAD, the device constructed by Nikolai Lavronov was the K-202’s opponent. Its constructor even got to see the revolutionary Polish micro-computer during a trip to Poland. Here’s how Karpiński described that moment, as reported by the Puls Biznesu daily in 2008:

Lavronov couldn’t help but wonder how I could fit into a briefcase what he required a whole wall of space for. When I poured some tea over the K-202 and then threw it off the table, his eyes went wide. The computer was still working.

Karpiński’s computer could be so small and resilient because it used Western components. Even though they were vital to the functioning of the K-202, they might have raised suspicion among the authorities of the Eastern Bloc, as they were elements imported from beyond the Iron Curtain and used in the sensitive field of information processing. Also, it didn’t help that during a visit by communist dignitaries to his factory Karpiński called one of them ‘fit only for constructing chamber pots’.

Real pigs

The KAR-65 computer at the Warsaw Museum of Technology and Industry, photo: Wikipedia

This all led to the shutdown of Karpiński’s operation in 1973. He was escorted out of his factory by men armed with rifles who made very sure he wouldn’t and couldn’t retrieve any sensitive information or components from his workplace. All of the K-202s in production (about two hundred) were thrown away. Before then, only thirty such computers had been manufactured. As if that wasn’t enough, the communists also banned him from making any other computers and wouldn’t issue him a passport. As a result, in a gesture of protest, Karpiński and his wife Ewa moved to the countryside, where they farmed pigs and chickens. He once said to a journalist who visited him at his new place of residence that ‘he preferred real pigs’.

He finally received a passport in 1981 and left Poland, a place where at the time it was impossible for him to pursue his vocation – the creation of electronic devices. Karpiński moved to Switzerland, where he designed, among other things, the Pen Reader, a hand-held device that scanned printed text into a computer. Soon after the fall of communism, in 1990, he returned to Poland wanting to manufacture this device, which predated the first Japanese scanner of its kind by more than a year. Unfortunately, due to credit problems, Karpiński didn’t manage to do so and even lost the house in Warsaw where he had lived after his return. Eventually he moved to Wrocław, where he made a living designing websites.

Certainly a genius

Jacek Karpiński, Warsaw, 1993, photo: Grzegorz Rogiński / Forum

Jacek Karpiński passed away on 21st February 2010. For the bravery he exhibited in the Warsaw Uprising he was decorated with three Cross of Valour medals.

He certainly was a genius, a genius who had a clear vision of what he was doing and was utterly absorbed by it.

That is how Diana Wierzbicka, who worked with him on the construction of the KAR-65, remembers Karpiński in the film about his life. This device, as well as the AKAT-1 and a K-202 are in the collection of Warsaw’s Museum of Technology and Industry. So in the end, it’s turned out how Karpiński wanted it to: the communist regime is gone but his technology made under it exists in a free Poland.

Author: Marek Kępa, June 2017


GaryBIshop: Great story! I'd like to see the film.

Alexa had “no profit timeline,” cost Amazon $25 billion in 4 years

Photo illustration: an Echo Dot smart speaker with Alexa’s blue light ring displayed. (credit: Getty)

The Amazon business unit that focuses on Alexa-powered gadgets lost $25 billion between 2017 and 2021, The Wall Street Journal (WSJ) reported this week.

Amazon claims it has sold more than 500 million Alexa-enabled devices, including Echo speakers, Kindle readers, Fire TV sets and streaming devices, and Blink and Ring smart home security cameras. But since debuting, Alexa, like other voice assistants, has struggled to make money. In late 2022, Business Insider reported that Alexa was set to lose $10 billion that year.

WSJ said it got the $25 billion figure from "internal documents" and that it wasn’t able to determine the Devices business's losses before or after the shared time period.


GaryBIshop: A wrong plan is better than a vague plan.

Microsoft on CrowdStrike outage: have you tried turning it off and on? (15 times)

Image: Channel 4 / Talkback Thames

Have you turned it off and on again? That familiar refrain from IT departments and The IT Crowd is being echoed by Microsoft today as a recommended way of fixing the faulty CrowdStrike update that has taken down thousands of Windows PCs and servers.

In a support note on Microsoft’s Azure outage page, the company says it has heard from customers that rebooting virtual machines and PCs multiple times can help. “We have received feedback from customers that several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage,” says Microsoft.

If rebooting a machine 15 times doesn’t do the trick, Microsoft recommends the workaround that many IT...


GaryBIshop: I loved that show!

Electric eels inspire novel “jelly” batteries for soft robotics, wearables

Researchers have developed soft, stretchable "jelly batteries" that could be used for wearable devices or soft robotics. (credit: University of Cambridge)

Inspired by the electric shock capabilities of electric eels, scientists have developed a soft, stretchable "jelly" battery ideal for wearable devices or soft robotics, according to a new paper published in the journal Science Advances. With further testing in living organisms, the batteries might even be useful as brain implants for targeted drug delivery to treat epilepsy, among other conditions.

As previously reported, the electric eel produces its signature electric discharges—both low and high voltages, depending on the purpose for discharging—via three pairs of abdominal organs composed of modified muscle cells called electrocytes, located symmetrically along both sides of the eel. The brain sends a signal to the electrocytes, opening ion channels and briefly reversing the polarity. The difference in electric potential then generates a current, much like a battery with stacked plates.

Vanderbilt University biologist and neuroscientist Kenneth Catania is one of the most prominent scientists studying electric eels these days. He has found that the creatures can vary the degree of voltage in their electrical discharges, using lower voltages for hunting purposes and higher voltages to stun and kill prey. Those higher voltages are also useful for tracking potential prey, akin to how bats use echolocation. One species, Volta's electric eel (Electrophorus voltai), can produce a discharge of up to 860 volts. In theory, if 10 such eels discharged at the same time, they could produce up to 8,600 volts of electricity—sufficient to power 100 light bulbs.


GaryBIshop: It's embarrassing when these writers get electricity wrong. And such a stupid statement! One produces 860 volts so 10 could produce 8600! Wow, you multiplied by 10! Amazing! I can "power 100 light bulbs" with 1.5 volts of electricity. Power is the relevant concept, not voltage. I expect better from Ars.

Panic at the Job Market


“I have the two qualities you require to see absolute truth: I am brilliant and unloved.”

ready for another too-long article about personal failure while blaming the world for our faults? let’s see where we end up with 7,000 9,000 10,000 11,500 words this time.

this post is sponsored by me trying to not get evicted. funding appreciated: https://github.com/sponsors/mattsta

TOC:

Job Openings vs. Interest Rates

how are you doing, fellow unemployeds? enjoying riding your bikes midday past the three-piece suits?

Interest Rates

so, uh, what’s going on?

Basically, all the “free money” went away when the gubbment mandated interest rates go from years of declining-or-near-zero percent to now over 5% (curiously, a 5% increase in the fed rate also caused all credit card rates to go from 9% to 30% over the same timeframe. what world).

Why would interest rates cause jobs to go away? Remember an interest rate is essentially “the price of money” — a higher interest rate means money is more expensive itself. Also with higher interest rates, organizations with millions and billions of cash sitting idle can park their money in safe government-backed interest accounts to grow their balances risk-free instead of taking on risk assets seeking outsized returns.
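To put rough numbers on that parking-it-risk-free incentive (the $1B war chest and the rates are illustrative assumptions, not figures from this post):

```python
# Illustrative opportunity-cost arithmetic for idle corporate cash.
def risk_free_yield(cash, rate):
    """Annual interest from parking cash in a risk-free account."""
    return cash * rate

war_chest = 1_000_000_000  # a hypothetical $1B of idle cash

high_rate = risk_free_yield(war_chest, 0.05)   # ~$50M/year at 5%
zero_era = risk_free_yield(war_chest, 0.005)   # ~$5M/year at 0.5%
print(high_rate, zero_era)
```

At near-zero rates the same pile of cash earns almost nothing, which is why risk assets like startup funding look comparatively attractive in those years and dry up when rates climb.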

What counts as risk assets avoided during high interest rate periods? Well, funding companies with uncertain futures is a pretty risky asset. So, at times of high interest rates, the weaker companies collapse, strong companies use high interest rates as an excuse to “clean house” every 10 years, then a couple hundred thousand previously high compensation workers discover there are no jobs for anybody anymore over the next 2-4 years.

By the power of drawing two lines, we see correlation is causation and you can’t argue otherwise:

interest rates go up, jobs go down. never a miscommunication. you can explain that.

Company Structures

Why do interest rate increases specifically destroy tech jobs while other sectors like minimum wage part time jobs or construction jobs have been increasing over rising interest rate periods?

Tech companies fall into one of four categories:

  • nepo companies where your friends have unlimited money, so you get to live in a fantasy world of building unrealistic unmarketable dreams using family nepo money. You never have to really interface with reality to ever be proven wrong. See things like the VC funded grilled cheese startup or the VC funded baby food dispenser or the nextdoor neighbor VC funded medical fraud or the VC funded megalomania “i will become king of Earth” desk renting company or the VC funded stanford drugs-and-sex crypto club not to be confused with the VC funded berkeley apocalypse cult, etc.
  • speculation companies where somebody has an idea, but no product yet and no customers, and also no idea if there’s a market for the product, but they get money to try and build an organization to find out if the idea will work anyway.
  • initial growth companies where you had an idea, it starts to become popular, so you trade some ownership in exchange for cash/capital funding for growing your ideas into generating as much revenue as possible. This is also where things like the world famous YC incubator and 500 Cats and Techmoan come into play. Initial growth companies are the next step up from a speculation company after products begin experiencing actual customer demand.
    • fun fact: before the modern “tech accelerator” era, each big city, not named san francisco or boston, had something like a “startup boss” where you’d potentially pay hundreds to thousands of dollars to “present your idea” to the local “angel funding mafia” and then, if you passed, they would give you $50,000 in exchange for taking 80% of your company. This status quo lasted for decades until YC proved the system was essentially fools funding fools. The other fun part: after the “startup mafia” in your city legally owned 80% of your company, they would let you work as founder for about 6-18 months then just fire you and replace you with their own friends. At the first 3 companies I worked at, the CEOs were failed lawyer friends of VCs, installed by the VCs to replace the original founders, due to VC using exploitative legal trickery at every turn possible.
  • stable era companies with a repeatable GTM capability with access to customers and recurring revenue cycles. Stable era companies are essentially what normal people think of as “a company” — a self-supporting corporate entity with growth and stability for all involved (usually).

Each category of company has its own benefits and drawbacks.

Related to interest rates, the less successful a company, the more it relies on VC funding, and during high interest rate periods VC funding tends to slow down or vanish completely. Higher interest rates also mean your customers have more restrictive spending conditions, so your customers will pull back, reduce, or leave too. Interest rate growth kills speculation companies and truncates or collapses initial growth companies into baseline sustenance mode until the economy opens up to take on more risk again.

Company Level to Compensation Scale

When companies have different sizes, funding, revenue, and war chest balances, it all impacts how much compensation lowly employee scum can receive.

For ease of brain math, we’re going to roughly normalize fully loaded compensation to a per-day rate instead of per-year because the numbers feel more meaningful on a daily scale (and just remember to divide all compensation numbers in half to account for employment taxes).
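That brain math can be sketched directly (the $500k/year package and the 250 workdays are assumptions for illustration):

```python
# Rough per-day compensation math as described above (illustrative numbers).
def per_day(yearly_comp, workdays=250):
    """Fully loaded yearly compensation normalized to a per-day rate."""
    return yearly_comp / workdays

def after_rough_tax_halving(daily_rate):
    """Halve the daily rate as shorthand for employment taxes."""
    return daily_rate / 2

daily = per_day(500_000)               # a $500k/year package
print(daily)                           # -> 2000.0 per day fully loaded
print(after_rough_tax_halving(daily))  # -> 1000.0 per day after the halving
```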

Nepo Company Pay Scale

If you are a fancy imaginary nepo company, there’s no limit to what you can pay employees because your entire company is a weird in-group fictional entity. You can be paid $100 per day or $100,000 per day depending on your personal connection to the power structures. There’s no real conclusions to be drawn from imaginary companies paying their friends whatever they can.

Nepo companies are the most frustrating because they suck up all the media attention for being outsized celebrity driven fads (and they also set the tone for accelerating unsustainable fad-to-failure cycles).

These companies leave you thinking “i could have made much better products with so much less wasted funding. too bad I wasn’t born rich/connected/bro-lyfe.” These companies, due to their imbalance of meta-funding-vs-tiny-impact, usually explode in a huge exposé about fraud and corruption leaving the founders no choice but to either fail upward forever or do not pass go.

Speculation Company Pay Scale

The smallest form of companies (pre-revenue, pre-growth, pre-customer, barely with an idea) are speculation companies doing what they can to get by every day.

Speculation companies are places to work when you’re a crazy child and so ambitious for a juvenile. You won’t see Tim Apple going to work for two 21 year old code bros living in Peoria trying to disrupt Google.

Working at pre-revenue low-idea companies is a real loser move if you have any personal marketable experience at all.

These companies will have a pay scale somewhere around 50% to 80% of market rate yet expect 200% to 500% more work than any other professional setting. If you’re not living in a $600/month studio apartment, it’s not worth your time (unless you have some insider information about the organization being a secret tax avoidance or money laundering acquisition scam. sometimes zero-product zero-revenue companies go from no idea to being acquired for a couple billion dollars in a year or two — this is a common pattern for “academic researchers” to form a fake company, vest their shares, then get acquired on basically a huge tax-advantaged hiring bonus at some big tech).

Initial Growth Company Pay Scale

Initial growth companies have been the ruin of many a poor boy.

Initial growth companies are the worst combination of high-risk, low-reward effort-vs-compensation tradeoffs. Sadly, getting trapped in underperforming initial growth companies is something I never realized was a lifetime risk until way too late to save myself. Now I’ve got nothing to show of my life of work, while other people who just picked a better company to work at 20 years ago and never left have been growing their wealth by a couple million dollars per year every year for almost their entire career, all working as just some rando middle manager at multi-trillion-dollar companies.

Initial growth companies are often unprofitable and just huffing VC fumes to stay alive waiting until some combination of “hopes and dreams” activates then the company “takes off” and everything just works out.

While you’re working at an underfunded, under-compensated, low-growth company just to get by, other people you know will be working at multi-trillion dollar companies making 5x to 50x your compensation for doing the same work or less.

The primary purpose of initial growth companies is only to “make the founders rich” and practically nothing else matters.

There are sub-categories of initial growth companies though:

  • growth uptrend, which is potentially interesting if a company is actually working and will survive, because some initial growth companies do turn into real stable era companies (which is the gambler’s lie under-performing downtrend companies really hype to all employees: JOIN US WHILE WE ARE SMALL! WE WILL GROW FROM -$12 million income per year to $300 trillion income per year in six months!!! CATCH THE WAVE!).
  • growth downtrend, which is the worst for all involved because you will end up in a decaying cycle of low compensation combined with serial company layoffs because the company is just failing, but refuses to outright cease to exist (working at these is called “wasting your life”).
  • stable but zero-growth, which is interesting but not useful. There is no such thing as a zero-growth low-scale company. If there is no growth, you can’t hire or have compensation increases. Likely, if there is no growth, the entire corporate plan will be adjusted into either “coast forever” mode where all employees get reduced to a skeleton crew and 90% of the others are laid off so the founders can just collect passive income forever (or sell the company off for parts in a break-even VC acquihire maneuver). Meanwhile, other people you know working in real companies will continue having exponential compensation growth due to their free passive yearly stock allocation on liquid markets.

I guess one rule of thumb is it doesn’t make sense to work for companies who aren’t listed on the US stock market. The best time to join Apple was 25 years ago. The next best time is today? Who knows. What is it like to join a company where all the co-workers your same age have made $10+ million over the past 4 years while you are joining with nothing?

Stable Era Company Pay Scale

Stable era companies are long-lived organizations not subject to going bankrupt due to quarterly market trends or government economic policy changes.

Stable era companies also have sub-categories:

  • stable stable, which is consistently growing, consistently profitable, and paying employees $5k to $10k per day at current full comp market rates. These are largely flying under the news radar. These companies aren’t Google or Apple, but rather some tractor company or heavy manufacturing company just churning out results for years without destabilizing the world. Stable stable companies do that thing where every quarter they “beat expectations” on their stock reports by a coincidental $0.01 just to prove they are always growing.
  • stable unstable, stable-qua-ego, which is a combination of a popular company controlled by a popular “celebrity CEO” figure. These companies tend to be as manic/bipolar/depressed as their CEOs, floating between mega growth phases and mega collapse phases then back again in cycles of 6-18 months each. The growth/collapse phases usually don’t impact your $10k to $20k per day compensation unless you get caught in a “Year of Efficiency” as a euphemism for laying off 50,000 employees while the CEO continues to spend $100 million per year buying private islands.
  • stable neutronium, which is when a company controls one or more sectors of the global economy and just can’t be broken. They tend to have stable management, exponentially growing stock prices, and thus exponentially growing compensation for useful employees.

Under the modern tech landscape, stable “hyperscale ultra-growth” companies are paying experienced employees the equivalent of $10,000 to $50,000 per day if we include the value of their exponentially growing yearly stock grants. Meanwhile, back in the real world, other companies argue salaries shouldn’t increase for 10 years because “who needs money anyway” or “developers are too expensive” so you are stuck with 10 years of no practical salary growth and no viable passive stock grants (if you can get a job at all).

Big companies are a gift and a curse because with great size comes great capacity for outright economic capture and manipulation. At a fundamental level, everybody kinda knows big companies aren’t the best places for “good” things, but the default mindset of “big company bad” is so worn out we just keep ignoring that exploitative corporate profit capture has become the inevitable way of the world.

How to Get Hired

Tech jobs are paradoxical because everybody agrees on three things:

  • the tech job requirements are completely broken
  • the tech job interview process is completely broken
  • yet, every company follows the same hiring process and posts the same job requirements

I think the entire problem of modern tech hiring comes down to the midwit meme:

Let’s ignore the “IQ” axis and just consider “capability points” or some metric for scaling experience and ability.

The key to midwit meme humor is always that the “most advanced” people often use simple solutions indistinguishable from people who don’t know what they are doing. Average people are often in the “knows enough to be dangerous” category, over-thinking and over-working and over-processing everything out of a lack of the more complete experience needed to discover simpler and cleaner solutions. We find the midwit problem in job interviews all the time, where interviewers think they are “elite special evaluators” needing to gatekeep the unwashed hordes of desperate candidates, but interviewers often can’t reliably judge or measure people who have better answers than they expect.

Scaling Laws

According to all the interviews I’ve failed over the years (I don’t think I’ve ever passed an actual “coding interview” anywhere?), the entire goal of tech hiring is just finding people in the 100 to 115 midwit block then outright rejecting everybody else as too much of an unknown risk.

If you fail the weird mandatory performance-on-demand interviews full of random tasks unrelated to the actual job role, interviewers immediately assume you are a 0 to 55, but they can’t actually tell if you are in the 120 to 200 range instead — especially in the case of, well, what if the interviewer has less experience or less knowledge or less ability than the candidate, so the interviewer just can’t detect high performing people?

Personally, I’ve had interviews where the hiring manager seemingly doesn’t know how anything works but is also in charge of the product architecture? You ask why their platform has a dozen broken features when you tried to use it (and it overcharged you by thousands of dollars a month for services not even provided), but you just get blank stares back because the 24 year old “lead senior engineering manager product architect” doesn’t actually know how systems, platforms, architecture, networking, dns, ssh, monitoring, usability, observability, reliability, or capacity planning works? Then, of course, you get rejected under some false pretense of “not having enough experience” when you’re the one proposing fixes to their seemingly decaying platform.

Modern tech hiring, due to industry-wide persistent fear mongering about not hiring “secretly incompetent people,” has become a game divorced from meaningfully judging individual experience and impact. Most tech interviews are as relevant to job performance as if hiring a baker required interviewing them about how electron orbitals bind gluten together, then rejecting bakers who don’t immediately draw a valid orbital configuration.

I remember fully giving up on ever interviewing at Google after an interviewer just barked graph theory questions through a low quality speakerphone with their laptop right next to the microphone, so all you could hear was loud typing and an angry man complaining you aren’t solving their irrelevant questions fast enough. The entire industry just kinda accepts candidates should have negative and personally degrading interview experiences where candidates are undermined by some vague sense of social superiority from the interviewers. Sure, it would be great to have big tech $30,000 per day comp packages, but they long ago decided to prefer hiring the wrong people passing the right tests instead of actually evaluating people around experience and capability and ambition.

The Great Attractor

The weirdest part of watching the tech interview landscape change over the past 20 years is it keeps getting worse everywhere all in the same ways.

“big tech” created the concept of “the coding test” as the primary interview criteria. Why? Google was founded under the mindset of being a “Post-Grad-School Grad School” so they demand everybody pass a mini secondary GRE to walk amongst their austere big wrinkle brain ranks. Soon after Google became successful, the founders joked they wouldn’t be able to pass a Google interview test anymore. Yeah, real funny guys, making a system so selective for anti-ability that useful people can’t even get in the door anymore, great job everybody.

Now, every company from 3 person speculative startups to 50,000 person big tech firms all use the same hiring practices, which seems somewhere between somewhat inefficient and outright overbearing. In what world should I put in the same effort to join a zero-growth, unprofitable company going out of business in 3 years compared to joining a multi-trillion-dollar company paying 30x the compensation of a startup?

Much like how every company copied the google “big tech big interview over 6-12 weeks” cycle, companies have added an additional interview step copied from Amazon: the “behavioral interview” curse (also see: “the bar raiser curse” as well).

As far as I can tell, the “behavioral interview” is essentially the same as a Scientology intake session except, you know, for capitalism instead. You have to answer the same 8 questions at every interview around “so what would you do if you had a conflict at work?” where the interviewer treats you like a 5 year old learning about people for the first time instead of acknowledging you as a professional with 0.5, 1, 2, 3 decades of experience.

The current “behavioral interview” weirdness is somewhat of an offshoot from the original idea about “hiring for culture fit,” but traditional “culture fit” evaluation was just personal interviewer vibes around how candidates acted during an interview. We’ve all seen candidates who are uncooperative or excessively negative with no recourse or just weird in an anti-socially draining way, so clearly pass on actively dangerous people, but attempting to codify “is a person a good person” into call-and-response questioning is a fundamentally broken concept.

The actual goal of any “behavioral interview” or “culture fit” estimation is simple, but nobody ever lays it out. The goal of culture checking is only: determine how a candidate handles the tradeoff between progress and kindness.

A secondary goal of the “behavioral interview” is personality homogenization where companies want to enforce not hiring anybody “too different” from their current mean personality engram. Yet, the tech industry is historically full of weird divergent people doing great things for their own reasons (though, due to just basic population growth over the past 20 years, there are tens of millions more “normie computer people” now offsetting the much smaller pool of original “weird computer people”). When you start enforcing personality bubbles outside of what somebody can do, you’re just doing some sort of weird economic eugenics thing (make programming weird again!).

The trick with “behavioral interview” is there are no true good answers. They want to watch you squirm. The answers depend on:

  • in your hypothetical scenarios about “resolving disagreements,” what is the power dynamic?
    • Am I asking the CEO why we have 50 sales people but only have 3 developers and I’d like more developers?
    • Am I criticizing an intern for blowing up the site for the 3rd week in a row when they didn’t follow documented deployment guidelines like we’ve reprimanded them for twice before?
  • in your hypothetical scenarios about “tell me a time you made a mistake and apologized,” how is that relevant to anything? I’m not here to interview about my past failures, so we either make up fake “humble brag” failures or tell you about actual failures, which will give you more ammunition to think less of us and tank the entire interview process?
    • It’s the equivalent of bad bid-side negotiators starting with “tell me the lowest price you’ll accept” which isn’t how anything in the world works at all. Interviews are about showing your best side, not trying to micromanage a list of perceived historical faults to a point where you disqualify yourself via your own confessions.
  • At what point does asking candidates to expose their raw past failures and arguments and disagreements become just too much information? You don’t have a right to view the total perspective vortex of every decision in my life as employment criteria. If you want to guarantee team cohesion, build good teams, position managers so they have awareness of everybody’s strengths and weaknesses, and have your managers be experts in conflict resolution.
  • I think another goal of “behavioral interviews” is to show how much you enjoy compromise. There are compromise tradeoffs though. Finding middle ground between a broken solution and a working solution doesn’t leave you with a working product. Compromise can happen around opinions, but not facts. I always get the feeling interviewers want you to talk about some time you were right then gave up being right for a worse solution just so the other person/department feels better. Are we here to create good products or not?

sorry, I think I just failed your behavioral interview again. Good thing I can just interview at another company. Oh, wait, every company asks the same questions and demands the same answers from the same book on “how to do interview good” and if you deviate from the expected answers in the book, no income 4 u. good luck on your unemployment journey until the employment meta changes again.

Looping back to a point: is there some point at which every company continuing to add more interview steps unrelated to job tasks and experience and capability and ambition and insight will just collapse the entire industry? Sorry, you’re not qualified to be a professional software developer because you wore the wrong color shirt to the interview. You should know the color buleruplange is triggering to generation delta, so you clearly are not a culture fit for this job paying $120,000 per hour, and no, we will not be looking over your 25 years of professional experience at all.

Companies seem to forget they are also part of, you know, the economy and people need compensation to, you know, not die, right? If you aren’t acting as an economic engine for helping the most people thrive, what is your purpose as a company?

Everything Bagel and A Bag of Chips

You’ve seen them. I’ve seen them. I call them Everything Bagel job descriptions.

They go something like — As a Software Development Engineer (SDE) at Company, Inc, you will be required to:

  • truly love the SDLC and agile story t-shirt poker points
  • provide company-wide daily status updates on all your work from the previous day and describe what you plan to do in the next 6 hours
  • write our application (react, vue, typescript, node, python, rust, go)
  • maintain our existing applications (php, ruby, c)
  • optimize all new code and refactor all existing code for maximum performance
  • support customers directly who have application problems
  • support other employees who don’t know how computers work
  • deploy new infrastructure (aws, docker, containers, kubernetes, swarm, terraform, lambda, eventbridge, step functions)
  • monitor and optimize cost efficiency of all aws usage
  • maintain existing infrastructure (aws, docker, api gateway, dynamodb, mysql, elasticsearch)
  • create and maintain and monitor all CI/CD pipelines (github, aws)
  • guarantee infrastructure is always running (on-call 24/7 3 weeks per month)
  • guarantee infrastructure and application logic is always logged and monitored and alerted and observable
  • perform routine capacity planning
  • be responsible for application security
  • be responsible for security of all application dependencies (npm, pip, ubuntu, container images)
  • be responsible for security of all infrastructure (SOC-2 demands it)
  • conduct code reviews 3 to 10 times per day
  • mentor peers
  • manage yourself and manage your peers, but you also have an engineering manager and a project manager and the CEO is your skip-level manager and the CEO’s brother is also your skip-level manager too
  • code all the time and manage your own performance
  • continuously document everything so we can replace you with outsourced contractors at any time
  • monitor and maintain all 37 javascript SaaS plugins our website uses to track every user click and record mouse movement without the user’s knowing consent (plus the marketing and product teams enjoy adding 3 new javascript plugins to the website every month, so you must add them immediately when somebody requests it through a ticket without doing any technical evaluation on the 3rd party scripts or checking fitness for purpose or even if we could replicate the behavior in-house with less than 4 hours effort)
  • be grateful for this job and truly appreciate the opportunity to make $300/day in this role because we are all a family here at Company, Inc. (until the next round of snap overnight layoffs at least)

like, my dude, your single job requirements are actually 5 entire departments worth of work to be shared across a total of 20 people. yet, you see single-person job descriptions resembling this all over the place.

At some point, half the industry just gave up on the idea any technical person should specialize in anything. Just make it all up as you go along. It’s just typing, how hard can it be? You can’t just demand application developers also be part time amateur aws architects and expect good results. Experience in these roles is built over 5+ years at a time through focused work, but half the industry is now corrupted into “devops means DEVS DO OPS means OPS REQUIRES NO EXPERIENCE means NOTHING REQUIRES EXPERIENCE so FIRE OPS HIRE DEVS and just 15x ALL JOB RESPONSIBILITIES” (without any matching 15x increase in compensation, of course).

Such job descriptions also mean: your job is physically impossible. You will always feel drained and incompetent because you can’t actually do everything everyday. You will always be behind because each of those bullet points can be multiple days of work per week just on their own (plus, how are you supposed to be productive in 35 different areas requiring months to years of experience if you actually want to be good at each task?). So, from day 1, you will already be about 4 months behind on your expected job responsibilities and you’ll never catch up. It turns into an endless game of managers and executives saying you are “underperforming” because you have 18 primary tasks, each primary task requires 4 to 20 hours of effort, and every manager wants their task done within 4 hours. You are set up to fail. What’s the point?

Maybe a point is some companies just shouldn’t exist if they can’t afford the fully staffed professional teams required to build and maintain their products? The worst secret in tech is amateur developers are happy to act like entry level workers across 20 arbitrary roles for years (since they never have enough time to focus on building up long-term experience or best practices). You can’t get gud if you are always rushed from task to task without any chance of leveling up knowledge and capability through “deep work” as we would historically expect of professionals.

Field Report: Job Experience Notes

Here’s some things I’ve seen in “the real world” over a couple jobs. Some minor details have been altered to protect the guilty.

These are just to highlight how often companies have completely broken internal practices and don’t even know it. The solutions are fairly basic, but you can only see the solutions if you have actual experience knowing how everything works in the first place (and where does experience come from? 5, 10, 15+ years of actually doing focused work and actually building systems from the ground-up over and over and over again—it takes completely re-building something 3-5 times over anywhere from 3 months to 5 years before you actually start to be good at a task).

How do you document real life when real life is getting more like fiction each day?

  • Company said their “site was slow” and they didn’t know why. I looked at their AWS metrics and the single database server running the entire company was one mysql database with 2 cores and 16 GB RAM they created 5 years ago and never touched again. Their database had grown to 3 TB on the slowest disks AWS offered plus their working set size had grown to about 50 GB. The database was at 99.99% CPU usage, and had been at 99.99% CPU consistently for the past two years based on the built-in AWS metrics. They never looked at metrics or performance because “We are a cloud native company so we don’t do low value non-product-related tasks like system administration.”
    • Solution: I upgraded the database instances to 64 GB ARM servers with reserved IOPS and their batch data transformation tools went from taking 16 hours to 20 minutes. Also the DB replicas went from having 4-6 weeks of replication lag (again, completely unmonitored for years) to real-time sub-second updates across regions.
  • Company said “their site was slow” and they didn’t know why. Turns out they had two database clusters: one for production and one for research. The research cluster had 8 instances costing $5,000 per month total. The production cluster had 2 instances costing $500 per month total. The research cluster hadn’t been used in two years.
    • Solution: I swapped the sizes so the production cluster now owned all the resources and the unused dev/research cluster had almost no resources. I also switched all instances to ARM so they were cheaper too. The production site went from 7 seconds for an interactive user response to 200 ms for a user response due to the increased capacity. The non-technical company owners had just accepted their system was slow for the past couple years without ever looking into possible fixes because, once again, “the cloud means we never have to manage anything. only agile story point product features matter.”
  • Company had a couple custom data processing tools. Their internal data processing API tools were installed on the same 2 EC2 2-core instances as their customer-facing web servers. The data processing tools were CPU heavy and teams complained it took 20 hours to run a full processing batch.
    • Solution: I moved the data processing tools to dedicated containers for auto-scaling via ECS-on-EC2 instances (using the fastest EC2 instances available instead of the deprecated EC2 instances their web servers ran on) and it immediately auto-scaled up to 20 to 50 concurrent containers for processing live data instead of being stuck on the 2 previous outdated servers. The data processing times went from 20 hours to 30 minutes. They had been running the previous “only deploy internal APIs on our two 2-core web servers” pattern for 7 years.
  • More fun AWS architecture things at various companies like fixing subnet-vs-NAT-gateway deployments to save companies a thousand dollars a day just by altering network paths to avoid NAT gateway fees. Companies often don’t realize AWS isn’t hands off “zero-experience needed magic cloud;” AWS is actually “datacenter as a service” and if you don’t have experience managing datacenters and network architecture and disks and a dozen other system-level platform architectures then your AWS account is probably in a bad state.
  • Many common things across companies where they just did work 5 years ago and never updated their systems for modern practices. If you see people using requirements.txt in the wild instead of a poetry lock file or you see people using the decades-long broken python built-in logging framework instead of loguru you can see there’s opportunities for improvement everywhere (amusingly (or sadly) every seven-figure-comp-AI-developer project I see from “big tech labs” is using amateur python programming practices from 10+ years ago and they don’t even realize it or care to fix it. 🤷‍♂️).
  • Company said their tool for extracting production data from mysql into redshift went from taking 40 hours last year to taking 80 hours this year and it continues to slow down by the month.
    • Problem: I ran their extraction script and it just did nothing. I added more logging. It did nothing. I added even more logging. The script stalled on startup at trying to SELECT COUNT(*) FROM <table with 3 billion rows> — this step took 11 hours alone because the “Senior Lead Data Scientist” who wrote this system (with 6 months of experience in charge of the entire department) didn’t know how databases or AWS or networking work at all (and wasn’t given permission from management to actually learn everything or focus on fixing problems over time when making the data transfer utility). Also, after the 11 hour COUNT(*) completed, they used the count result to only determine the maximum value to run a custom loop of SELECT * FROM <table> LIMIT N OFFSET K where they would write every result to the network EBS storage as a CSV in N batches, then load the CSV and re-save it as .csv.gz to EBS, then re-read the .csv.gz from EBS to upload it to S3. You can’t do this. It makes no sense if you have any experience building complex systems inside AWS (EBS is toxic for reads and writes like that, never create a system doing thousands of micro-writes and micro-reads to/from EBS) or even with mysql from 30 years ago. Also, this pattern of “count all, walk entire table with limit/offset” had made its way into a dozen other internal utilities at the company (copy/paste development ftw) so we had to track down every place it was used to do custom refactors for proper database cursor and streaming write logic everywhere.
    • Solution: I introduced them to the concept of “database cursors” so you don’t run an 11 hour COUNT(*) up front. Then you don’t have to abuse LIMIT/OFFSET queries. I introduced them to streaming S3 writes with in-line in-memory compression to avoid writing anything to EBS at all. The extraction process went from 80 hours to 40 minutes. Yet, the company still said they “don’t need to hire people with experience because everything is cloud native so anybody can just figure things out.” — This company tolerated a year of 80 hour data imports because “cloud native” means “experience doesn’t matter” and they didn’t have anybody in the entire company to even detect the problem because “only product features matter, we don’t want to hire low value sysadmins.”
  • Company has $10 million in funding and an initial boost of customers trying their products, then the customers start to go away. The solution to customers leaving, as mandated by the fail-upward VCs controlling the company: “hire more sales people.” So, over two years, the company expands their sales and marketing team to 50 people while keeping developer count fixed at 5 people. Eventually, the sales people haven’t closed a sale in 10 months (because the product is so weak nobody wants to buy it), so the solution is to continue the VC demands of hiring 3 more sales people per week plus start laying off developers because the VC mandate is “only sales matter, the product doesn’t matter, sales people can sell anything.”
  • Various variations on the above theme of “sales above all else” where companies double down on an idea that worked 2-5 years prior but is now just a dead idea in a dead company, but management refuses to expand or “pivot” their product to meet the market. These companies always fail due to executives just doubling down on their personal “executive genius” while developers are screaming “you aren’t letting us build products people actually want…”
    • Historical note: I’ve only worked at startups or initial idea attempted “scale-up” companies, so my resume is now just a series of 5 completely dead companies nobody cares about, which looks great for applying to jobs. Dead companies doesn’t mean dead useless experience though, yah? Is it my fault I can’t get hired at good companies? I don’t know. It’s not my fault companies fail though. I try to help, but when VC brain disease runs all companies into the ground instead of prioritizing building good products on high performance platforms, there’s only so much you can do before leaving dying companies (then repeat at the next dying company forever?).
  • On the same theme of corporate priority: rudderless companies often end up in a situation where they are so desperate for sales, they try to incentivize the sales people with outsized commission packages valued at 200% to 500% of what developers in the company make (meet your sales goals for a seven figure bonus! sorry, developers don’t have a bonus program because the company only cares about sales, but how about another 0.0003% in dead equity after our 3rd down round?). At this point, there’s no value in remaining at a company if sales people are making more than people creating the product(s).
  • I’ve rejected candidates during interviews too, so I’m not running around advocating everybody should be hired blindly. I’ve seen multiple candidates who refuse to turn on their camera even though it was listed as a remote job with such expectations. You start to think HR just let too many scammers through. One said “I am not allowed to turn my camera on because I have a pact with my husband I am not allowed to be seen on the Internet” (this person also sent a 30 page resume which looked like “i do everything, so hire me” from a contractor remote job scam harvesting operation). Another no-cam person said they were “in transit” and refused to turn on a camera even though they were calling through the Zoom app. Sometimes you also have people who just apply too high and hope to get through. We had one person apply for a Senior Python Data Engineer role and they couldn’t name any version of python (they said “something like 2 or 3? i don’t know.”) — those details matter when huge language features and behaviors change once a year with every major point release! don’t be bringing your outdated Python 3.7 sand castle to my Python 3.12 cathedral.
  • Company founder/CTO who just gets bored on the weekends and starts personally modifying things, breaking live services, then demands developers immediately all go to the office Saturdays at 6pm to fix what he broke because “the company is down.”
  • Company with bespoke internal platforms for customers where each customer needs its own codebase. The company started by just copying the previous customer codebase into a new customer codebase the first time this happened. Then it happened again and again. Now, 7 years later, the company has 12 repositories all doing 95% the same thing with 5% custom code per customer type. Due to their process being “copy the most recent codebase into a new repo,” the 12 repos have 12 different package lock files and 36 different container deployment strategies ((dev, prod, testing) * 12) and use 5 different versions of Python and each codebase has fragments of bugs sometimes fixed and sometimes not across all the repos.
    • Solution: enough is enough, I gave everybody two days off and just ran through 20 hours of aggressively merging and validating everything into one repository with one package lock file under the most modern python version with 3 clean container deployment strategies covering all customers at once (plus getting type safety checks added everywhere, making common shared modules instead of copy/paste everywhere, centralizing metrics and reporting instead of ad-hoc hand-written output everywhere).
  • Company in California with a motto of “never hire Americans because 16 year old outsourced Croatian interns know everything already” asked me to review their architecture. It wasn’t great. Their entire platform was built on 7 year old ubuntu EC2 instances that hadn’t been rebooted in 5 years. They technically had logging, but they had no alerting and no metrics. They basically only looked into product problems if customers complained (but customer complaints had to go through the “account manager” to the “product team” to the “product manager” to the “project manager” before developers would be notified of problems).
    • Solution: I added a combination of EventBridge hooks triggering Lambda functions to post live AWS service warnings directly into a chat channel. I added a log parser so they would also get real-time application notices/warnings via chat when customers were seeing broken features. I added metrics to a dozen of their internal microservices and set up an “infinite scale” Prometheus/VictoriaMetrics cluster to collect, alert on, and graph trends over time. The company thought they had 10,000 users per day based on what their Croatian interns were telling the CEO from a dozen front-end-only javascript/spyware analytics plugins, but my internal metrics showed only 300 users per day actually used the backend APIs. Great job, everybody! The alerts also showed the company when the CEO’s brother would do things like delete all user data by mistake at 7am (remember to always give your family members full root admin access to all services so they can use generic desktop database GUIs to edit user data directly; who cares if they delete users or corrupt data all the time?). The alerts also caught fun things like the “Data Science Team” writing SQL queries over a small 64k-row table that mistakenly generated 12 billion rows of results due to 5-level-nested JSON cross product column extraction queries (show me a love story more true than data science people adding nested JSON columns to relational databases).
  • The saddest part is when companies have a good chance at surviving a mild downtrend, but remain so committed to the past, or so mired in executive dysfunction, that nobody can steer towards a more reality-based, future-oriented product direction. I’ve seen companies making $5 million a month in revenue just evaporate over 3 years because their original idea fell out of popularity and the executives refused to adapt to better ideas. I’ve seen companies go from a billion in funding to being sold for parts because the CEO was too busy buying horses to keep his wife happy instead of actually running the company. I’ve seen companies just give up and get acquired by some big tech firm because the CEO was offered a personal $100 million gift for agreeing to the acquisition, after which 90% of the company was let go. Feels great to work forever on the zero-reward side of a winner-take-most-but-the-winner-ain’t-you professional economy.
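For the repo-consolidation story above, the first useful step before any merging is simply measuring the divergence. Here is a minimal sketch (repo names and layout are hypothetical, not the actual company's): hash every file by relative path across all the customer repos, so you know which files merge trivially and which need real reconciliation.

```python
# Hypothetical pre-merge audit: for each relative path, collect the set of
# content hashes seen across all repos. One hash => byte-identical everywhere
# (merge once); multiple hashes => diverged copies (reconcile by hand).
import hashlib
from collections import defaultdict
from pathlib import Path

def audit(repo_dirs):
    """Classify files across repos as identical-everywhere or diverged."""
    seen = defaultdict(set)  # relative path -> set of sha256 digests
    for repo in map(Path, repo_dirs):
        for f in repo.rglob("*"):
            if f.is_file():
                seen[f.relative_to(repo)].add(
                    hashlib.sha256(f.read_bytes()).hexdigest())
    identical = sorted(str(p) for p, hashes in seen.items() if len(hashes) == 1)
    diverged = sorted(str(p) for p, hashes in seen.items() if len(hashes) > 1)
    return identical, diverged
```

Knowing the identical/diverged split up front is what makes a 20-hour merge tractable instead of a guessing game.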
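And for the alerting story: the EventBridge-to-chat hook really is small enough to sketch in full. This is an illustration, not the production version; the webhook URL and function names are made up, and the field names assume the standard AWS Health event shape that EventBridge delivers to a Lambda handler.

```python
# Minimal sketch: EventBridge invokes this Lambda with the raw event dict;
# we flatten it to one readable chat line and POST it to the team channel.
import json
import urllib.request

WEBHOOK_URL = "https://chat.example.com/hooks/ops-alerts"  # hypothetical endpoint

def format_alert(event):
    """Turn an EventBridge AWS Health event into a single readable line."""
    detail = event.get("detail", {})
    descriptions = detail.get("eventDescription", [{}])
    return (f"[{event.get('source', 'aws')}] "
            f"{detail.get('eventTypeCode', 'unknown-event')}: "
            f"{descriptions[0].get('latestDescription', '')}")

def handler(event, context=None):
    msg = format_alert(event)
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": msg}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add timeouts/retries in real use
    return msg
```

A couple hundred lines of glue like this is the difference between hearing about outages from customers and hearing about them from your own platform.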

Perhaps the saddest thing is it looks like I am a full-time architecture-only, no-product-value person, but it’s only because everybody else in these companies has no architecture experience and doesn’t care to learn. Sure, I have 20 years of experience in datacenter build-outs and system architecture and high performance platforms, so when all these modern “cloud native” companies insist on only hiring “app dev javascript” employees without any real platform building experience, we see why all these modern “cloud native” orgs end up with completely broken infrastructure (at least until somebody who has actually accomplished something shows up to fix everything for them; most of these places are stuck in a loop of “we don’t know what we don’t know,” so they are unaware of even the right questions to ask or the right systems to build in the first place).

I don’t know about you, but I care about company structure, product issues, platform scalability and reliability, customer usability, developer usability, performance, legal compliance, security, corporate scalability, and team cohesion all at the same time. These are issues which don’t appear on your story point burndown boards because they are cross-functional improvements across multiple departments. Somehow constantly improving and fixing and making everything better in ways nobody else can seem to do isn’t enough to get through a company’s hiring cycle these days though.

Also amusing: I think people will have different reactions to the block of anecdotes above. People will either say “heck yeah, fix all the things!” or side with “you are clearly incompetent because you didn’t use ‘We’ statements anywhere and only talked about how you did all the work yourself.” Well, for one, these are just highlights. And, for two, there’s only so much “We” you can bring into every task when your team is 4 people with 16 months of tickets backlogged and management mandating daily story point accounting updates. When you can do individual optimization tasks 3x to 5x faster than sharing the work (no need for synchronizing meetings and knowledge and project setup duplicated across 5 people), sometimes just doing the work and moving on is the best use of resources. We don’t have time to stop the entire product roadmap for “everybody on technical staff fixes everything” because the company refuses to build out actual infrastructure and platform teams (thanks, devops mind virus), so we end up with “single capable person doing the work of what should be 20 people across 5 departments, alone” patterns. Sometimes good is just good and you get the work done by doing the f’ing work by any means necessary.

to apathy, to entropy, to empathy

TRAUMA is when an experience is SO CONTRADICTING to your sense of CONTINUITY that your internal narrative is effectively DESTROYED

I don’t think it’s controversial to say the employment meta of the tech industry has changed over the past 10, 15, 20 years.

What used to be as simple as “i good wit commputr. u giv jorb?” is now a synthetic, convoluted, social-status-driven hierarchy of mind games just to get an initial interview, and then you are treated as a blank slate having to prove from first principles that you can even read and write and speak.

Most interview processes don’t even consider a person’s actual work and experience and capability. You must always open your brain live in front of people to dump out immediate answers to a series of pointless problems, because if you can’t solve a pointless problem with no preparation, you clearly can’t do anything of value for the rest of your life.

Most of the interview tricks in use now are being repeated by people who learned “how to do interview” from books or 15-year-old online interview guides. Modern interviewers don’t always seem to have the full picture of what they are even trying to accomplish with the interview. I was there when people made all these things up the first time in 2006-2009. You are trying to use trickery on me which we made up to stop people less capable than us, and now we are trapped having to constantly prove we aren’t scams when we clearly aren’t, but nobody trusts provable history anymore. Do not cite the Deep Magic to me, Witch — I was there when it was written.

Is my distributed hierarchical topological sort automatic dependency resolver data processing system not relevant to experience?
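To be clear about what that mouthful means, here is the toy core of it (using Python’s stdlib graphlib, nothing distributed or hierarchical about this sketch): hand it a dependency graph, get back an execution order where nothing runs before its prerequisites.

```python
# Dependency resolution via topological sort, stripped to a toy.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (hypothetical pipeline stages)
deps = {
    "deploy": {"build", "migrate"},
    "build": {"compile"},
    "migrate": {"compile"},
    "compile": set(),
}

# static_order() yields tasks so every dependency precedes its dependents
order = list(TopologicalSorter(deps).static_order())
```

The real version of this idea just layers partitioning and parallel dispatch on top of the same ordering guarantee.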

Are my personal improvements to CRC processing not relevant to experience? That work got picked up by global file formats, and people have also used my code to help validate information from the space station. Sorry, it doesn’t count towards any interview capability score though.
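For the curious, the property that makes CRCs useful for that kind of validation is easy to show with stdlib zlib (a generic sketch, not the improvements themselves): a CRC fed chunk by chunk equals the CRC of the whole message, so data arriving in pieces can be verified without buffering it all first.

```python
# Streaming CRC-32: fold each chunk into a running value.
import zlib

def crc32_stream(chunks, crc=0):
    """Compute the CRC-32 of a sequence of byte chunks incrementally."""
    for chunk in chunks:
        crc = zlib.crc32(chunk, crc)  # second arg is the running CRC value
    return crc & 0xFFFFFFFF

data = b"telemetry frame: validate me across chunk boundaries"
whole = zlib.crc32(data) & 0xFFFFFFFF
split = crc32_stream([data[:7], data[7:30], data[30:]])
# split == whole regardless of where the chunk boundaries fall
```

Performance work on CRCs is mostly about doing that inner fold over many bytes at a time instead of one.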

What about a CLI for stock market trading both manually and with automation I’ve been noodling on for a couple years and keep improving by the month? Sorry, not relevant to showing your capability, you must reproduce the answer to our toy problem about moving chess pieces through a grid world in 20 minutes or else we think you are just incompetent.

(rebuttal people will make: but those are just tasks and those do not illustrate your love for the entire SDLC? did you even assign story poker t-shirt burndown points to tasks? do you even create issues before every change so every commit has an issue number then every commit becomes a PR then every PR goes through CI then is merged after a code review? you can’t just “do things” without 18 management steps between every task!)

Somehow the entire technical industry continues to coalesce around social/political status and power instead of actual experience and ability. You’ve created software used by millions of people? Your software has been resold by every “cloud” for the past 10 years? Sorry, we can’t hire you because you didn’t espouse sufficient emotional loyalty to our One True Founder Kier Eagan. Good luck with your sad little life creating things people use, because we only pay cult loyalists here and you don’t socially qualify. The goal of hiring should be to create more of an us atmosphere and stop excluding people by dreaming up unnecessary technical trivialities — be an us for once instead of a them!

How do we try to exist when we need income, but almost all jobs have become impossible fantasy roles overloaded with divergent tasks, trying to cram 5 departments’ worth of concurrent work into single people? Is it time to rip up our tech industry membership cards and just leave all the roles to overworked amateurs who only know the broken world, since they’ve never seen a functioning company with professionals in single-purpose professional roles? I don’t know anymore.

Conclusion

Is the rambling complete? Has the stream of consciousness flow terminated?

Of course none of these problems are unique to me. As we started out in this article: the industry broke all the jobs because too much tech speculation was tied to speculative funding, so the funding went away, then the customers went away, and now the jobs are gone. We have hundreds of thousands of developers not getting their “tech industry” level salaries anymore, so who am I to whine?

Well, I’m me and this is my site, so neener neener you can’t stop me. It’s a problem for all of us and we speak for all the worms.

The tech roles at most companies are completely busted, it seems. I didn’t sign up to be a “software servant” to non-technical product teams who just define tasks and priorities for actually capable people to implement every day. Somewhere along the way the entire industry lost its heart, and now most companies are more interested in “playing company” than in actually carving out creative contraptions. There is a thing called “Engineering-led management” where, imagine this, the people doing the work are also the ones defining the product and talking to users and defining requirements while implementing features themselves. You can’t create good products if less capable “product idea people” control implementation capability. Office Space wasn’t supposed to be our destiny.

I’m probably actually fairly bad at being a single-purpose “SWE SDE SRE” or whatever weird made-up excessively narrow titles people have contrived now. It’s about building products and growing companies, which is difficult when you’re given the mushroom treatment at 90% of companies, unable to effect actual organizational and product change (because “know your place, low value menial developer, you just do what the superior project management and ceo vc family executives tell you to do; you can’t actually make decisions or improve usability or have ideas to expand products and fix company problems yourself. you aren’t the right shaped person to make decisions here!”). where is the developer azor ahai to right the world and tip the balance of power back to a majority of productive developers making decisions?

At some point, a switch flipped in the tech job market and “programmer jobs” just turned into zero-agency, task-by-task roles working on other people’s ideas under other people’s priorities to accomplish other people’s goals. Tech employment originally felt more focused on being handed problems from a “whole business view”: you did everything to generate products and solutions using your best judgment towards a deadline goal, without being micromanaged every step of the way (because nobody else knew what you were doing, or how to do it, to even attempt to micromanage you). When did developers stop being part of the actual product creation process and instead become “project management task workers?” The nexus of tech industry employee agency destruction seems centered on post-2010 tech “elite value capture,” where the weirdos who would normally aspire to work on wall street for “quick riches” instead decided they could get rich by being “thought leaders” exploiting technology. It’s difficult to believe non-technical “idea people” have any relevant capacity to control technical project direction over a long-term sustainability horizon. You create good products by creating and using products at an implementation level, because products you use and products you personally write are the products you have the most insight into improving the best and the fastest.

How do we deal with professional income inequality where the same role and effort pays $400/day at one company but $20,000/day at another company? At what point is it worth not even working if your compensation doesn’t work out to being at least $10,000 per day anymore? Do we just sit here and die in our overpriced studio apartments where rent increases 7% every year while other ICs doing the same work at better companies are buying 5 vacation houses from doing the same work?

The worst feeling is comparison. Comparison is the death of happiness, as they say. I look at my own place in the world compared to people who just started at Apple or Microsoft 20 years ago and never left, and now they have made eight figures just over the past 4 years while my life path has led me to… practically nothing. Then the tech inequality continues to compound. Imagine joining a company where the teenage interns have already made a couple million off their passive stock grants and other employees have been making $2MM to $6MM per year over the past 5 years there, while you’re starting over with nothing again at your 5th company in a row, so what’s the point in even trying? Though, did you know paying rent on a credit card still qualifies for points? Made $60 this month paying rent with credit card point rebates. whoops.

What’s next? Who knows. Maybe aliens will blow up the sun to strip mine all our solar lithium for their alien hive mind batteries or something.

What’s Worth Working On

Out of all the jobs available, it’s surprising how many of them aren’t the best thing to be doing. As we covered above, you usually want to work at a specific type of company (if you can) to get those $10k to $50k per day rates instead of being stuck at unprofitable companies only paying $400/day salary with no usable equity.

What’s actually worth working on though? Let’s do a rundown.

I have an interesting experiment in using non-tokenized inputs to transformer architectures, which could be expanded into essentially creating conscious AI robots using arbitrary sensor data from cameras and microphones and encoders, but it would take a couple million united states freedom bucks to build working prototypes.

It’s sad seeing so many broken products in the world while companies refuse to hire you. Here’s a sample of broken things one of those eight-figure-comp employees (or ten-figure-comp executives) should have fixed already, just from looking at my immediate surroundings (which, admittedly, makes this list fairly one-sided):

  • iTunes (Apple Music macOS App?) still has a broken UI. Current bugs include: when you “favorite” a song, it often just stops playing all music. When you tap the “back” button in album navigation, it doesn’t do anything the first click, but the second click sends you back two pages. You can’t go back one page at a time, you can only click once for no action then click again to go back two pages. Do the people creating these things not use them?
  • Still Apple Music App related: the whole information management design architecture around library organization is stuck in 2001. There are so many more useful and creative and impactful ways we could empower users to discover, remix, share, and experience their music libraries besides “one spreadsheet view with tiny icons in rows.” Did Apple run out of ideas and ambition and a drive for creating a better future?
  • Apple’s entire Feedback system didn’t get any better after they renamed it Feedback. Apple (market cap: $3.5 trillion currently) treats customers like unpaid interns expecting customers to just give them free troubleshooting work with no cooperative internal feedback on reported issues for months or years. Who is managing this? Why has the reporting process been bad for decades? We can fix these things.
  • Apple’s macOS Control Center app widget constantly uses 1% to 2% CPU (across millions of machines) because it redraws itself 1,000 times per second even though it is always off-screen and completely hidden from the user. This has been known and reported for about two years now, but nobody cares to fix it. Is this one of those things where the developers care, but project management doesn’t understand the problem, so it can never be “prioritized” as a fix?
  • Why did Apple go with such a weak Vision Pro launch of release-then-abandon? They could have made a dozen more first-party world-changing productivity experiences instead of expecting independent app developers to just show up and spend millions of dollars developing for a maximum-risk, minimum-reward platform. Really curious how we ended up with “just give them a computer hat with no mind-blowing world-changing app advances” instead of the magic we usually expect from Apple launches. Though, the last iPhone update’s main selling point was basically “we converted a toggle switch to a button,” so who even knows anymore? Here are some free ideas:
    • Really show off the dynamic ability for space occupation for massive data exploration.
      • provide a default professional-level stock market application where people can create a 30 window view of 30 live symbols for real time decisions all positioned in living space
      • provide a default data exploration application to show off displaying 10 to 50 metrics series charts at once (server metrics, business metrics, anything with a chart; think “Keynote for Time-Series Data”)
      • spatial AI LLM visualizations capable of showing diverging language output trees for alternate completion paths all around you
      • spatial geometry understanding environments for beginner, intermediate, advanced stages — so much of math is just figuring out what it looks like in your own head (remember the joke about Windows ships with Minesweeper while Macs ship with Chess? or Macs have a Graphing Calculator by default? Continue bringing advanced educational tools back into the mac ecosystem by default).
      • spatial explorations of high-dimensional data with zooming in/out/up/down multiple dimensions of spaces to explore geometric surroundings for more data interpretation capabilities
    • Wii-Fit-like environments for active mobility engagement
    • spatial theremin
    • spatial + AI cooking with ingredient identification and in-view instructions
    • headshare: just outright swap cameras with somebody else (or maybe as a spatial window showing other people’s live camera feed(s))
    • remember the face swap app? do live head swaps. fun for the whole family.
    • live streamed spatial concert experiences
    • live spatial animal species detector
    • spatial physics / cosmology simulations
    • spatial where’s waldo object detector (“where are my keys?” then using AI/machine vision it locates key-shaped objects for you)
    • spatial knowledge graph navigation (think: view wikipedia, but every link is exploded into a contextual spatial window for easy jumping between)
    • spatial Apple Intelligence®™¢£ assistant in 3d space where the agent model also draws the assistant’s puppet/creature visual state so it is, effectively, alive
    • and so much more!
  • It’s interesting how Mac Safari uses touch id to “protect” private browsing mode windows, but there’s still half a dozen bugs encountered through basic Safari usage. If I can hit so many bugs constantly, how is an entire company of people using the browser not finding and fixing these bugs every single day?
  • Apple’s iOS performance degradation. Typing on my iPhone 15 Pro Max is slower than typing on my iPhone 13 Pro Max. Why? I don’t know. The keyboard misses taps all the time, while my 3-year-older previous phone feels about twice as responsive when typing. What did they break and why don’t they care?
  • Why does GitHub continue to make some features worse and worse? They changed the home feed from useful details to less useful details, and now instead of instantly loading, it shows you a 3-9 second loading spinner saying “One moment please…” — multi-second loading spinners on every page refresh are a really damaged user experience.
  • Another GitHub trick: look at a repo page logged in vs. logged out. They are using different page templates for logged in vs logged out users and it just looks like a mistake. GitHub has been fairly silent when it comes to addressing any user experience regressions over the past couple years.
  • Obviously there are dozens of high-impact, high-reward, low-timeline projects to be created or improved in modern AI Lab companies, but “AI Companies” seem to only want to hire 28 year olds with a PhD from Stanford who have worked at Google for 5 years. No real way to get a foot in the door unless you’re related to somebody there already from what I can tell. Sure, they say “you don’t have enough experience,” but what does “experience” mean when half the work they do is making completely new things nobody has ever made before anyway?
  • There’s tons of work to be explored in practical consumer robotics applications using cost-optimized hardware strategies. I’m pretty sure my plans for a sub-$3,000 human-sized-hexapod would create a few hundred billion dollars in value if we could build it out (add +$100 trillion in value if we attach the robot AI brain to it too).
  • Various social aspects but less product oriented. How do we stop the world from collapsing? How do we fix an economy where 400 million people who aren’t already in California all want to move to California and become buzzword-driven middle managers at Google? Gotta OKR that KPI so your LTV for the TAM doesn’t go into the red.
  • What is the balance between companies spending half a trillion dollars on stock buybacks when they also aren’t consistently hiring useful people into positions of authority to improve the world?
  • People talk about American politicians never retiring, but what about tech executives too? We’re ending up with a lot of executives with 20 to 40 years at these trillion dollar companies and they aren’t moving out to give more people opportunities for advancement.

just remember, the opposite of war isn’t peace: it’s creation.

our goal is to make something out of nothing. we need to express, to communicate, to go against the grain — here’s to the crazy ones.

create good things. create good systems. grow products. grow economies. become like the stars.

raggedy man, goodnight,

-Matt☁mattsta — 💰 fund fun funds

Bonus Shots


From the book, "Calvin and Hobbes – Sunday Pages 1985 – 1995"


Introduction

By Bill Watterson (C)2001

It's been five years since the end of Calvin and Hobbes, the longest time I can remember in which I haven't drawn cartoons. Calvin and Hobbes was a wonderful experience, but it was an all-consuming career. When I quit the strip, I put my cartoons in boxes, and jumped into other interests. I haven't really considered the strip since, so at the invitation to do this show, I thought it might be time to look back at some of my work.

My first reaction in going through my old cartoons was some amazement at the size and weight of the pile. For most successful comic strips, ten years is just a drop in the bucket, but even that amount of time yields a huge amount of material. It's no wonder that decade seems like a blur.

Going through my old strips is sort of like looking at old photographs of myself: they're personal and familiar, yet somewhat bizarre at the same time. There are cartoons I've drawn that are the equivalent of pictures of my younger self wearing yellow pants: I know I'm responsible for that, but what on earth was I thinking? As my tastes have changed, and as I've learned more, I imagine that I would do many strips quite differently today. Not better necessarily, but certainly differently. I was twenty-eight when Calvin and Hobbes was first published, and, of course, I would make other choices now at age forty-three.

It's also sort of strange to see a record of my own learning curve. Pick up a given strip, and I see how I struggled with various writing and drawing problems, or how I finally surmounted one. I remember sometimes feeling that the strip was better written than I could actually write, and better drawn than I could actually draw. I learned a great deal over the years by trying to push the strip beyond my own abilities, and I'm very proud that Calvin and Hobbes explored and developed all the way to the end. By the final years, I see naturalness or a sense of inevitability to the drawing and writing that is very satisfying.

I'm more appreciative of this kind of grace since returning to the awkward stages of new learning curves. Of course, I'd also say the times have caught up with some of my strips. It's frankly a little discouraging to see how ordinary some of them look now. When Calvin and Hobbes first appeared, it was somewhat surprising to treat reality as subjective, and to draw a strip with multiple viewpoints, juxtaposing Calvin's vision with what others saw. I did this simply as a way to put the reader in Calvin's head and to reveal his imaginative personality. Now these juxtapositions are a visual game for many comic strips, and after all these years, I suspect readers know where this sort of joke is headed as soon as they see it. The novelty cannot be recaptured.

Novelty, however, is probably overrated anyway. The Calvin and Hobbes strips that hold up best, to my eye anyway, are the ones where the characters seem big, vivid, and full of life, and where the strip's world seems genuine and inviting. Punchlines come and go, but something in the friendship between Calvin and Hobbes seems to hold a small piece of truth. Expressing something real and honest is, for me, the joy and the importance of cartooning.

The Sunday strips were usually the cartoons I had the most fun with, and for this show I've chosen a few Sunday strips from each year that I think show off the strip's strengths.

I have fond memories of reading the Sunday comics when I was a kid. As far as I was concerned, the Sunday comics were the whole reason for newspapers to exist. On weekdays, I read only the strips I liked; but on Sundays, I read them all, and often several times. The Sunday comics were always the most fun to look at, so when I finally got the chance to draw my own comic strip, I knew I wanted to make the Sunday Calvin and Hobbes something special. It took me a little while to learn to use the larger Sunday space effectively. It requires a somewhat different pace for the humor, and, of course, a big color panel is no place to find out that you don't know how to draw the back of your character's head. The Sunday strip shows off both strengths and weaknesses.

Occasionally I would see that an idea I'd written for a Sunday strip was not as substantial as I'd hoped it would be, and I'd realize that some of the panels and dialogue weren't adding anything significant to the story. If that were the case, I'd remove everything extraneous and use the trimmed idea for a daily strip instead. I held the Sundays to a different standard: any idea for the Sunday strip had to need the extra space. I felt a Sunday strip should do something that was impossible the rest of the week.

Over the years, I learned that daily strips are better suited for certain kinds of ideas, while Sunday strips are better for others. The daily strip is quick and to the point, perfect for a simple observation, or a short exchange between characters. Daily strips are also better for long stories, where a certain suspense can be fostered by continuing the story day after day, and the reader can remember what happened previously.

Extended conversations with real back and forth dialogue, however, don't work very well in four tiny panels - the dialogue balloons crowd out the drawings and the strip loses its grace. In a Sunday strip, you can spread out, and let the characters yap a bit. This is often funny in itself, and it's a wonderful way to let the characters' personalities emerge. It also lets you explore a topic a bit more fully.

You can talk about things without reducing them to one-liners right away. And, of course, in today’s minuscule comics, if an idea requires any real drawing, the Sunday strip is the only possible place for it. Likewise, any complex storytelling problem (a strip illustrating a long expanse of time, for example, or an event depicted in a succession of very tiny moments) is futile in the daily format. Calvin’s fantasies generally migrated to the Sunday page for this reason.

In short, the Sunday page offered unique opportunities, and I deliberately tried to come up with ideas that could take advantage of them.

I usually wrote the Sunday strips separately from the dailies. For the daily strips, I tried to write an entire month's worth of ideas before inking any of them. This allowed a long period for editing and rewriting. I was less able to do this for the Sunday strips because the Sundays need to be drawn weeks further in advance and because the strips took so much longer to draw. If at all possible, however, I would try to keep two or three Sunday ideas ahead of the deadlines. I always wanted to reserve the option of abandoning an idea that didn't stand up to a few weeks of scrutiny.

For those who are interested in technical matters, the early strips were drawn on any cheap pad of Bristol board the local art supply store happened to stock. The paper was usually rather thin and sometimes the sheet wouldn't accept the ink consistently (bad sizing or something), which would make drawing aggravating and time consuming. Eventually I switched to heavier Strathmore Bristol board, which was much nicer. I used a 2H pencil to rough in the drawing, and then inked with a small sable brush and India ink. I did as little pencil work as possible in order to keep the inking more spontaneous, although the more elaborate panels required more preliminary drawing. For lettering, I used a Rapidograph cartridge pen. I drew the dialogue balloons and a few odds and ends with a crow quill pen. To cover up unwanted marks, I used various brands of Wite-Out, and in the early days, typewriter correction fluid. (Remember typewriters?) No doubt this stuff will eat through the paper or turn green in a few years, but as the original cartoons were intended for reproduction, not picture frames and gallery walls, I did not overly concern myself with archival issues or, for that matter, neatness. At some point along the way, however, I did ask the syndicate to send the printers a quality reproduction of the Sunday cartoon, rather than the original drawing, in order to reduce the amount of tape, registration marks, and general crunchings and manglings to which the drawings had previously been subjected.

Coloring the strips was a slow and tedious process. My syndicate gave me a printed sheet showing numbered squares of color, each a mixture of various percentages of red, yellow, and blue. Using this sheet as a guide, I taped some tracing paper over the finished cartoon, and painted watercolor approximations of the available colors in the areas I wanted. This would give me a very rough idea of what the newspaper version might look like. Then I numbered each little spot of color. As the Sunday strips became more visually complex, and as I started to use color more deliberately for effects, this process became a real chore. These days, I believe much of it can be done with a few clicks of a mouse.

Colors take on different characteristics when placed next to other colors (a neutral-seeming gray might look greenish and dark next to one color, but brownish and pale in relation to another). Because of this, I came up with one little trick for coloring the strip. I cut out each of the color squares provided by the printer, so I had a stack of colors (like paint chips), rather than a sheet. By laying out the cut squares and physically placing one color next to the others I expected to use, I could see exactly how each color behaved in that particular context. As I got better at this, I was able to choose appropriate "palettes" for each strip, and create moods with color. One strip might call for contrasting, bright colors; another strip might be done with a limited group of soft, warm colors; another idea might call for a close range of grays and darks, and so on. If I made Calvin's skin a dull pink-gray to suggest dim lighting at night, I would have to find a dull yellow-gray that would suggest his hair in the same light. These challenges took an inordinate amount of time for work on deadline, but I was often quite proud of the results. A comic strip should always be fun to look at, and good use of color can contribute to that appeal. More than that, color creates its own emotional impact, which can make the drawing more expressive.

The half-page Sunday format required certain guaranteed panel divisions. The strip had to be drawn in three rows of equal height, and there was one unmovable panel division within each row. This allowed editors to reduce and reconfigure the strip to suit their particular space needs. The same strip could run in several shapes by restacking the panels.

Editors commonly removed the entire top row altogether, so in essence, a third of the strip had to be wasted on "throwaway panels" that many readers would never see. The fixed panel divisions were also annoying because they limited my ability to compose the strip to best suit the idea. For example, they often forced a small panel where I needed more space for words.

Of course, a big part of cartooning is learning to work effectively within tight space constraints. Much of cartooning's power comes from its ability to do more with less: when the drawings and ideas are distilled to their essences, the result can be more beautiful and powerful for having eliminated the clutter. That said, there is a point at which simplification thwarts good storytelling. You can't condense Moby Dick into a paragraph and get the same effect. Over the years, my frustration increased and I became convinced that I could draw a better comic strip than the current newspaper format was permitting. Looking at examples of comics from the 1930s, when a Sunday strip could fill an entire page, I was amazed by the long-forgotten possibilities out there.

I took a sabbatical after resolving a long and emotionally draining fight to prevent Calvin and Hobbes from being merchandised. Looking for a way to rekindle my enthusiasm for the duration of a new contract term, I proposed a redesigned Sunday format that would permit more panel flexibility. To my surprise and delight, Universal responded with an offer to market the strip as an unbreakable half page (more space than I'd dared to ask for), despite the expected resistance of editors.

To this day, my syndicate assures me that some editors liked the new format, appreciated the difference, and were happy to run the larger strip, but I think it's fair to say that this was not the most common reaction. The syndicate had warned me to prepare for numerous cancellations of the Sunday feature, but after a few weeks of dealing with howling, purple-faced editors, the syndicate suggested that papers could reduce the strip to the size tabloid newspapers used for their smaller sheets of paper. Another strip could then run vertically down the side. Consequently, while some papers, primarily in larger markets, ran the strip as a half page, other papers reduced it. In some of the latter papers (including the one I read at the time), I actually lost ground: the new Sunday strip was printed even smaller than before. I was in no mood to take on new fights, so I focused on the bright side: I had complete freedom of design and there were virtually no cancellations.

For all the yelling and screaming by outraged editors, I remain convinced that the larger Sunday strip gave newspapers a better product and made the comics section more fun for readers. Comics are a visual medium. A strip with a lot of drawing can be exciting and add some variety. Proud as I am that I was able to draw a larger strip, I don't expect to see it happen again any time soon. In the newspaper business, space is money, and I suspect most editors would still say that the difference is not worth the cost. Sadly, the situation is a vicious circle: because there's no room for better artwork, the comics are simply drawn; because they're simply drawn, why should they have more room?

Business controversies aside, the new format opened up new ways to tell stories, and I drew different kinds of strips as a result. I could write and draw the strip exactly as I imagined it, so it truly challenged my abilities. Whereas Sunday strips had previously taken me a full day to draw and color, a complex strip would now take me well into a second day to finish. Deadlines discourage this kind of indulgence, and I had to steal that extra time from what would have been some semblance of an ordinary life, but I was thrilled to expand the strip's world.

Laying out the panels became a job in itself, now that I was no longer confined to horizontal rows. I could place boxes anywhere and at any size, but the reader's eye needs to flow naturally to the proper panels without confusion, and big panels need to be designed in such a way that they don't divert attention and spoil surprises. The graphic needs of each panel must be accommodated and the panels themselves should form a pleasing arrangement so the entire page is attractive, balanced, and unified as well. Here again I looked for guidance in the gorgeous Sunday pages of George Herriman's Krazy Kat.

The new Sunday format necessitated a change in the format of my book collections as well. Having won a bigger strip in newspapers, I wanted the book reproductions to reflect the strip's new impact as much as possible by printing the Sunday strips large. This resulted in the rather awkward horizontal format of my later books. They stick out of bookshelves, but the strips look nice. From this point on, the Sunday strips were reproduced in color with each collection, not just in the "treasury" collections, as before. (Here's a piece of trivia: because of the timing of the book format change, the cartoons from the Snow Goons collection were never put in a treasury book, so those Sunday strips have been reprinted only in black-and-white.)

Ten years after starting Calvin and Hobbes, I ended the strip. As much as I knew I'd miss the characters, the decision was long anticipated on my part. Professionally, I had accomplished far more than I'd ever set out to do and there were no more mountains I wanted to climb. Creatively, my interests were shifting away from cartooning toward painting, where I could develop my drawing skills further. And personally, I wanted to restore some balance to my life. I had given the strip all my time and energy for a decade (and was happy to do so), but now I was that much older and I wanted to work at a more thoughtful pace, out of the limelight, and without the pressures and restrictions of newspapers.

The final Calvin and Hobbes strip was a Sunday strip. The deadline for Sunday strips being early, I drew it well before writing the daily strips that would eventually precede it in the newspaper. I very much wanted to hit the right note for this final strip. I think it worked, but it was a bittersweet strip to draw.

Since Calvin and Hobbes, I've been teaching myself how to paint, and trying to learn something about music. I have no background in either subject, and there are certainly days when I wonder what made me trade proficiency and understanding in one field for clumsiness and ignorance in these others. On better days, I enjoy having so many new challenges and surprises. Even so, these new endeavors have only deepened my appreciation for comics. I no longer take quite so much for granted the versatility of comics and their ability to depict complex ideas in a beautiful, accessible, and entertaining form. For all their seeming simplicity, the expressive possibilities of comics rival those of any other art form. Five years after Calvin and Hobbes, I love the comics as much as ever.

Bill Watterson
Summer 2001

