
From Burned Out Tech CEO to Amazon Warehouse Associate



Cultivating Resilience is a newsletter that helps innovators navigate change and bring new products and ideas into the world. It's published by Jason Shen, a resilience coach, product manager, 1st-gen immigrant, ex-gymnast, and 3x startup founder.

🧠 After Burning Out of Tech, He Found Relief in an Unexpected Place: Working as an Amazon Warehouse Associate

Philip Su began his career by turning down a Stanford PhD program to work at Microsoft. He spent 12 years there working as a developer and a manager, then joined a pre-IPO Facebook as the director and London site lead for eight years, reporting directly to Facebook’s CTO.

He’s been an entrepreneur in residence with Seattle VC firm Madrona, taught a popular course at the University of Washington's computer science division, and was founder and CEO of a global health software nonprofit funded by the Bill and Melinda Gates Foundation.

But after a remarkable 23-year career in tech, Philip ran into trouble. The stress of running the nonprofit and the responsibility of employing dozens of people led to his burnout. He tried taking time off, and spent eight months unemployed, spiraling into depression.

To pull himself out, he took a new job, but not the one you'd expect. After a lifetime of “cushy” tech jobs, Philip went to work at an Amazon warehouse as an associate in Ship Dock, standing on his feet and sorting containers for 11 hours a day during the busiest six weeks of the year, known as “Peak”. The experience was physically taxing (he was diagnosed with tendonitis after moving hundreds of boxes a day), but it pulled him out of his depression and helped him gain perspective and a deeper sense of meaning. He produced a podcast about the experience called Peak Salvation.

Here’s Philip sharing that experience in his own words, along with a recap of lessons we can all learn.

Update: see Part 2 here


How I Burned Out and Decided to Work for Amazon

During my many years as a software engineer and manager, I struggled with stress-induced insomnia. At its worst, it'd take me anywhere from three to six hours to fall asleep after first going to bed.

In fact, this type of worry-induced insomnia was a key part of what led me to step down from my CEO role leading the nonprofit I founded in Seattle. By the time it grew beyond twenty employees, I found many things about the mantle of leadership overwhelmingly stressful. One of my biggest concerns was how to make payroll. Though the nonprofit was funded generously by the Bill & Melinda Gates Foundation, I found it a profound burden to be the primary person responsible for raising future funding. And ultimately, when you lead a team or a company, failures are all either directly or indirectly attributable to you, especially if you have a strong personal sense of responsibility and accountability.

After stepping down in March 2021, I didn't have any plans for what to do next. I decided to take some time to decompress and to reset.

I think most people who dream of retirement think that it's going to be awesome. And it was, for about a month. I skied on weekdays, shopped at Target at 11am with nobody there, and played video games. But after several months of pursuing whatever hobbies my whims and interests suggested (all the things people who aspire to retire young might look upon with envy), I felt unfulfilled. I became unmoored, set adrift in a sea of theoretical possibility only to drown in unbounded optionality. Novelty and excitement turned into a spiraling vortex of depression as I began to wake up sometimes at noon, sometimes at 2pm, and on rare occasions not until 6pm.

For me, a lot of my meaning comes from two things. One is doing something in the world that feels like it's actually making things a little better somehow: contributing to society in some meaningful way.

The other is socializing with coworkers, which is a huge part of my daily satisfaction in a job. When you're unemployed, you might be free Monday through Friday, but all of your friends are working when you want to grab coffee.

For me, feeling quite isolated in my unemployment worsened my mental health. I was going to therapy and getting medication, but I felt depressed and things seemed to be spiraling downward.

In November, I suddenly thought, "I should at least get some sort of job somewhere just to have some regularity to my schedule, to enforce some daily practices". My number one priority was just to have structured work that would force me to get up every day—work that was very different from white collar jobs, in that I did not want to be asked to make a lot of decisions everyday.

I didn't want the stress of managing people and teams. I didn't want the politics of subjective decisions being debated amongst team members. I wanted literally to be told what to do every day and I wanted that structure to be rigorous. I strongly felt that would help me get out of my depression.

Day-to-Day Life as an Amazon Warehouse Associate

After a short job application that required a disappointingly uneventful drug test, I was approved for a job as an Amazon associate. I'll admit, I was a little skeptical of all these breathless exposés about the horrible working conditions at Amazon, because ultimately people are electing to work there despite the job market having far more openings than there are people to fill them. So I went in wanting to see how bad the job was.

As I worked there, I saw how grueling and in some ways quite unreasonable the job is. I'm only in my mid-forties and I found it exhausting. I would see people in their sixties on the floor lifting these boxes, and I would end up helping them because I felt so bad.

Amazon knows the job is hard. When you join as a warehouse associate, the swag they give you on the first day is a gallon-sized Ziploc bag with a heat and ice pack, one tear-away package of sports drink mix, one cloth COVID mask with an Amazon logo on it, and some kind of pain reliever. What do those four items tell you? “Get ready for pain”.

I worked in Ship Dock at BFI4, Amazon's flagship facility in south Seattle, in an area called Kent. It houses about 3,000 associates and the building never shuts down, so it runs two shifts: a day shift and a night shift. During peak season (mid-November to December 26), each shift works 11 to 11.5 hours, right up to the edge of the state-mandated maximum of 12 hours.

In the morning you're often busy trying to just find a parking space—the lot fits hundreds of cars—and clock in before the official 6:30 AM start time.

There's a five minute grace period, but at 6:36 AM, they dock an hour's pay. That's called unpaid time. And if you ever accrue more than 10 hours or the equivalent of one shift of unpaid time, you get fired. I made $18.55 an hour plus healthcare starting day one.

It's a huge warehouse. There’s the clacking of machines, the smell of machine oil, and concrete flooring everywhere. You enter through the front gates through security, through these huge metal bars.  You scan your card at the time card machine, and then your team has a morning standup.

The manager tells you what's special about the day, how many packages you're moving, and then you get your job assignments via this whiteboard. You find out where you're assigned to, and then you go to that spur, the place where packages come out.

There are a total of about 32 spurs. Your job is to scan each package with a hand scanner and sort it into one of maybe 8 to 10 carts based on destination. You're expected to lift about 180 to 200 packages per hour. All the box lifting created a sort of tension in my hands, and eventually numbness and tingling every morning when I woke up. Towards the end, I saw a doctor for the pain and was diagnosed with tendonitis.

You can pass your whole day just lifting like that, with a mandatory unpaid 30-minute lunch break and legislated 15-minute breaks in the morning and afternoon. When it's six o'clock, there's a massive pile of people waiting to scan their card keys to exit the building through turnstiles.

You can sit in a traffic line for up to 20 minutes waiting to get out of the parking lot, because probably 800 or 900 employees are trying to leave at that time while another 800 or 900 are trying to come in. Eventually you get home, eat dinner, do whatever you need to do to manage your life, and repeat the cycle all over again.

Amazon can call you into a full day of overtime with as little as 18 hours’ notice. During my entire time at Amazon, I’d always need to check my latest schedule in Amazon A-to-Z some time after noon on the day before any expected day off because Amazon could have changed it to a required overtime day instead.

Evolving My Relationship With Work

I had what I would stereotype as a traditional Chinese upbringing in America, which meant my parents very much expected straight A's. Anytime a B happened, something had gone wrong. The explanation was never that you lacked the talent; it was that you didn't work hard enough.

I honestly found school to be a lot easier than some of my peers did, and that reinforced this pleasure in succeeding. The moment I graduated and started working full-time at Microsoft, I immediately missed that sense of knowing how I was doing. Who's going to tell me if I'm getting an A or a B?

I quickly gravitated at work toward asking, “What does my review say? What is my level of progression in this company? Am I moving ahead?” I had gotten so used to this idea that quantifiable measurements by people outside of me would tell me how I was doing. I needed to be told all the time that I was doing well.

That created, I think, a good decade and a half of work-ism in me, focused on career acceleration as the number one most important thing. I had a sleeping bag in my office at Microsoft and I used it often. I had an alarm that would wake me up at 6:00 AM or 9:00 AM and I would just get working again.

I remember getting promoted to some senior level as an engineer at Microsoft and saying to my manager: “Great, let’s talk about what I need to do to get to the next level”. He was a bit taken aback but gently suggested that I just try to enjoy the new position for a bit. Only in retrospect did it dawn on me that he was much older than me yet probably at the same level I was so eager to advance beyond.

When I was leading the London office for Facebook, I was working all day, then going to evening events, recruiting people, and giving tech talks. One day, my then seven-year-old son came into the room. We had bought a little Norse chess set and he asked if we could play sometime. I said, sure thing, no problem. And as a seven-year-old, he said "Can you put it on your calendar?"

That was such a moment of a mirror being held up to me of “Wow, something has gone wrong”. When I say to my son we'll play chess, his first response is to be skeptical and to say, "can you put it on your calendar?" That really tells you where your priorities are, right? And so that moment for me was very sobering.

I feel like I’ve been bimodal with work. I'm either working way too much time and not spending enough time with the family, or I've gone completely to zero, where dad is at home all day, typing on his computer.

Over time, I've been really trying to be more carefully balanced. When my kids ask me to do things with them now, I usually try to drop whatever I'm doing, because they only have so many years left with me in the house.


Thank you for being a member of Cultivating Resilience. This newsletter has spread almost exclusively by word of mouth. Would you help share it with a friend or two who might also enjoy it?

Recent Issues

119: My 12 Hour Walk

💡Cultivating Resilience is a newsletter that helps innovators navigate change and bring new products and ideas into the world. It’s published by Jason Shen, a resilience coach, product manager, 1st-gen immigrant, ex-gymnast, and 3x startup founder. 🧠 Brb, going on a 12 hour walk Why would anyone cho…


118: Creative Output

🧠 Enhancing Your Creative Output
🖼 Unscratchable (Scotch & Bean)
👉 A Different Aftermath (AI Comic)


117: Resilience Archetypes

🤔 How do you approach change?
🧠 Saying No to Say Yes
🖼 Deadlines (Scotch & Bean)
👉 Serena’s Final Fight


More Resources and Fun Stuff

  • Book Notes: Summaries / quotes from great books I've read
  • Scotch & Bean: a webcomic about work, friendship, and wellness
  • Birthday Lessons: Ideas, questions, and principles I've picked up over the years
  • Career Spotlight: A deep dive into my journey as an athlete, PM, founder, and creator.

Work with Me

Working with an executive coach can help you take on bigger and bolder opportunities in a volatile environment—without destroying your sense of self. I currently have 1-2 open slots for new clients, so if that sounds interesting, take a look.

Executive Coaching with Jason

I partner with entrepreneurial leaders to beat burnout, navigate change, and ship work that matters.

Learn More


Chrome’s new ad-blocker-limiting extension platform will launch in 2023


(Image credit: Isaac Bowen / Flickr)

Google's journey toward Chrome's "Manifest V3" has been happening for four years now, and if the company's new timeline holds up, we'll all be forced to switch to it in year 5. "Manifest V3" is the rather unintuitive name for the next version of Chrome's extension platform. The update is controversial because it makes ad blockers less effective under the guise of protecting privacy and security, and Google just so happens to be the world's largest advertising company.

Google's latest blog post details the new timeline for the transition to Manifest V3, which involves ending support for older extensions running on Manifest V2 and forcing everyone onto the new platform. Starting in January 2023 with Chrome version 112, Google "may run experiments to turn off support for Manifest V2 extensions in Canary, Dev, and Beta channels." Starting in June 2023 and Chrome 115, Google "may run experiments to turn off support for Manifest V2 extensions in all channels, including stable channel." Also starting in June, the Chrome Web Store will stop accepting Manifest V2 extensions, and they'll be hidden from view. In January 2024, Manifest V2 extensions will be removed from the store entirely.

Google says Manifest V3 is "one of the most significant shifts in the extensions platform since it launched a decade ago." The company claims that the more limited platform is meant to bring "enhancements in security, privacy, and performance." Privacy groups like the Electronic Frontier Foundation (EFF) dispute this description and say that if Google really cared about the security of the extension store, it could just police the store more actively using actual humans instead of limiting the capabilities of all extensions.



This Found-Sound Organ Was Made with Python and a Laser Cutter


Certain among our readership will no doubt remember attaching a playing card to the front fork of one’s bicycle so that the spokes flapped the card as the wheel rotated. It was supposed to sound like a motorcycle, which it didn’t, but it was good, clean fun with the bonus of making us even more annoying to the neighborhood retirees than the normal baseline, which was already pretty high.

[Garett Morrison]’s “Click Wheel Organ” works on much the same principle as a card in the spokes, only with far more wheels, and with much more musicality. The organ consists of a separate toothed wheel for each note, all turning on a common shaft. Each wheel is laser-cut from thin plywood, with a series of fine teeth on its outer circumference. The number of teeth, as calculated by a Python script, determines the pitch of the sound made when a thin reed is pressed against the spinning wheel. Since the ratio of teeth between the wheels is fixed, all the notes stay in tune relative to each other, as long as the speed of the wheels stays constant.
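The post doesn't include the script, but the arithmetic behind it is simple: a wheel with T teeth on a shaft spinning at R revolutions per second flaps the reed T × R times per second, so T ≈ frequency / R. A hypothetical sketch of that calculation (the shaft speed here is a made-up value, and rounding to whole teeth introduces small tuning errors):

```python
SHAFT_REV_PER_SEC = 5.0  # assumed shaft speed, revolutions per second
A4 = 440.0               # reference pitch in Hz

def note_freq(semitones_from_a4: int) -> float:
    """Equal-tempered frequency a given number of semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

def teeth_for(freq_hz: float, rev_per_sec: float = SHAFT_REV_PER_SEC) -> int:
    """Tooth count so that teeth * revolutions/second ~= desired frequency."""
    return round(freq_hz / rev_per_sec)

# Tooth counts for one octave starting at A4: an octave up doubles the teeth
scale = {n: teeth_for(note_freq(n)) for n in range(13)}
```

Because every wheel shares the shaft, the tooth-count ratios (and hence the intervals) hold at any speed; only the absolute pitch drifts when the motor bogs down, which matches what the video shows.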

The proof-of-concept in the video below shows that speed control isn’t quite there yet — playing multiple notes at the same time seems to increase drag enough to slow the wheels down and lower the pitch for all the notes. There appears to be a photointerrupter on the wheel shaft to monitor speed, so we’d imagine a PID loop to control motor speed might help. That and a bigger motor that won’t bog down as easily. As for the sound, we’ll just say that it certainly is unique — and, that it seems like something [Nicolas Bras] would really dig.


Avoiding homework with code and getting caught


Back in 2020, my school used a few online learning platforms that allowed professors/teachers to assign homework to students. I, as a lazy developer, wanted to spend more time playing games and writing code, especially when everyone was stuck at home because of lockdown. I started writing this post in January of 2022, but put off publishing it for a while. It has been long enough since it all happened, so please sit back and enjoy.

The back story

Let's set the scene. 2018: my school introduces a new online homework platform called HegartyMaths, and it does a lot. It's fairly simple: teachers choose a topic to set as homework, we get a 10-15 minute tutorial/informational video on the subject (during which we have to write down notes), and a shortish quiz to complete after finishing the video. It's a lot of work, especially the quiz, and in the worst cases one topic can take up to an hour to complete (bad).

Mostly, software engineers are rather lazy individuals. We tell metal how to do stuff for us. Homework, then, is naturally an arduous task for a developer who is still at school. So, still in 2018, a close friend of mine by the name of Scott Hiett and I decided to do something about the Hegarty situation. We started to reverse engineer the frontend app and eventually came up with a Tampermonkey userscript that would glitch the embedded YouTube player into reporting that we'd watched the video at least once. Crucially, our teachers could see how many times we'd watched the video, so being able to skip up to 20 minutes of homework time was especially useful – and it was a lot of fun to build too.

So we flexed it on our Snapchat stories and had our school friends message us to use it blah blah. We eventually figured out that we could also set it to be watched over 9999x times; every time we did that our accounts were reset by the Hegarty team.

The first email

After this, we got in contact with our math teacher in November of 2018 and had her send an email to HegartyMaths informing them of our petty exploit; they got back to us very quickly. I don't have the original email anymore, but I distinctly remember it saying something along the lines of "Stop trying to hack our platform and get back to doing your homework." Edit: while writing this, I was able to uncover the deleted email from a photo we had taken of it in 2020. See below (certain details redacted for obvious reasons):

Hegarty Time Exploit Email

This response excited us a bit: they were now aware of us messing around with the site, and they had no intention of fixing the minor vuln we'd found, so we kept using it. We had tried to build a script to answer the questions for us, but it was too much work at the time (complex data structures, weird API responses, etc.).


Educake

For a while, students had access to another platform called Educake. It's similar to HegartyMaths but targets Biology, Chemistry, and Physics, with no video to watch at the beginning. We'd used it for a few years, in fact since I joined the school, but I'd never thought about reversing it until all of this began.

One common factor between Hegarty and Educake is that both immediately give you the correct answer if you get a question wrong. We took advantage of this and wrote a small node/mongo app and a Tampermonkey script to detect when a user was on a quiz page, answer every question with a random number, and then store the correct answer in MongoDB. I don't have the original source, but the Tampermonkey script was probably something like the following:

const guess = Math.random();

// Submit the random guess; the response reveals the real answer
const result = await post('/api/answer', {
	body: {
		answer: guess,
	},
});

// Save the correct answer to our local node/mongo app
await post('http://localhost:8080/save', {
	body: {
		question_id: question.id,
		answer: result.success ? guess : result.correct_answer,
	},
});

// Go to next question and repeat code above

As you can see, it was quite literally a loop through every question, saving the correct answer as we got it and moving on. Eventually I added a few more features: fetching from the database if we already had the right answer (so we didn't answer Math.random every time), and support for multiple choice (so that we actually picked one of the possible answers rather than making one up; I was surprised the Educake backend would accept an answer that wasn't even among the possible choices).
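That lookup-first flow can be sketched roughly as follows. This is a hypothetical stand-in, not the original code: every name is invented, `db` is a plain dict playing the role of the Mongo store, and `submit` abstracts the platform's answer endpoint, which reports success and (on failure) the correct answer:

```python
import random

def answer_question(question, db, submit, choices=None):
    """Answer one quiz question, preferring answers we've already stored.

    `submit(question_id, guess)` returns a dict like
    {"success": bool, "correct_answer": str}, mirroring the behaviour
    described above.
    """
    known = db.get(question["id"])
    if known is not None:
        guess = known                   # reuse the stored correct answer
    elif choices:
        guess = random.choice(choices)  # multiple choice: pick a real option
    else:
        guess = str(random.random())    # free text: throwaway guess
    result = submit(question["id"], guess)
    correct = guess if result["success"] else result["correct_answer"]
    db[question["id"]] = correct        # remember for next time
    return correct
```

The first pass through a quiz populates the store; every later attempt answers from it and scores 100%.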

Now working on the project solo, I decided it was time to build a nice UI for it all and bundle it into a simple Tampermonkey script, both for flexing rights on Snapchat (people constantly begging to use it was certainly ego fuel I hadn't experienced before) and to get myself out of homework I didn't want to do.

The end result? A ~200-line codebase that scooped up every question and answer on the site and could repeatedly get 100% on every single assignment, plus a 15 MB Mongo database.

Below is a small video of what it all looked like. It also demonstrates a feature I added allowing for a "target percentage", meaning users could get something other than 100% to look like a more real/human score. The video was recorded on my Snapchat in November 2019.

Hegarty 2

The success of this script, along with pressure from my peers, gave me a lot of motivation to start working on reversing Hegarty again. I reached out to an internet friend who, for the sake of his privacy, will be named "Jake." He also used HegartyMaths at his school and was in the same boat, trying not to do his homework. Together, we managed to figure out how to answer many varying types of questions, including multiple choice and ordered answers, resulting in a huge amount of stored data. We had sacrificial user accounts and managed to answer 60,000 questions in a couple of minutes, rocketing our way to the top of the HegartyMaths global leaderboard. I'd like to give a special shoutout to Boon for lending us his login and letting us decimate his statistics.

Together, Jake and I scraped the entirety of Hegarty's database and now had a JSON file that could be argued to be worth as much as Hegarty the company itself due to the entire product quite literally being the database we had copied.

With this file, I wanted to take it a step further and let my friends and other people make good use of it without directly giving out the database (irresponsible)... and that's where Mochip came in.


Mochip

So, where does Mochip tie into this? Mochip was a Chrome extension plus our scraped Hegarty and Educake databases, sitting behind a TypeScript API and a small React app. Hosted on Heroku's free tier and MongoDB Atlas's free tier, it let users log in, enter a question (from either site), and get back a list of answers Mochip had for that question. Here's what the landing page looked like:

Screenshot of Mochip's main dashboard page

In the screenshot, we can see a few stats on the right, like total estimated time saved and how long you've had your account. We gamified it a little just to keep people engaged.

Our Chrome extension was made for Educake, as they disabled copying question text to the clipboard. We re-enabled that with a button injected into the UI. The extension is no longer on the Chrome Web Store, but we've found that mirrors still have listings we can't get taken down: extpose.com/ext/195388

Our userbase grew so big that we ended up with a Discord server and even our own listing on Urban dictionary — I'm yet to find out who made it! urbandictionary.com/define.php?term=mochip

Eventually we "rebranded," as I wanted to disassociate my name from the project. Unfortunately, I don't have any screenshots from this era to show. I made an alt Discord account and posted a few announcements saying we'd "passed on ownership"; however, this inevitably lasted only a couple of weeks before we were rumbled.

Crashing down

All good things must come to an end, and Mochip's came after Scott posted about it on his Reddit account. Like any good CEO, Colin searches for his company on Google every now and then to see what people are saying or doing, and unfortunately he came across our Reddit post. He signed up (although under a different email), tested out the app, and was shocked to see it working. Shortly after, I received an email from Colin directly. See below:

Email from Colin

I was upset but also a little content — it was sort of validation that I'd successfully made it and that catching the attention of Colin himself was sort of a good thing. We quickly scheduled a Google Meet, also inviting Scott, and I had one of the most memorable conversations of my life. I am extremely grateful for the advice Colin gave us in the call.

Screenshot of Google Meet

I'd like to give a special thank you to the legendary Colin Hegarty for his kindness and consideration when reaching out to me. Things could have gone a lot worse for me had this not been the case. HegartyMaths is a brilliant learning resource and at the end of the day, it's there to help students learn rather than an inconvenience.

Shortly after, Colin reached out to the Educake team, who we also scheduled a call with. We explained our complete methodology and suggested ways to prevent this in the future. The easiest fix from our point of view would be to implement a simple rate limit with Redis that would make it wildly infeasible to automate a test. The other suggestion was to scramble the IDs in the database to invalidate our cloned copy as much as possible (e.g. we only had the Hegarty IDs, so we could no longer reverse-lookup a question).
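The rate-limit idea can be sketched as a fixed-window counter. This is a hypothetical illustration, not Educake's implementation: a plain dict stands in for Redis, where the same logic would be an INCR plus an EXPIRE on a per-user, per-window key:

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter; a dict stands in for Redis INCR + EXPIRE."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (user, window_number) -> attempts in that window

    def allow(self, user: str, now=None) -> bool:
        """Record one attempt and report whether it is within the limit."""
        now = time.time() if now is None else now
        window_number = int(now // self.window)  # plays the role of a key suffix
        key = (user, window_number)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit
```

With a limit of, say, 20 answers per minute, a script hammering the answer endpoint stalls almost immediately, while a student answering by hand never notices the limit exists.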

My email replying to Colin

Thank you for reading, truly. Mochip was a real passion project and I had a wild time building it. ⭐


Beyond Meat COO reportedly attempts to consume human nose


Memo to Beyond Meat COO Doug Ramsey: human noses aren’t vegan.

Ramsey was arrested in a road rage incident at a parking garage in Fayetteville, Arkansas. According to local news outlet KNWA, which cited law enforcement, the incident started when “a Subaru ‘inched his way’ in front of Ramsey’s Bronco, making contact with the front passenger’s side tire.”

Ramsey came out swinging, and the Subaru’s owner said Ramsey “pulled him in close and started punching his body” before he “bit the owner’s nose, ripping the flesh on the tip of the nose,” according to KNWA. Reportedly, Ramsey also threatened to kill the other driver.

Maybe Ramsey was on edge because Beyond Meat shares have fallen 92 percent from their peak closing value of $234.90 in...

Continue reading…


I’m a productive programmer with a memory of a fruit fly


A love letter to tools that changed everything for me.

Programming Over the Years

Programming has become vastly more varied since I started dabbling in AmigaBASIC in the mid-1990s. Back then you could buy one very big book about the computer you were programming, and you were 99% there. That book, full of dog-ears and Post-its, lay next to you while you hacked away in your monochrome editor, always within reach.

Nowadays it can happen that the book on your frontend web framework is thicker than what a C64 programmer needed to write a complete game. On the other hand, the information for everything that we need to write code today is usually no more than one click away.

Nobody can imagine paying for developer documentation: both Microsoft and Apple offer their documentation on the web for free for everyone. And don't even get me started about open-source projects!

In times of npm, PyPI, and GitHub, it's hard to explain that requiring anything beyond what your operating system offers used to be a controversial decision that had to be weighed judiciously. Often, you shipped your dependencies along with your product.

The new availability is great and variety is healthy, but it leads to fragmentation of the information that you need to be productive.

People have dozens of tabs open with API docs for the packages they're using at the moment. As someone who has worked from rural South Africa, I can tell you that online-only docs aren't only a problem when your Internet dies altogether. Especially if you need to use the site's search function.

If you're like me, a polyglot working with multiple programming languages that each have enormous sub-communities (even within Python, Flask + SQLAlchemy + Postgres is a very different beast from writing asyncio-based network servers), it hurts your head to even imagine remembering the arguments of every method you use. Especially if you're really like me and barely remember your own phone number.

That’s why it was such a life-changing event for me when I found Dash in 2012.

API Documentation Browsers

Your mind is for having ideas, not holding them.

David Allen
Dash searching


Dash gives me the superpower of having all relevant APIs one key press away:

  • I press ⌥Space and a floating window pops up with an activated search bar,
  • I start typing the rough name of the API or topic,
  • I choose from the suggestions and land at the symbol within the official project documentation,
  • I press Escape, the floating window disappears, and I can start typing code immediately because my editor is in focus again,
  • If I forget what I just read, I press ⌥Space again and the window pops up at the same position.

All of this is blazing fast – I want such a round trip to take under 2 seconds. The forgotten bliss of native applications – and yes, I know about https://devdocs.io.

Having all API docs one key press away is profoundly empowering.

The less energy I spend trying to remember the argument of a function or the import path of a class, the more energy I can spend on thinking about the problem I’m solving.

I don’t consider myself particularly smart, so I take any opportunity to lower my mental load.

While Dash is a $30 Mac app, there’s the free Windows and Linux version called Zeal, and a $20 Windows app called Velocity. Of course there’s also at least one Emacs package doing the same thing: helm-dash.

Meaning: you can have this API bliss on any platform! In the following I’ll only write about Dash, because that’s what I’m using, but unless noted otherwise, it applies to all of them.

I don't know why, but there are also countless plugins for applications like Alfred or VS Code. I just like pressing the same key combination no matter where I am and drilling down into the docs. This is why I don't find the many web API browsers interesting at all – I want to be in and out of the docs within 5 seconds.

The one thing they have in common is the format of the local documentation.

Documentation Sets

They all use Apple’s Documentation Set Bundles (docsets) that are directories with the HTML documentation, metadata in an XML-based property list and a search index in a SQLite database:

└── Contents
    ├── Info.plist # ← metadata
    └── Resources
        ├── Documents # ← root dir of HTML docs
        │   └── index.html
        └── docSet.dsidx  # ← SQLite db w/ search index

If you have a bunch of HTML files on your disk, you can convert them into a docset that can be consumed by Dash. It’s just HTML files with metadata. And since it’s HTML files on your disk, all of this works offline.

Therefore, docsets can replace documentation that you already keep locally on your computer for faster and/or offline access without doing anything special. Just package it up into the necessary directory structure, add an empty index, and fill out simple metadata.
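To make that concrete, here’s a hypothetical sketch that wraps a directory of HTML files into a minimal docset. The Info.plist keys and the `searchIndex` table layout follow Dash’s docset generation guide; the function name is mine:

```python
import plistlib
import sqlite3
from pathlib import Path


def build_docset(name: str, html_dir: Path, out_dir: Path) -> Path:
    """Wrap a directory of HTML docs into a minimal Dash docset."""
    docset = out_dir / f"{name}.docset"
    documents = docset / "Contents" / "Resources" / "Documents"
    documents.mkdir(parents=True, exist_ok=True)

    # Copy the HTML tree into Documents/ -- the root dir of the docs.
    for src in html_dir.rglob("*"):
        if src.is_file():
            dst = documents / src.relative_to(html_dir)
            dst.parent.mkdir(parents=True, exist_ok=True)
            dst.write_bytes(src.read_bytes())

    # Simple metadata in an XML-based property list.
    with (docset / "Contents" / "Info.plist").open("wb") as f:
        plistlib.dump(
            {
                "CFBundleIdentifier": name.lower(),
                "CFBundleName": name,
                "DocSetPlatformFamily": name.lower(),
                "isDashDocset": True,
                "dashIndexFilePath": "index.html",
            },
            f,
        )

    # An empty search index in the SQLite database Dash expects.
    db = sqlite3.connect(str(docset / "Contents" / "Resources" / "docSet.dsidx"))
    db.execute(
        "CREATE TABLE searchIndex"
        "(id INTEGER PRIMARY KEY, name TEXT, type TEXT, path TEXT)"
    )
    db.commit()
    db.close()
    return docset
```

Filling `searchIndex` with (name, type, path) rows is what turns plain HTML into something searchable – exactly the part that converters like doc2dash automate.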

Shazam! Now you can conjure them with a single keypress and get rid of them with another.

Let’s circle back to the boring history lesson from the beginning: there’s a myriad of projects that I use across countless platforms – every day. And I’m not talking just about programming APIs here: Ansible roles, CSS classes, HAProxy configuration, Postgres (and SQL!) peculiarities…it’s a lot.

Installed Dash docsets

And while Python and Go core documentation ship with Dash, and while Godoc documentation can be added directly by URL, no matter how hard Dash tries: in the fragmented world of modern software development, it will never be able to deliver everything I need.


The most notable gap for me is Sphinx-based docs, which dominate (not only) the Python ecosystem.

Sphinx is a language-agnostic framework to write documentation. Not just API docs or just narrative docs: all of it, with rich interlinking. It used to be infamous for forcing reStructuredText on its users, but nowadays more and more projects use the wonderful MyST package to do it in Markdown. If you have any preconceptions about the look of Sphinx documentation, I urge you to visit the Sphinx Themes Gallery and see how pretty your docs can be. It’s written in Python, but it’s used widely, including Apple’s Swift, the LLVM (Clang!) project, or wildly popular PHP projects.

And it offers the exact missing piece: an index for API entries, sections, glossary terms, configuration options, command line arguments, and more – all distributed throughout your documentation any way you like, but always mutually linkable. I find this particularly wonderful if you follow a systematic framework like Diátaxis.

The key component that makes this possible is technically just an extension: intersphinx. Originally made for inter-project linking – hence the name – it offers a machine-readable index for us to take. That index grew so popular that it’s now supported by the MkDocs extension mkdocstrings and pydoctor. You can recognize intersphinx-compatible documentation exactly by that index file: objects.inv.
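To illustrate what that index looks like, here’s a sketch of a minimal objects.inv reader. The format is the documented Sphinx inventory v2 layout: four plain-text header lines followed by a zlib-compressed body of `name domain:role priority uri dispname` records; the function name is mine:

```python
import zlib


def parse_objects_inv(data: bytes) -> dict:
    """Map entry names to URIs from a Sphinx v2 objects.inv blob."""
    # The first four lines are plain text: magic line, project, version,
    # and a note that the remainder is zlib-compressed.
    parts = data.split(b"\n", 4)
    if not parts[0].startswith(b"# Sphinx inventory version 2"):
        raise ValueError("only inventory version 2 is supported")

    entries = {}
    for line in zlib.decompress(parts[4]).decode().splitlines():
        if not line or line.startswith("#"):
            continue
        name, _domain_role, _priority, uri, _dispname = line.split(None, 4)
        # A trailing "$" is shorthand for "URI ends with the entry name".
        if uri.endswith("$"):
            uri = uri[:-1] + name
        entries[name] = uri
    return entries
```

This mapping of names to locations is all an API browser needs to build its search index – which is why the objects.inv file is such a convenient source for converters.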

And that’s why, 10 years ago almost to the day, I started the doc2dash project.


doc2dash

doc2dash is a command line tool that you can get from my Homebrew tap, download as one of the pre-built binaries for Linux, macOS, and Windows from its release page, or install from PyPI.

Then, all you have to do is to point it at a directory with intersphinx-compatible documentation and it will do everything necessary to give you a docset.

doc2dash converting

Please note that the name is doc2dash and not sphinx2dash. It was always meant as a framework for writing high-quality converters, the first ones being Sphinx and pydoctor. That hope sadly didn’t work out, because – understandably – every community wanted to use their own language and tools.

Those tools usually look quite one-off to me, though, so I’d like to re-emphasize that I would love to work with others on adding support for other documentation formats. Don’t reinvent the wheel, the framework is all there! It’s just a bunch of lines of code! You don’t even have to share your parser with me and the world.

The fact that both Dash and doc2dash have existed well over a decade and I still see friends have a bazillion tabs with API docs open has been positively heartbreaking for me. I keep showing people Dash in action and they keep saying it’s cool, and put it on their someday list. Barring another nudge, someday never comes.

While the fruit-fly part of this article ends here, let me try to give you that nudge with a step-by-step tutorial such that today becomes someday!

Tutorial: Convert and Submit Your Docs

The goal of this tutorial is to teach you how to convert intersphinx-compatible documentation to a docset and how to submit it to Dash’s user-generated docset registry, such that others don’t have to duplicate your work.

I’ll assume you have picked and installed your API browser of choice. It doesn’t matter which one you use, but the tutorial uses Dash. For optionally submitting the docset at the end, you’ll also need a basic understanding of GitHub and its pull request workflow.

I will be using this tutorial as an occasion to finally start publishing docsets of my own projects, starting with structlog. I suggest you pick an intersphinx-compatible project that isn’t supported by Dash yet and whose documentation’s tab you visit most often.

Let’s do this!

Getting doc2dash

If you’re already using Homebrew, the easiest way to get doc2dash is to use my tap:

$ brew install hynek/tap/doc2dash

There are pre-built bottles for Linux x86-64 and macOS on both x86-64 and Apple silicon, so the installation should be very fast.

Unless you know your way around Python packaging, the next best way is the pre-built binaries from the release page. Currently it offers binaries for Linux, Windows, and macOS – all on x86-64. I hope to offer more in the future if this proves to be popular.

Finally, you can get it from PyPI. I strongly recommend using pipx, and the easiest way to run doc2dash with it is:

$ pipx run doc2dash --help

Building Documentation

Next comes the biggest problem – and source of frequent feature requests for doc2dash: you need the documentation in a complete, built form. Usually that means that you have to download the repository and figure out how to build the docs before even installing doc2dash because most documentation sites unfortunately don’t offer a download of the whole thing.

My personal heuristic is to look for a tox.ini or noxfile.py first and see if it builds the documentation. If it doesn’t, I look for a readthedocs.yml, and if even that lets me down, I’m on the lookout for files named like docs-requirements.txt or optional installation targets like docs. My final hope is going through pages of YAML and inspecting CI configurations.
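That heuristic is easy to automate. Here’s a hypothetical sketch that checks a checkout for those conventional files in the same order; the candidate list and the function name are just my conventions, not an official API:

```python
from pathlib import Path

# The usual suspects, in the order I check them by hand.
CANDIDATES = [
    "tox.ini",
    "noxfile.py",
    "readthedocs.yml",
    ".readthedocs.yaml",
    "docs-requirements.txt",
    "docs/requirements.txt",
]


def find_docs_build_hint(repo):
    """Return the first conventional docs-build file found in repo, or None."""
    for candidate in CANDIDATES:
        if (Path(repo) / candidate).exists():
            return candidate
    return None
```

It won’t save you from reading pages of CI YAML in the worst case, but it front-loads the most common conventions.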

Once you’ve managed to install all dependencies, it’s usually just a matter of make html in the documentation directory.

After figuring this out, you should have a directory called _build/html for Sphinx or site for MkDocs.

Please note that with MkDocs, if the project doesn’t use the mkdocstrings extension – which, alas, is virtually all of the popular ones right now – there won’t be an objects.inv file and therefore no API data to consume.

I truly hope that more MkDocs-based projects add support for mkdocstrings in the future! As with Sphinx, it’s language-agnostic.


Converting

Following the hardest step comes the easiest one: converting the documentation we’ve just built into a docset.

All you have to do is point doc2dash at the directory with the HTML documentation and wait:

$ doc2dash _build/html

That’s all!

doc2dash knows how to extract the name from the intersphinx index and uses it by default (you can override it with --name). You should be able to add this docset to an API browser of your choice and everything should work.

If you pass --add-to-dash or -a, the final docset is automatically added to Dash when it’s done. If you pass --add-to-global or -A, it moves the finished docset to a global directory (something like ~/Library/Application Support/doc2dash/DocSets) and adds it from there. It’s rare that I run doc2dash without -A when creating docsets for myself.

Improving Your Documentation Set

Dash’s documentation has a bunch of recommendations for how you can improve the docset that we built in the previous step. It’s important to note that the next five steps are strictly optional and more often than not, I skip them because I’m lazy.

But in this case, I want to submit the docset to Dash’s user-contributed registry, so let’s go the full distance!

Set a Main Page

With Dash, you can always search all installed docsets, but sometimes you want to limit the scope of your search. For example, when I type p3: (the colon is significant), Dash switches to searching only the Python 3 docset. Before you start typing, it offers you a menu underneath the search box whose first item is “Main Page”.

When converting the structlog docs, this main page is the index, which can be useful, but usually isn’t what I want. When I go to the main page, I want to browse the narrative documentation.

The doc2dash option to set the main page is --index-page or -I and takes the file name of the page you want to use, relative to the documentation root.

Confusingly, the file name of the index is genindex.html and the file name of the main page is the HTML-typical index.html. Therefore, we’ll add --index-page index.html to the command line.

Add an Icon

Documentation sets can have icons that are shown throughout Dash next to the docset’s name and symbols. That’s pretty, but it also helps you recognize docsets faster and, if you’re searching across multiple docsets, see where a symbol is coming from.

structlog has a cute beaver logo, so let’s use ImageMagick to resize the logo to 16x16 pixels:

$ magick \
    docs/_static/structlog_logo_transparent.png \
    -resize 16x16 \
    docs/_static/docset-icon.png

Now we can add it to the docset using the --icon docs/_static/docset-icon.png option.

Support Online Redirection

Offline docs are awesome, but sometimes it can be useful to jump to the online version of the documentation page you’re reading right now. A common reason is to peruse a newer or older version.

Dash has the menu item “Open Online Page ⇧⌘B” for that, but it needs to know the base URL of the documentation. You can set that using --online-redirect-url or -u.

For Python packages on Read the Docs, you can pick between stable (last VCS tag) and latest (current main branch).

I think latest makes more sense if you’re leaving the comfort of offline documentation, so I’ll add:

--online-redirect-url https://www.structlog.org/en/latest/

Putting It All Together

We’re done! Let’s run the whole command line and see how it looks in Dash:

$ doc2dash \
    --index-page index.html \
    --icon docs/_static/docset-icon.png \
    --online-redirect-url https://www.structlog.org/en/latest/ \
    docs/_build/html
Converting intersphinx docs from '/Users/hynek/FOSS/structlog/docs/_build/html' to 'structlog.docset'.
Parsing documentation...
Added 238 index entries.
Patching for TOCs... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00


structlog’s Main Page

Notice the icon in the search bar; pressing ⇧⌘B on any page with any anchor takes me to the same place in the latest version of the online docs.


Automation

Since I want to create a new version of the docset for every new release, the creation needs to be automated. structlog already uses GitHub Actions as CI, so it makes sense to use it for building the docset too.

For local testing I’ll take advantage of doc2dash being a Python project and use a tox environment that reuses the dependencies that I use when testing documentation itself.

The environment installs structlog[docs] – i.e. the package with optional docs dependencies, plus doc2dash. Then it runs commands in order:

[testenv:docset]
extras = docs
deps = doc2dash
allowlist_externals =
    rm
    cp
    tar
commands =
    rm -rf structlog.docset docs/_build
    sphinx-build -n -T -W -b html -d {envtmpdir}/doctrees docs docs/_build/html
    doc2dash --index-page index.html --icon docs/_static/docset-icon.png --online-redirect-url https://www.structlog.org/en/latest/ docs/_build/html
    cp docs/_static/docset-icon@2x.png structlog.docset/icon@2x.png
    tar --exclude='.DS_Store' -cvzf structlog.tgz structlog.docset

Now I can build a docset just by calling tox -e docset. Until doc2dash supports hi-res icons, it also copies a 32x32 pixel version of the logo directly into the docset.

Doing that in CI is trivial, but entails tons of boilerplate, so I’ll just link to the workflow. Note the upload-artifact action at the end that allows me to download the built docsets from the run summaries.

At this point we have a great docset that’s built automatically. Time to share it with the world!


Submitting the Docset

In the final step, we’ll submit our docset to Dash’s user-contributed repository so other people can download it comfortably from Dash’s GUI. Conveniently, Dash uses a concept for the whole process that’s probably familiar to every open-source aficionado: GitHub pull requests.

The first step is checking the Docset Contribution Checklist. Fortunately we – or in some cases doc2dash – have already taken care of everything!

So let’s move right along, and fork the https://github.com/Kapeli/Dash-User-Contributions repo and clone it to your computer.

First, you have to copy the Sample_Docset directory into docsets and rename it while doing so. For me the command line is therefore:

$ cp -a Sample_Docset docsets/structlog

Let’s enter the directory with cd docsets/structlog and take it from there.

The main step is adding the docset itself – but as a gzipped tar file. The contribution guide even gives us the template for creating it. In my case the command line is:

$ tar --exclude='.DS_Store' -cvzf structlog.tgz structlog.docset

You may have noticed that I’ve already done the tar-ing in my tox file, so I just have to copy it over:

$ cp ~/FOSS/structlog/structlog.tgz .

It also wants the icons in addition to what’s in the docset, so I copy them from the docset:

$ cp ~/FOSS/structlog/structlog.docset/icon* .

Next, it would like us to fill in metadata in the docset.json file, which is straightforward in my case:

    "name": "structlog",
    "version": "22.1.0",
    "archive": "structlog.tgz",
    "author": {
        "name": "Hynek Schlawack",
        "link": "https://github.com/hynek"
    "aliases": []

Finally, it wants us to write some documentation about who we are and how to build the docset. After looking at other examples, I’ve settled on the following:

# structlog


Maintained by [Hynek Schlawack](https://github.com/hynek/).

## Building the Docset

### Requirements

- Python 3.10
- [*tox*](https://tox.wiki/)

### Building

1. Clone the [*structlog* repository](https://github.com/hynek/structlog).
2. Check out the tag you want to build.
3. `tox -e docset` will build the documentation and convert it into `structlog.docset` in one step.

The tox trick is paying off – I don’t have to explain Python packaging to anyone!

Don’t forget to delete stuff from the sample docset that we don’t use:

$ rm -r versions Sample_Docset.tgz

We’re done! Let’s check in our changes:

$ git checkout -b structlog
$ git add docsets/structlog
$ git commit -m "Add structlog docset"
[structlog 33478f9] Add structlog docset
 5 files changed, 30 insertions(+)
 create mode 100644 docsets/structlog/README.md
 create mode 100644 docsets/structlog/docset.json
 create mode 100644 docsets/structlog/icon.png
 create mode 100644 docsets/structlog/icon@2x.png
 create mode 100644 docsets/structlog/structlog.tgz
$ git push -u

Looking good – time for a pull request!

A few hours later:

Our contributed structlog docset inside Dash!

Big success: everyone can download the structlog Documentation Set now, which concludes our little tutorial!


I hope I have both piqued your interest in API documentation browsers and demystified the creation of your own documentation sets. My goal is to turbocharge programmers who – like me – are overwhelmed by all the packages they have to keep in mind while getting stuff done.

My biggest hope, though, is that this article inspires someone to help me add more formats to doc2dash, such that even more programmers get to enjoy the bliss of API documentation at their fingertips.

I’ve done a terrible job at promoting doc2dash in the past decade and I hope the next ten years will go better!
