
Left to Right Programming

1 Comment
2025-08-17

Programs Should Be Valid as They Are Typed


I don’t like Python’s list comprehensions:

text = "apple banana cherry\ndog emu fox"
words_on_lines = [line.split() for line in text.splitlines()]

Don’t get me wrong, declarative programming is good. However, this syntax has poor ergonomics. Your editor can’t help you out as you write it. To see what I mean, let’s walk through typing this code.

words_on_lines = [l

Ideally, your editor would be able to autocomplete line here. Your editor can’t do this because line hasn’t been declared yet.

words_on_lines = [line.sp

Here, our editor knows we want to access some property of line, but since it doesn’t know the type of line, it can’t make any useful suggestions. Should our editor flag line as a non-existent variable? For all it knows, we might have meant to refer to some existing lime variable.

words_on_lines = [line.split() for line in

Okay, now we know that line is the variable we’re iterating over. Is split() a method that exists for line? Who knows!

words_on_lines = [line.split() for line in text.splitlines()]

Ah! Now we know the type of line and can validate the call to split(). Notice that since text had already been declared, our editor is able to autocomplete splitlines().

This sucked! If we didn’t know what the split() function was called and wanted some help from our editor, we’d have to write

words_on_lines = [_ for line in text.splitlines()]

and go back to the _ to get autocomplete on line.sp


You deserve better than this.

To see what I mean, let’s look at a Rust example that does it better.

let text = "apple banana cherry\ndog emu fox";
let words_on_lines = text.lines().map(|line| line.split_whitespace());

If you aren’t familiar with Rust syntax, |argument| result is an anonymous function equivalent to function myfunction(argument) { return result; }

Here, your program is constructed left to right. The first time you type line is the declaration of the variable. As soon as you type line. your editor is able to give you suggestions of the methods available on it, split_whitespace() among them.

This is much more pleasant. Since the program is always in a somewhat valid state as you type it, your editor is able to guide you towards the Pit of Success.


There’s a principle in design called progressive disclosure. The user should only be exposed to as much complexity as is necessary to complete a task. Additionally, complexity should naturally surface itself as it becomes relevant to the user. You shouldn’t have to choose a font family and size before you start typing into Word, and options to change text wrapping around images should appear when you add an image.

In C, you can’t have methods on structs. This means that any function that could be myStruct.function(args) has to be function(myStruct, args).

Suppose you have a FILE *file and you want to get its contents. Ideally, you’d be able to type file. and see a list of every function that is primarily concerned with files. From there you could pick read and get on with your day.

Instead, you must know that functions related to FILE * tend to start with f, and when you type f the best your editor can do is show you all functions ever written that start with an f. From there you can eventually find fread, but you have no confidence that it was the best choice. Maybe there was a more efficient read_lines function that does exactly what you want, but you’ll never discover it by accident.

In a more ideal language, you’d see that a close method exists while you’re typing file.read. This gives you a hint that you need to close your file when you’re done with it. You naturally came across this information right as it became relevant to you. In C, you have to know ahead of time that fclose is a function that you’ll need to call once you’re done with the file.
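Concretely, reading a file in C looks something like the sketch below (the filename is arbitrary, and the file.read(...) form in the comments is a hypothetical method-style API, not something C offers):

#include <stdio.h>

int main(void) {
    FILE *file = fopen("data.txt", "rb");
    if (!file)
        return 1;

    char buf[256];
    /* Nothing about typing "file" surfaced this function; you had to know the
       f-prefix convention, and that the stream goes in the last argument.
       A method-style API would let you discover it as file.read(buf). */
    size_t n = fread(buf, 1, sizeof buf, file);
    printf("read %zu bytes\n", n);

    /* Likewise, typing "file" never hinted that fclose exists; you just have
       to know to call it when you are done. */
    fclose(file);
    return 0;
}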


C is not the only language that has this problem. Python has plenty of examples too. Consider the following Python and JavaScript snippets:


text = "lorem ipsum dolor sit amet"
word_lengths = map(len, text.split())

text = "lorem ipsum dolor sit amet"
wordLengths = text.split(" ").map(word => word.length)

While the Python version gets some points, the functions are not discoverable. Is string length len, length, size, count, num, or something else? Is there even a global function for length? You won’t know until you try all of them.

In the JavaScript version, you see length as soon as you type word.l. There is less guesswork for what the function is named. The same is true for the map. When you type .map, you know that this function is going to work with the data you have. You aren’t going to get some weird error because the map function actually expected some other type, or because your language actually calls this function something else.


While the Python code in the previous example is still readable, it gets worse as the complexity of the logic increases. Consider the following code that was part of my 2024 Advent of Code solutions.

len(list(filter(lambda line: all([abs(x) >= 1 and abs(x) <= 3 for x in line]) and (all([x > 0 for x in line]) or all([x < 0 for x in line])), diffs)))

Yikes. You have to jump back and forth between the start and end of the line to figure out what’s going on. “Okay so we have the length of a list of some filter which takes this lambda… is it both of these conditions or just one? Wait which parenthesis does this go with…”

In JavaScript:

diffs.filter(line => 
    line.every(x => Math.abs(x) >= 1 && Math.abs(x) <= 3) &&
    (line.every(x => x > 0) || line.every(x => x < 0))
).length;

Ah, okay. We have some list of diffs, that we filter down based on two conditions, and then we return the number that pass. The logic of the program can be read from left to right!


All of these examples illustrate a common principle:

Programs should be valid as they are typed.

When you’ve typed text, the program is valid. When you’ve typed text.split(" "), the program is valid. When you’ve typed text.split(" ").map(word => word.length), the program is valid. Since the program is valid as you build it up, your editor is able to help you out. If you had a REPL, you could even see the result as you type your program out.
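For example, in a Node (or browser) console, each prefix of that chain is already a complete expression you can evaluate. Assuming text holds the string from the earlier snippet, you’d see something like:

> text
'lorem ipsum dolor sit amet'
> text.split(" ")
[ 'lorem', 'ipsum', 'dolor', 'sit', 'amet' ]
> text.split(" ").map(word => word.length)
[ 5, 5, 5, 3, 4 ]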

Make good APIs!


Read the whole story
GaryBIshop
1 day ago
reply
True

3D Layered Text: The Basics

1 Comment

Recently, a client asked me to create a bulging text effect. These are exactly the kinds of creative challenges I live for. I explored several directions (JavaScript solutions, SVG filters), but then I remembered the concept of 3D layered text. With a bit of cleverness and some advanced CSS, I managed to get a result I’m genuinely proud of.

Visually, it’s striking, and it’s also a perfect project to learn all sorts of valuable CSS animation techniques. From the fundamentals of layering, through element indexing, to advanced background-image tricks. And yes, we’ll use a touch of JavaScript, but don’t worry about it right now.

There is a lot to explore here, so this article is actually the first of a three-part series. In this chapter, we will focus on the core technique. You will learn how to build the layered 3D text effect from scratch using HTML and CSS. We will cover structure, stacking, indexing, perspective, and how to make it all come together visually.

In chapter two, we will add movement. Animations, transitions, and clever visual variations that bring the layers to life.

In chapter three, we will introduce JavaScript to follow the mouse position and build a fully interactive version of the effect. This will be the complete bulging text example that inspired the entire series.

3D Layered Text Article Series

  1. The Basics (you are here!)
  2. Motion and Variations (coming August 20)
  3. Interactivity and Dynamism (coming August 22)

The Method

Before we dive into the text, let’s talk about 3D. CSS actually allows you to create some wild three-dimensional effects. Trust me, I’ve done it. It’s pretty straightforward to move and position elements in a 3D space, and have full control over perspective. But there’s one thing CSS doesn’t give us: depth.

If I want to build a cube, I can’t just give an element a width, a height, and a depth. There is no depth, it doesn’t work that way. To build a cube or any other 3D structure in CSS, we have two main approaches: constructive and layered.

Constructive

The constructive method is very powerful, but can feel a bit fiddly, with plenty of transforms and careful attention to perspective. You take a bunch of flat elements and assemble them together, somewhere between digital Lego bricks and origami. Each side of the shape gets its own element, positioned and rotated precisely in the 3D space. Suddenly, you have a cube, a pyramid, or any other structure you want to create.

And the results can be super satisfying. There’s something unique about assembling 3D objects piece by piece, watching flat elements transform into something with real presence. The constructive method opens up a world where you can experiment, improvise, and invent new forms. You could even, for example, build a cute robot bouncing on a pogo stick.

Layered

But here we’re going to focus on the layered method. This approach isn’t about building a 3D object out of sides or polygons. Instead, it’s all about stacking multiple layers, sometimes dozens of them, and using subtle shifts in position and color to create the illusion of depth. You’re tricking the eye into seeing volume and bulges where there’s really just a clever pile of flat elements.

This technique is super flexible. Think of a cube of sticky memo papers, but instead of squares, the papers are cut to shape your design. It’s perfect for text, 3D shapes, and UI elements, especially with round edges, and you can push it as far as your creativity (and patience) will take you.

Accessibility note: Keep in mind that this method can easily become a nightmare for screen reader users, especially when applied to text. Make sure to wrap all additional and decorative layers with aria-hidden="true". That way, your creative effects won’t interfere with accessibility, and people using assistive technologies can still have a good experience.

Creating a 3D Layered Text

Let’s kick things off with a basic static example, using “lorem ipsum” as a placeholder (feel free to use any text you want). We’ll start with a simple container element with a class of .text. Inside, we’ll put the original text in a span (it will help later when we want to style this text separately from the layered copies), and another div with a class of “layers” where we’ll soon add the individual layers. (And don’t forget the aria-hidden.)

<div class="text">
  <span>Lorem ipsum</span>
  <div class="layers" aria-hidden="true"></div>
</div>

Now that we have our wrapper in place, we can start building out the layers themselves. In chapter three, we will see how to build the layers dynamically with JavaScript, but you can generate them easily with a simple loop in your preprocessor (if you are using one), or just add them manually in the code. Check out the pro tip below for a quick way to do that. The important thing is that we end up with something that looks like this.

<div class="layers" aria-hidden="true">
  <div class="layer"></div>
  <div class="layer"></div>
  <div class="layer"></div>
  <!-- ...More layers -->
</div>

Great, now we have our layers, but they are still empty. Before we add any content, let’s quickly cover how to assign their indexes.

Indexing the layers

Indexing simply means assigning each layer a variable (let’s call it --i) that holds its index. So, the first layer gets --i: 1;, the second gets --i: 2;, and so on. We’ll use these numbers later on as values for calculating each layer’s position and appearance.

There are a couple of ways to add these variables to your layers. You can define the value for each layer using :nth-child in CSS (again, a simple loop in your preprocessor, if you’re using one), or you can do it inline, giving each layer element a style attribute with the right --i value.

.layer {
  &:nth-child(1) { --i: 1; }
  &:nth-child(2) { --i: 2; }
  &:nth-child(3) { --i: 3; }
  /* ... More layers */
}

…or:

<div class="layers" aria-hidden="true">
  <div class="layer" style="--i: 1;"></div>
  <div class="layer" style="--i: 2;"></div>
  <div class="layer" style="--i: 3;"></div>
  <!-- ...More layers -->
</div>

In this example, we will go with the inline approach. It gives us full control, keeps things easy to understand, and avoids dependency between the markup and the stylesheet. It also makes the examples copy friendly, which is great if you want to try things out quickly or tweak the markup directly.

Pro tip: If you’re working in an IDE with Emmet support, you can generate all your layers at once by typing .layer*24[style="--i: $;"] and pressing Tab. The .layer is your class, *24 is the number of elements, attributes go in square brackets [ ], and $ is the incrementing number. But if you’re reading this in the not-so-distant future, you might be able to use sibling-index() and not even need these tricks. In that case, you won’t need to add variables to your elements at all, just swap out var(--i) for sibling-index() in the next code examples.

Adding Content

Now let us talk about adding content to the layers. Each layer needs to contain the original text. There are a few ways to do this. In the next chapter, we will see how to handle this with JavaScript, but if you are looking for a CSS-only dynamic solution, you can add the text as the content of one of the layer’s pseudo elements. This way, you only need to define the text in a single variable, which makes it a great fit for titles, short labels, or anything that might change dynamically.

.layer {
  --text: "Lorem ipsum";
  
  &::before {
    content: var(--text);
  }
}

The downside, of course, is that we are creating extra elements, and I personally prefer to save pseudo elements for decorative purposes, like the border effect we saw earlier. We will look at more examples of that in the next chapter.

A better, more straightforward approach is to simply place the text inside each layer. The downside to this method is that if you want to change the text, you will have to update it in every single layer. But since in this case the example is static and I do not plan on changing the text, we will simply use Emmet, putting the text inside curly braces {}.

So, we will type .layer*24[style="--i: $;"]{Lorem ipsum} and press Tab to generate the layers.

<div class="text">
  Lorem ipsum
  <div class="layers" aria-hidden="true">
    <div class="layer" style="--i: 1;">Lorem ipsum</div>
    <div class="layer" style="--i: 2;">Lorem ipsum</div>
    <div class="layer" style="--i: 3;">Lorem ipsum</div>
    <!-- ...More layers -->
  </div>
</div>

Let’s Position

Now we can start working on the styling and positioning. The first thing we need to do is make sure all the layers are stacked in the same place. There are a few ways to do this as well, but I think the easiest approach is to use position: absolute with inset: 0 on the .layers and on each .layer, making sure every layer matches the container’s size exactly. Of course, we’ll set the container to position: relative so that all the layers are positioned relative to it.

.text {
  position: relative;

  .layers, .layer {
    position: absolute;
    inset: 0;
  }
}

Adding Depth

Now comes the part that trips some people up, adding perspective. To give the text some depth, we’re going to move each layer along the z-axis, and to actually see this effect, we need to add a bit of perspective.

As with everything so far, there are a few ways to do this. You could give perspective to each layer individually using the perspective() function, but my recommendation is always to apply perspective at the parent level. Just wrap the element (or elements) you want to bring into the 3D world inside a wrapper div (here I’m using .scene) and apply the perspective to that wrapper.

After setting the perspective on the parent, you’ll also need to use transform-style: preserve-3d; on each child of the .scene. Without this, browsers flatten all transformed children into a single plane, causing any z-axis movement to be ignored and everything to look flat. Setting preserve-3d ensures that each layer’s 3D position is maintained inside the parent’s 3D context, which is crucial for the depth effect to come through.

.scene {
  perspective: 400px;
  
  * {
    transform-style: preserve-3d;
  }
}

In this example, I’m using a fairly low value for the perspective, but you should definitely play around with it to suit your own design. This value represents the distance between the viewer and the object, which directly affects how much depth we see in the transformed layers. A smaller value creates a stronger, more exaggerated 3D effect, while a larger value makes the scene appear flatter. This property is what lets us actually see the z-axis movement in action.

Layer Separation

Now we can move the layers along the z-axis, and this is where we start using the index values we defined earlier. Let’s start by defining two custom properties that we’ll use in a moment: --layers-count, which holds the number of layers, and --layer-offset, which is the spacing between each layer.

.text {
  --layers-count: 24;
  --layer-offset: 1px;
}

Now let’s set the translateZ value for each layer. We already have the layer’s index and the spacing between layers, so all we need to do is multiply them together inside the transform property.

.layer {  
  transform: translateZ(calc(var(--i) * var(--layer-offset)));
}
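And, as the pro tip earlier mentioned, if you happen to be in a browser that already supports sibling-index(), the same spacing works without any --i variables at all:

.layer {
  /* sibling-index() returns this element's 1-based position among its siblings,
     so it can stand in for the inline --i values. */
  transform: translateZ(calc(sibling-index() * var(--layer-offset)));
}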

This feels like a good moment to stop and look at what we have so far. We created the layers, stacked them on top of each other, added some content, and moved them along the z-axis to give them depth. And this is where we’re at:

If you really try, and focus hard enough, you might see something that kind of looks like 3D. But let’s be honest, it does not look good. To create a real sense of depth, we need to bring in some color, add a bit of shadow, and maybe rotate things a bit for a more dynamic perspective.

Forging Shadows

Sometimes we might want (or need) to use the value of --i as is, like in the last snippet, but for some calculations, it’s often better to normalize the value. This means dividing the index by the total number of layers, so we end up with a value that ranges from 0 to 1. By normalizing, we keep our calculations flexible and proportional, so the effect remains balanced even if the number of layers changes.

.layer {
  --n: calc(var(--i) / var(--layers-count));
}

Now we can adjust the color for each layer, or more precisely, the brightness of the color. We’ll use the normalized value for the lightness channel of a simple HSL function, and add a touch of saturation with a bluish hue.

.layer {
  color: hsl(200 30% calc(var(--n) * 100%));
}

Gradually changing the brightness between layers helps create a stronger sense of depth in the text. And without it, you risk losing some of the finer details.

Second, remember that we wrapped the original text in a span so we could style it? Now is the time to use it. Since this text sits on the bottom layer, we want to give it a darker color than the rest. Black works well here, as it does in most cases, although in the next chapter we will look at examples where it actually needs to be transparent.

span {
  color: black;
  text-shadow: 0 0 0.1em #003;
}

Final Touches

Before we wrap this up, let us change the font. This is of course a matter of personal taste or brand guidelines. In my case, I am going with a bold, chunky font that works well for most of the examples. You should feel free to use whatever font fits your style.

Let us also add a slight rotation to the text, maybe on the x-axis, so the lettering appears at a better angle:

.text {
  font-family: Montserrat, sans-serif;
  font-weight: 900;
  transform: rotateX(30deg);
}

And there you have it, combining all the elements we’ve covered so far: the layers, indexes, content, perspective, positioning, and lighting. The result is a beautiful, three-dimensional text effect. It may be static for now, but we’ll take care of that soon.
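If it helps to see it all together, here is a quick recap of the chapter’s CSS gathered into one stylesheet (the markup is the .text structure shown earlier, wrapped in a .scene element):

.scene {
  perspective: 400px;

  * {
    transform-style: preserve-3d;
  }
}

.text {
  --layers-count: 24;
  --layer-offset: 1px;

  position: relative;
  font-family: Montserrat, sans-serif;
  font-weight: 900;
  transform: rotateX(30deg);

  span {
    color: black;
    text-shadow: 0 0 0.1em #003;
  }

  .layers, .layer {
    position: absolute;
    inset: 0;
  }

  .layer {
    --n: calc(var(--i) / var(--layers-count));
    transform: translateZ(calc(var(--i) * var(--layer-offset)));
    color: hsl(200 30% calc(var(--n) * 100%));
  }
}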

Wrapping Up

At this point, we have a solid 3D text effect built entirely with HTML and CSS. We covered everything from structure and indexing to layering, depth, and color. It may still be static, but the foundation is strong and ready for more.

In the next chapters, we are going to turn things up. We will add motion, introduce transitions, and explore creative ways to push this effect further. This is where it really starts to come alive.

3D Layered Text Article Series

  1. The Basics (you are here!)
  2. Motion and Variations (coming August 20)
  3. Interactivity and Dynamism (coming August 22)

3D Layered Text: The Basics originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Read the whole story
GaryBIshop
1 day ago
reply
Wow!

Cheating on Quantum Computing Benchmarks

2 Comments

Peter Gutmann and Stephan Neuhaus have a new paper—I think it’s new, even though it has a March 2025 date—that makes the argument that we shouldn’t trust any of the quantum factorization benchmarks, because everyone has been cooking the books:

Similarly, quantum factorisation is performed using sleight-of-hand numbers that have been selected to make them very easy to factorise using a physics experiment and, by extension, a VIC-20, an abacus, and a dog. A standard technique is to ensure that the factors differ by only a few bits that can then be found using a simple search-based approach that has nothing to do with factorisation…. Note that such a value would never be encountered in the real world since the RSA key generation process typically requires that |p-q| > 100 or more bits [9]. As one analysis puts it, “Instead of waiting for the hardware to improve by yet further orders of magnitude, researchers began inventing better and better tricks for factoring numbers by exploiting their hidden structure” [10].

A second technique used in quantum factorisation is to use preprocessing on a computer to transform the value being factorised into an entirely different form or even a different problem to solve which is then amenable to being solved via a physics experiment…

Lots more in the paper, which is titled “Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog.” He points out the largest number that has been factored legitimately by a quantum computer is 35.
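To make the “simple search-based approach” concrete: when the two prime factors of a semiprime are nearly equal, classical Fermat factorisation finds them almost instantly, which is why such numbers say nothing about real RSA moduli. A minimal Python sketch (mine, not from the paper):

from math import isqrt

def fermat_factor(n):
    # Search for a such that a*a - n is a perfect square b*b; then
    # n = (a - b) * (a + b). The number of steps grows with the gap between
    # the factors, so "benchmark" semiprimes whose factors differ in only a
    # few bits fall out almost immediately.
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(101 * 103))  # (101, 103), found on the first step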

I hadn’t known these details, but I’m not surprised. I have long said that the engineering problems between now and a useful, working quantum computer are hard. And by “hard,” we don’t know if it’s “land a person on the surface of the moon” hard, or “land a person on the surface of the sun” hard. They’re both hard, but very different. And we’re going to hit those engineering problems one by one, as we continue to develop the technology. While I don’t think quantum computing is “surface of the sun” hard, I don’t expect them to be factoring RSA moduli anytime soon. And—even there—I expect lots of engineering challenges in making Shor’s Algorithm work on an actual quantum computer with large numbers.

Read the whole story
GaryBIshop
19 days ago
reply
Wow! Benchmarks do encourage cheating
1 public comment
jepler
19 days ago
reply
A brutal but seemingly accurate takedown of current quantum factoring claims. Now, I didn't even realize that nontrivial factorizations had been claimed (apparently I'm out of the loop), but it turns out the numbers or their prime factors had patterns that made them easy to factor.

I don't fully agree with the paper's conclusion: Yes, it's great to set some requirements for the numbers used to show quantum factoring. But instead of setting out new criteria, use the criteria commonly in place for RSA-based systems [except for bit length] (FIPS); and/or use "the RSA numbers" (https://en.wikipedia.org/wiki/RSA_numbers), some specific values published back in 1991.

Also worth knowing: At least one of the previous factorization claims was itself an April Fools joke from 2020 (https://algassert.com/post/2000), though as far as I know the D-Wave result is presented as super serious, and the 2025 D-Wave paper (https://www.sciopen.com/article/10.26599/TST.2024.9010028) is relatively clear in stating that they are factoring a special class of number, albeit one that a FIPS-compliant RSA key generator would never have chosen.

Python program to factor the number shown in the D-Wave paper: https://gist.github.com/jepler/f99eb8f9a43221b4d6689010b95928c2 runtime is less than 30ms.
Earth, Sol system, Western spiral arm
jepler
19 days ago
> In 2023, Jin-Yi Cai showed that in the presence of noise, Shor's algorithm fails asymptotically almost surely for large semiprimes that are products of two primes in OEIS sequence A073024.[5] These primes p have the property that p − 1 has a prime factor larger than p^(2/3), and have a positive density in the set of all primes
jepler
19 days ago
ok that's a very cool result: numbers that will make quantum factoring almost always fail. And there are plenty of them to be had.

Saturday Morning Breakfast Cereal - Number One

2 Comments and 4 Shares


Click here to go see the bonus panel!

Hovertext:
Available in the SMBC store now! Link is below. (PS: Credit to Richard McElreath for the language and graph on the mug)


Today's News:

Looking for dinnerware that expresses your quality as a parent clearly? This is it! Available in 11 oz. and 15 oz. with your preferred work description (mom, dad, or parent). Get it in the SMBC store HERE.

Credit goes to Richard McElreath for the language and graph on the mug.

Read the whole story
GaryBIshop
23 days ago
reply
Ha!
1 public comment
jlvanderzwan
23 days ago
reply
I actually kind of want one

What is Chronic Pain, Really? (#2)

1 Comment

Missed Part 1? Read it here. Or keep scrolling for Part 2.

Hey there!

Before we get into this blog I just want to say an absolutely enormous thank you to everyone who read the first article a fortnight ago. I really cannot express how mind-blown and heartwarmed I have been by the support and vulnerability shown. So thank you and I hope you find the rest of the series interesting 💙

Now, let’s dig into part 2 and answer - what is chronic pain?

A weekend “getaway”

One year ago from today I went on a weekend getaway which changed my relationship with pain.

In a last ditch effort to get a grip on my latest chronic pain struggle in both of my hands, forearms and elbows, I resorted to big expensive magnets. Specifically the MRI machine kind. I wanted to know, once and for all, what was at fault in my body. Why else would I be in pain if not for a faulty body? Or so I thought at the time.

On the Saturday I had three scans done on my right arm. Having your arm outstretched in front of you for 30 minutes is not quite the embodiment of relaxation. But, keen for more, I returned the next day for my left arm.

So you might be wondering - why am I telling you about the time I spent lying in what seemed like a $2 million toilet roll?

Leading up to this, I’d briefly heard about the brain’s role in pain - but at most appointments I was told there was something wrong with my body, and that’s why I was feeling pain. For my hands/arms in particular - it was “lack of grip strength”, “tight muscles” and apparent “tendinopathy”.

Well, as it turned out, the scans revealed there was nothing physically wrong with either of my arms…

An aside, I did get lucky here. As we’ll soon see, many people have damage which doesn’t cause pain - so scans can be misleading.

But first, let’s nerd out for a moment on what was happening to me.

Something doesn’t need to be broken to hurt (Pain ≠ Damage)

Until the 90’s, the leading explanation for pain was physical damage or injury. Think things like muscle, tendon, nerve or inflammatory issues.

In the decades since, there has been a gradual shift toward pain being looked at through three key components. Biological (i.e. the former model), psychological (what happens upstairs), and social (our environment). This is known as the biopsychosocial model.

With the realisation that pain is often caused by non-biological factors, in 2017 the International Association for the Study of Pain (IASP) formally recognised a new category of pain - nociplastic pain. In total there are now 3 recognised types of pain, each of which can be involved with chronic pain:

  • Nociceptive: What you experience when you pull a muscle at the gym or give yourself a paper cut.

  • Neuropathic: Caused by damage to the nerves in your body. Nerves are those small wires which send signals from your brain to let you scroll down this article and feel the sensation of your mouse/touchscreen.

  • Nociplastic: Simply put, this is pain which is not caused by damage in the body. It also goes by other names including neuroplastic pain, TMS and more.

Back to my weekend getaway - my body wasn’t damaged, and my nerves were fine. Nothing was broken. So I was experiencing nociplastic pain!

But here’s the interesting thing - not only can pain exist without damage - often there is damage without any pain at all.

Just because something’s broken, doesn’t mean it hurts (Damage ≠ Pain)

You might be thinking - if I find I have damage, surely that has to be the cause of my pain?

Well, a 2015 study looking at over 3,000 people’s spines found something wild:

  • 52% of pain-free 30 year olds had disk degeneration, and

  • 50% of pain-free 40 year olds had at least one disk bulge!

This indicates that damage, likewise, does not mean pain!

In my journey of working out what was going on, I came across a tear in the back of one of my shoulders - probably from teenage Dan lifting too much weight at the gym. Yet, I had no pain or symptom in that part of my body. In fact a whole study was done on this exact type of shoulder damage, and results were similar to the spine study from before - most of the people (note, who were older) had no pain in spite of their shoulder damage!

So while it can be helpful to have scans done - particularly to rule out serious structural causes like fractures, significant nerve impingement or cancer - it’s worth knowing not everything found is necessarily the cause of pain. Even if it’s precisely where symptoms are felt.

Let’s look at a common culprit we met earlier (and the source of my pain) - nociplastic pain.

So I’m made out of plastic?!

Don’t worry, you’re not turning into a tupperware container.

The term “plastic” in nociplastic refers to neuroplasticity - your brain’s incredible ability to learn new things and form new habits. In the context of pain, this plasticity can often lead to it becoming chronic. And it’s a big deal…

Nociplastic pain is estimated to affect 25-75% of chronic pain sufferers. Either directly (e.g. my situation where there was no clear damage) or alongside a structural cause. And given roughly 1 in 5 adults globally (not just in Australia!) have chronic pain, nociplastic chronic pain likely accounts for the suffering of approximately 600 million adults. That’s getting close to the population of Europe!

To help illustrate how common this is - fibromyalgia, migraine, chronic fatigue syndrome (CFS) and irritable bowel syndrome (IBS) are just some conditions where nociplastic chronic pain plays a big role. It’s not just joint or tissue pain.

Given its endemic prevalence, this blog will mainly focus on nociplastic pain. From now on when you see the phrase “chronic pain”, it will mean chronic nociplastic pain (i.e. not tissue/nerve damage) unless otherwise called out.

Let’s explore some of the irregular things many of these 600 million regularly encounter, through some more personal examples.

Hypersensitivity: Feeling cooked whenever I’d cook

When I was living with chronic pain, things would just hurt more - and more often. Anything from light touch or exercise would lead to me not feeling well in those parts of my body.

Whenever my grandfather and I would shake hands, something we’ve done since I remember having hands, I would feel a bruising/fatigue sensation in my hand and forearm.

Whenever I would cook dinner, my wrists and elbows would (figuratively 👨‍🍳) get cooked.

Whenever I was typing away on my computer for more than an hour, (surprise!) my fingers would feel numb, bruised, tingly or like they were being lightly shocked.

In short, my brain’s pain processing abilities had become hypersensitive! We’ll dig into the details of this fascinating concept in the next article.

Widespread symptoms: The ol’ block of Swiss cheese

I had symptoms in a variety of places. Sometimes in the span of a day I would go from having pain in my upper limbs to pain in my achilles. Sometimes it would leave my legs alone and bounce between both of my arms.

It’s not uncommon for pain to move around to different parts of the body when someone is experiencing nociplastic pain. For conditions like fibromyalgia, part of the diagnosis criteria is having pain in multiple parts of the body.

While this sounds strange and should have been cause for me to realise earlier this wasn’t a normal tissue (or nerve) issue, when you have chronic pain the irregular becomes the regular.

Emotions: Pain as a language

Imagine for a moment you’re about to give a speech to a large room of people. Or you’re feeling a bit annoyed at your housemate who hasn’t cleaned their dishes in 3 days. In the first case you might feel a pain in your gut or like you want to break the 100m sprint world record to get to the bathroom. In the second it might be tension in your chest or neck. In these situations, physical discomfort can be a sign that you (and therefore your mind) are not ok with what’s happening.

Similar to this, during my chronic pain journey I would often notice flare-ups of worse symptoms around stressful moments. For example if I had a conflict with friends or if I was running late to something I cared about.

Even now, if I’m feeling overwhelmed or anxious (like when I released blog post #1 recently), I notice my body wanting to communicate with familiar sensations. However, because I now understand this connection I am able to respond instead of react.

As emotions and psychological distress are inseparable from pain, they will feature often in the next two articles.

Let’s look at a person you might know who went through this journey

No not me. Someone famous.

Some of you may know the TV shows ‘Seinfeld’ or ‘Curb Your Enthusiasm’. It turns out the creator of these shows, Larry David, suffered from chronic back pain and was able to recover by recognising the role his brain was playing.

Skip to 1:28 in the video clip below 📼

That’s it for now!

Next time, we’ll be opening up the hood of our car - I mean brain (drops spanner) - to understand the causes of chronic pain. We’ll then be in a great position to explore how to recover!

See you then ⛵️


Thanks for reading 💙 Please subscribe to receive new posts and support my work!


Hey, I’m Dan. I had chronic pain for several years, and now I’m writing to spread awareness about this condition as well as what I did to recover. I’m not a medical practitioner so please consult with one as part of your journey.

Got any questions or feedback? I’d love to hear from you!

Read the whole story
GaryBIshop
33 days ago
reply
Lots I can relate to here.

“Reading Rainbow” was created to combat summer reading slumps

1 Comment
At left, Tony Buttino prepares Western New York book reviewers (from left to right: Stephanie, Percy and Afrika) with production assistant Pam Johnson at right. Courtesy of WNED PBS, Buffalo, NY

For generations of readers, four words convey what it means to love books: “butterfly in the sky.” These are the opening lyrics of the famous theme song for the beloved children’s program “Reading Rainbow,” hosted by actor LeVar Burton. PBS premiered the show in 1983, and it ran for 23 years, received 26 Emmy Awards, and won the hearts of millions. The series, like the song, encouraged young readers to “take a look,” because whatever knowledge they sought, “it’s in a book.”

As those who watched the show know, “butterfly in the sky” became the words that always preceded a good story. The words are so significant to the legacy of “Reading Rainbow” that they serve as the namesake for the 2022 documentary about the cult classic program, Butterfly in the Sky: The Story of Reading Rainbow.

In recent years, numerous reports have warned of a national literacy crisis, for children and adults alike. “Reading Rainbow” is the essential example of a show about the importance of literacy, reading comprehension and the joy of storytelling, loved by viewers of all ages and from all backgrounds.

“In terms of child literacy rates going down, that was really what ‘Reading Rainbow’ was designed in response to, and in particular the ‘summer slide,’ they call it, which is when students are out of school, their literacy levels slide backward,” says Ryan Lintelman, entertainment curator at the Smithsonian’s National Museum of American History.

Lintelman says plenty of research confirms that, when students come back to school in the fall, teachers need to take some time to bring them back up to speed to the levels of literacy and reading comprehension that they were at when they ended the previous school year. “Reading Rainbow” was made to bridge this reading gap for children and improve their reading skills—and to be a show that kids wanted to watch.

It was known that kids were watching TV all summer, Lintelman says, “So why not do something with it?”

Butterfly in the Sky - Official Trailer (2024) Reading Rainbow Documentary

Fun fact: When did "Reading Rainbow" begin?

The show first aired in the summer of 1983 on PBS. It produced 23 seasons before it was canceled in 2006.

On each half-hour episode of “Reading Rainbow,” Burton introduced the real-world subject matter of a children’s book through field trips, from going to the barbershop to visiting an orchestra in a concert hall, and then the book itself was read by a performer. Ruby Dee, Helen Mirren, Keith David, Ed Harris, Gregory Hines, Jeff Bridges and Pete Seeger are just a few of the actors and musicians who graced the show with their voices. The chosen books are childhood staples, such as If You Give a Mouse a Cookie by Laura Numeroff, Amazing Grace by Mary Hoffman and Stellaluna by Janell Cannon. The program mixes the calm, steady presence of Burton with the warmth and coziness of getting wrapped up in a book. And at the end of the show, kids review books they’ve read.

For over two decades, the show achieved its goal of not only reading to kids, but also making kids want to read. In 1997, the Corporation for Public Broadcasting released a study about the use of television and video in classroom instruction. Current, a news outlet covering public media, reported that “teachers responding to the survey rated public TV programs as the best they’d used for educational purposes in 1996-97. ‘Reading Rainbow’ was named by a higher percentage of teachers than any other program.”

“I think it’s been shown to be super successful,” Lintelman says, adding that the show inspired a lifelong love of reading in its audience. “People who have been interviewed for this documentary talk about how ‘Reading Rainbow’ was such an important part of their lives and the entire generation. I count myself among them, of people who grew up watching that show and learning about new books through it.”

The curator notes that “Reading Rainbow” was unique in how it spoke about books, showing both the featured book’s illustrations and Burton’s real adventures seeing the people and places represented by the books. The show also made its host, Burton, who had been known for his role as Kunta Kinte in the 1970s miniseries “Roots,” into an icon of education. He later went on to star as Geordi La Forge in another beloved show, “Star Trek: The Next Generation,” which ran from 1987 to 1994.

Host LeVar Burton displays the book Gila Monsters Meet You at the Airport, which was featured in the original pilot episode of "Reading Rainbow." Courtesy of WNED PBS, Buffalo, NY

“Hearing the story from the people who made the show and hearing their evident passion and commitment to doing this children’s television show, every person feels like they were making a significant contribution to culture,” Burton told the Associated Press in 2024. “And they care deeply about what we were doing. I get it, I understand the importance that y’all have placed on this part of your lives, your childhoods. And I’m proud, genuinely proud, to be a part of your lives in this way.”

During the AP interview, Burton was asked if he ever thought that his most meaningful legacy, despite all his acting work in film and television, would be tied to “Reading Rainbow.”

“I’m good with that,” Burton said. “As a son of an English teacher, as a Black man, coming from a people for whom it was illegal to know how to read, not that long ago, I’m good with that.”

The story of “Reading Rainbow” begins in Buffalo, New York, as a WNED-TV program idea before it made its debut on PBS. Three people who worked on and helped make the show, Tony Buttino Sr., Barbara Irwin and Pam Johnson, collaborated on a 2024 book about the process called Creating Reading Rainbow: The Untold Story of a Beloved Children’s Series.

Working with educators and librarians on summer reading programs, Buttino led WNED-TV’s Educational Services Department at the time. Johnson was the liaison with the executives working on the show. And Irwin was an intern in that department at WNED, working alongside Buttino and Johnson. She worked on community outreach and on production of the book review segments of the show that were done in Buffalo for the first several years.

Buttino, as director of the department, led in the creation of the show. In searching for a “Reading Rainbow” host, Irwin says, “Tony knew that he wanted someone like Mister Rogers. He wanted a real person, somebody who could really communicate well with children.”

Burton was mostly known then for being on “Roots,” Irwin says. The team at WNED recognized his capability as a magnetic speaker and reached out to him about the job, which he accepted. They had found their host.

Today, Buttino is retired from his role at WNED, Irwin is retired from her work as a communications scholar at Canisius University, and Johnson is the executive director of the Corporation for Public Broadcasting’s Ready to Learn education initiative. The trio speaks fondly of their time working on “Reading Rainbow.”

From left to right: Barbara Irwin, Tony Buttino and Pam Johnson. Courtesy of Irwin

Irwin says the first proposal that WNED submitted to the Corporation for Public Broadcasting for the show was for one season’s worth of episodes. That proposal was rejected. But the team was told to work on and submit a proposal for a pilot episode. The pilot episode was funded and then shot in November 1981, says Irwin. After the pilot, the team did more research to put together a new proposal to get the show made.

WNED partnered with Great Plains National in Lincoln, Nebraska, to submit the new proposal to the Kellogg Company and the Corporation for Public Broadcasting. That proposal was a success. In Buffalo and Nebraska, test audiences got to see the show before it made its national debut on PBS.

“This was intended to be a summer series,” Buttino says. But, of course, it ended up transcending that. “People loved the program.”

Like the fans who watched “Reading Rainbow,” Irwin, Buttino and Johnson have their own favorite episodes and books among the 155 total.

The old butterfly logo for "Reading Rainbow." Courtesy of WNED PBS, Buffalo, NY

“The book that I really love is called Enemy Pie,” Johnson says. The 2000 book by Derek Munson tells the tale of a boy befriending a peer who he didn’t like at first after eating pie with him. “It’s a beautiful theme about welcoming in others,” she says.

Buttino and Irwin both appreciate Tight Times, and Irwin adds Bringing the Rain to Kapiti Plain as another special book, read by James Earl Jones on the show.

“Stories and storytelling are so important to the human experience,” Irwin says.

In the case of “Reading Rainbow,” those stories opened diverse worlds of possibilities for children. “Offering these opportunities to meet different kinds of people, to travel to different places, to experience different things, to go back in time, to go into the future, that’s the power of storytelling,” Irwin says. “And no series did it any better than ‘Reading Rainbow’ did.”



Read the whole story
GaryBIshop
33 days ago
reply
Good times!