
The dangerous playgrounds of the 1900s through vintage photographs

Hiawatha Playground, 1912.

If it seems like today’s kids have gotten “softer” compared to the kids decades ago, perhaps it’s because playgrounds have gotten softer as well. Thanks to state laws and personal injury lawyers, the landscape of the typical playground has changed a lot over the years, making it a safer and more “educationally interactive” environment.

On the other hand, maybe those rough-and-tumble recreation areas of yesteryear served as an early life lesson that the world was a harsh and unforgiving place.

According to a New York Times article, some researchers question the value of safety-first playgrounds. Even if children do suffer fewer physical injuries — and the evidence for that is debatable — the critics say that these playgrounds may stunt emotional development, leaving children with anxieties and fears that are ultimately worse than a broken bone.

“Children need to encounter risks and overcome fears on the playground,” said Ellen Sandseter, a professor of psychology at Queen Maud University in Norway. “I think monkey bars and tall slides are great. As playgrounds become more and more boring, these are some of the few features that still can give children thrilling experiences with heights and high speed.”

Sometimes, of course, their mastery fails, and falls are a common form of playground injury. But these rarely cause permanent damage, either physically or emotionally.

While some psychologists — and many parents — have worried that a child who suffered a bad fall would develop a fear of heights, studies have shown the opposite pattern: A child who’s hurt in a fall before the age of 9 is less likely as a teenager to have a fear of heights.

By gradually exposing themselves to more and more dangers on the playground, children are using the same habituation techniques developed by therapists to help adults conquer phobias, according to Dr. Sandseter and a fellow psychologist, Leif Kennair, of the Norwegian University of Science and Technology.

Children’s playground, Belle Isle Park, Detroit, Michigan. 1900-1905.

The idea of the playground as a method for imbuing children with a sense of fair play and good manners originated in Germany, where playgrounds were erected in connection with schools. Humanitarians saw playgrounds as the solution to cramped quarters, poor air quality, and social isolation.

This new concept could keep children off the dangerous streets and help them develop their physical health, good habits, socialization skills, and the pleasure of being a child.

The first playground in the USA was built in San Francisco’s Golden Gate Park in 1887. In 1906, the Playground Association of America was formed to promote ideas of playgrounds to communities, including benefits, construction, layout and design, and the conduct and activities to occur on playgrounds.

Girls’ playground, Harriet Island, St. Paul, Minn. 1905.

Broadway Playfield, 1910.

Children in swings, Hamilton Fish Park, New York.

Rings and poles, Bronx Park, New York. 1911.

Playground in New York. 1910-1915.

Playground in New York. 1910-1915.

Czech-American children climbing on monkey bars in a Central Park playground. 1942.

(Photo credit: Library of Congress).


GaryBIshop (2 days ago): Our playgrounds were lots more fun than those I see today.

On the Dangers of Cryptocurrencies and the Uselessness of Blockchain


Earlier this month, I and others wrote a letter to Congress, basically saying that cryptocurrencies are a complete and total disaster, and urging them to regulate the space. Nothing in that letter is out of the ordinary, and it is in line with what I wrote about blockchain in 2019. In response, Matthew Green has written—not really a rebuttal—but “a general response to some of the more common spurious objections…people make to public blockchain systems.” In it, he makes several broad points:

  1. Yes, current proof-of-work blockchains like bitcoin are terrible for the environment. But there are other modes like proof-of-stake that are not.
  2. Yes, a blockchain is an immutable ledger making it impossible to undo specific transactions. But that doesn’t mean there can’t be some governance system on top of the blockchain that enables reversals.
  3. Yes, bitcoin doesn’t scale and the fees are too high. But that’s nothing inherent in blockchain technology—that’s just a bunch of bad design choices bitcoin made.
  4. Blockchain systems can have a little or a lot of privacy, depending on how they are designed and implemented.

There’s nothing on that list that I disagree with. (We can argue about whether proof-of-stake is actually an improvement. I am skeptical of systems that enshrine a “they who have the gold make the rules” system of governance. And to the extent any of those scaling solutions work, they undo the decentralization blockchain claims to have.) But I also think that these defenses largely miss the point. To me, the problem isn’t that blockchain systems can be made slightly less awful than they are today. The problem is that they don’t do anything their proponents claim they do. In some very important ways, they’re not secure. They don’t replace trust with code; in fact, in many ways they are far less trustworthy than non-blockchain systems. They’re not decentralized, and their inevitable centralization is harmful because it’s largely emergent and ill-defined. They still have trusted intermediaries, often with more power and less oversight than non-blockchain systems. They still require governance. They still require regulation. (These things are what I wrote about here.) The problem with blockchain is that it’s not an improvement to any system—and often makes things worse.

In our letter, we write: “By its very design, blockchain technology is poorly suited for just about every purpose currently touted as a present or potential source of public benefit. From its inception, this technology has been a solution in search of a problem and has now latched onto concepts such as financial inclusion and data transparency to justify its existence, despite far better solutions to these issues already in use. Despite more than thirteen years of development, it has severe limitations and design flaws that preclude almost all applications that deal with public customer data and regulated financial transactions and are not an improvement on existing non-blockchain solutions.”

Green responds: “‘Public blockchain’ technology enables many stupid things: today’s cryptocurrency schemes can be venal, corrupt, overpromised. But the core technology is absolutely not useless. In fact, I think there are some pretty exciting things happening in the field, even if most of them are further away from reality than their boosters would admit.” I have yet to see one. More specifically, I can’t find a blockchain application whose value has anything to do with the blockchain part, that wouldn’t be made safer, more secure, more reliable, and just plain better by removing the blockchain part. I postulate that no one has ever said “Here is a problem that I have. Oh look, blockchain is a good solution.” In every case, the order has been: “I have a blockchain. Oh look, there is a problem I can apply it to.” And in no cases does it actually help.

Someone, please show me an application where blockchain is essential. That is, a problem that could not have been solved without blockchain that can now be solved with it. (And “ransomware couldn’t exist because criminals are blocked from using the conventional financial networks, and cash payments aren’t feasible” does not count.)

For example, Green complains that “credit card merchant fees are similar, or have actually risen in the United States since the 1990s.” This is true, but has little to do with technological inefficiencies or existing trust relationships in the industry. It’s because pretty much everyone who can and is paying attention gets 1% back on their purchases: in cash, frequent flier miles, or other affinity points. Green is right about how unfair this is. It’s a regressive subsidy, “since these fees are baked into the cost of most retail goods and thus fall heavily on the working poor (who pay them even if they use cash).” But that has nothing to do with the lack of blockchain, and solving it isn’t helped by adding a blockchain. It’s a regulatory problem; with a few exceptions, credit card companies have successfully pressured merchants into charging the same prices, whether someone pays in cash or with a credit card. Peer-to-peer payment systems like PayPal, Venmo, MPesa, and AliPay all get around those high transaction fees, and none of them use blockchain.

This is my basic argument: blockchain does nothing to solve any existing problem with financial (or other) systems. Those problems are inherently economic and political, and have nothing to do with technology. And, more importantly, technology can’t solve economic and political problems. Which is good, because adding blockchain causes a whole slew of new problems and makes all of these systems much, much worse.

Green writes: “I have no problem with the idea of legislators (intelligently) passing laws to regulate cryptocurrency. Indeed, given the level of insanity and the number of outright scams that are happening in this area, it’s pretty obvious that our current regulatory framework is not up to the task.” But when you remove the insanity and the scams, what’s left?

EDITED TO ADD: Nicholas Weaver is also adamant about this. David Rosenthal is good, too.

GaryBIshop (2 days ago): Well said!

5 public comments:

bronzehedwick (Jersey City, NJ): Crypto is one of the rare cases where if we burn it to the ground it will help our species survive.

pdp68 (Belgium): "This is my basic argument: blockchain does nothing to solve any existing problem with financial (or other) systems. Those problems are inherently economic and political, and have nothing to do with technology. And, more importantly, technology can’t solve economic and political problems. Which is good, because adding blockchain causes a whole slew of new problems and makes all of these systems much, much worse."

chrismo: #tech

ReadLots: If we can just move all of the fraud into the blockchain, maybe then it can have purpose - keeping the scammers busy in crypto and leaving us outside of it alone.

acdha (Washington, DC): Green's “rebuttal” was disappointingly weak — to be honest, I read it expecting the end to be that he'd picked up some lucrative consulting work from a cryptocurrency company.

Redbean 2.0 turned into more than a hobby project

redbean 2.0 release notes

June 16th, 2022 @ justine's web page

redbean is a webserver in a zip executable that runs on six operating systems. The basic idea is if you want to build a web app that runs anywhere, then you download the redbean.com file, put your .html and .lua files inside it using the zip command, and then you've got a hermetic app you can deploy and share. I introduced this web server about a year ago on Hacker News, where it became the third most upvoted hobby project of all time.
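
For instance, packaging a trivial app looks like this (the file names here are hypothetical; any .html and .lua files work):

zip redbean.com index.html app.lua
./redbean.com -v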

Over the last year, we've turned redbean into more than a hobby project. It's grown to become a 1.9mb file that self-hosts a Lua + SQLite development stack. There's builtin MbedTLS support. It does sandboxing. It has argon2 password hashing. It can geolocate IPs with MaxMind. It has a readline-like REPL. You can use it as a Lua shebang interpreter. It has an easy-mode API and a Fullmoon web framework for high-level development. It also has a hard-mode API that provides direct access to Cosmopolitan Libc Unix system calls. You can use Unix on Windows, or even from JavaScript, since redbean is great for spinning up a web GUI in Chrome via localhost. You can also use redbean as a production web server on the public-facing internet. I stand by that statement since I eat my own dogfood. redbean hosts all my websites now, including this one (justine.lol). There's no need for a proxy like nginx; redbean is vertically integrated.

Your redbean supports x86-64 Linux, MacOS, Windows, FreeBSD, NetBSD, or OpenBSD. Visit redbean.dev/2.0.html to download the release binary. redbean is permissively licensed under the ISC license. The source code is available on GitHub. Instructions for building redbean from source on Linux are available at redbean.dev.

curl https://redbean.dev/redbean-demo-2.0.1.com >redbean.com
chmod +x redbean.com
./redbean.com -v

PowerShell users can use:

wget -OutFile redbean.com https://redbean.dev/redbean-demo-2.0.1.com
./redbean.com -v

redbean 2.0 uses the new APE Loader which lets your redbean execute without having to self-modify its header. What happens instead is the ape command will mmap() your redbean into memory. It's just as fast. If APE isn't installed on a system, then the shell script header will extract it automatically. There's shebang support and binfmt_misc support on Linux too. These changes will have an enabling impact for distros and build systems, which previously had difficulties packaging and distributing APE software. For users who want the original behavior, an --assimilate flag is introduced that will turn your redbean into the platform-local ELF or Mach-O format.
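
For example (note that assimilation rewrites the binary in place, so working on a copy, as shown here, is my suggestion rather than part of the release notes):

cp redbean.com redbean-local.com
./redbean-local.com --assimilate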

In addition to helping distributors, the redbean 2.0 release helps self-distributors too. You can now place a .args file in your redbean that specifies the default CLI arguments. This can help make it easier to white-label redbean, especially if it's being used as an alternative to the standard Lua interpreter command.
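
As a hypothetical sketch of such a .args file, with one argument per line (my assumption about the layout, based on the description above):

-p
8080
-v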

redbean 2.0 introduces a Read Eval Print Loop or REPL for short. It's built on the bestline library, since it provides near parity with GNU Readline in terms of features, except it's licensed MIT instead of LGPL, so there's no dynamic linking requirement. redbean can't dynamically link things, since then it wouldn't be a single file. I put a lot of work into creating bestline, a linenoise fork, for that very reason. Here's a short screencast of the redbean repl being used.

Since the video goes by quickly, here's an explanation of what happened. The video starts by running redbean-demo.com -Zv in the terminal. The -v flag increases the logging verbosity. The -Z flag enables system call tracing, so you can monitor all the powerful things you're doing with the new UNIX module. It works similarly to the --strace flag that I blogged about last week under Logging C Functions. Once you see the >: prompt, your redbean REPL is ready to receive commands.

>: Benchmark(unix.clock_gettime)
125     389      594       1

You can call most of the redbean Lua APIs from this shell. If you've defined global variables and functions in the zip file .init.lua then you can call those functions too. The example shown above in the video is of microbenchmarking. In the video you'll notice unix.clock_gettime() takes 125 nanoseconds to run. It's helpful to be able to run one-off live experiments like this, since when I made that video for the sponsors-only pre-release, it helped me realize I could use the Linux vDSO to make unix.clock_gettime() 10x faster!

>: Benchmark(unix.clock_gettime)
17      53      88      1

So 17 nanoseconds is now the performance you can expect in 2.0. You'll also see me computing binary values on the command line, like a SHA-256 digest.

>: Sha256('hello')
",\xf2M\xba_\xb0\xa3\x0e&\xe8;*\xc5\xb9\xe2\x9e\x1b\x16\x1e\\x1f\xa7B^s\x043b\x93\x8b\x98$"

redbean embraces and extends Lua in many ways. For example, the normal Lua command line will print ,�M�_��&�;*Ź�\x1f�B^s3b���$ for the binary value above. I figured if a REPL's input is code, then its output must be code too, since that's how LISP does things. Anything your redbean REPL outputs can usually be copied and pasted back into your scripts.

Code Completion

The next thing you'll see in the video is the new tab completion feature. Like bash, you can press <tab><tab> to see a listing of all available global functions and objects. If you press unix.<tab><tab> then you'll see all the objects and functions available in the unix module.

GNU Emacs Keyboard Shortcuts

Users of GNU Emacs will be delighted to hear that your redbean REPL supports nearly all the common GNU-style keyboard chording shortcuts, including CTRL-R for reverse search. See the keyboard reference for further details.

Monkey Patching

One of the use cases for having a REPL on a live web server is that you can monkey patch code while your server is running. redbean is a forking web server. That means the main process behaves like a master template from which worker processes are cloned. Therefore, anything you change in the REPL will propagate lazily into client connections as new ones roll in, without impacting the connections currently active.
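
As a sketch (assuming your app routes requests through the standard OnHttpRequest hook; the header tweak is purely illustrative), a hot patch typed into the REPL could look like this:

function OnHttpRequest()
   SetHeader('X-Patched', 'true')  -- illustrative change, served to new connections
   Route()                         -- fall through to redbean's normal routing
end

Workers forked for connections that arrive after you press enter will serve the patched handler, while requests already in flight finish on the old code.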

redbean 2.0 introduces optional system call logging. The last thing you'll notice in the REPL video above (but can't actually see) is I fire off a request to redbean from curl. Since we passed -Z we get a nice system call trace. This logging can all be happening seamlessly while you're typing on the REPL.

SYS  15987              7'977 close(4) → 0
SYS  15987             14'055 close(5) → 0
SYS  15987            191'846 read(6, [u"GET /tool/net/demo/index.html HTTP/1.1♪◙"...], 65'536) → 137
I2022-06-13T16:37:34+000400:tool/net/redbean.c:5798:redbean-demo:15987] (req) received 127.0.0.1:57542 HTTP11 GET http://127.0.0.1:8080/tool/net/demo/index.html "" "curl/7.79.1"
SYS  15987            308'231 sigaction(SIGINT, {.sa_handler=0x41da5e, .sa_flags=0x80000000, .sa_mask={}}, [{.sa_handler=0x419305, .sa_flags=0x4000000, .sa_mask={}}]) → 0
V2022-06-13T16:37:34+000063:tool/net/redbean.c:6004:redbean-demo:15987] (rsp) 200 OK
SYS  15987            341'459 sigaction(SIGINT, {.sa_handler=0x419305, .sa_flags=0x4000000, .sa_mask={}}, [NULL]) → 0
SYS  15987            352'506 writev(6, {{u"HTTP/1.1 200 OK♪◙Content-Type: text/html"..., 306}, {u"▼ï◘      ♥", 10}, {u"àRMN▌0►▐τ¶So╪└ïTuüP^╢E]s☺█↓↕âc╗€q≤┬Ü♂!⌡]"..., 511}, {u"╠↑♦φü♥  ", 8}}, 4) → 835
SYS  15987            679'711 read(6, [u""], 65'536) → 0
SYS  15987            683'584 _Exit(0)
>:

Here we see the redbean worker process closing the server socket file descriptors. The nicest thing about using fork() as the basis of a web server is that it creates a very safe level of isolation between clients. For instance, if a redbean worker process dies, then it'll die in a safe space that won't impact the server as a whole. redbean will also do things like wipe SSL keys from memory after fork(), so a compromised worker won't be able to read them. We also see some sigaction() overhead in the trace from the video. That's only needed since redbean is running in a mode where the REPL is active.

Once the worker process has established itself, the thing that makes redbean so lightning fast is that it only needs a single system call to serve each response message. That system call is writev(), which helps us avoid having to copy buffers. In fact, it's an even nicer paradigm than sendfile() since you'll notice the writev() call has two buffers. The first is the headers we had to generate. The second is the compressed zip executable content, whose transmission is a zero-copy operation that happens in kernelspace, because we used mmap() to load it into memory.

When I spoke about redbean at SpeakEasy JS [YouTube Video] last year, I loved bragging about how redbean, a supposedly slow forking web server, benchmarked at a million qps on my PC when nginx could only do half that. Is it possible to improve upon perfection? It turns out it is: I ran wrk again for our 2.0 release, and redbean did 1.1 million qps.

# Note: Benchmarked on an Intel® Core™ i9-9900 CPU
# Note: Use redbean-demo.com -s
$ wrk --latency -t 10000 -c 10000 -H 'Accept-Encoding: gzip' \
    http://127.0.0.1:8080/tool/net/demo/index.html
Running 10s test @ http://127.0.0.1:8080/tool/net/demo/index.html
  10000 threads and 10000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.44ms   46.76ms   1.76s    98.41%
    Req/Sec   189.08    259.45    39.10k    98.67%
  Latency Distribution
     50%    5.68ms
     75%    6.87ms
     90%    8.77ms
     99%  197.91ms
  4327728 requests in 3.72s, 3.37GB read
  Socket errors: connect 0, read 5, write 0, timeout 2
Requests/sec: 1163062.91
Transfer/sec:      0.90GB

redbean 2.0 introduces a new unix module implemented in tool/net/lunix.c. I love the unix operating system and I hope you will too. In fact, I love it so much that I wrote Lua wrappers for all of the following system interfaces.

exit, fork, read, write, open, close, stat, fstat, lseek, access, pipe, dup, poll, execve, environ, link, unlink, symlink, mkdir, makedirs, chdir, rmdir, sigaction, setitimer, nanosleep, clock_gettime, gmtime, localtime, opendir, fdopendir, fsync, fdatasync, getpid, getppid, getsockname, getsockopt, getuid, kill, listen, major, minor, pledge, raise, readlink, accept, realpath, recv, bind, recvfrom, rename, chmod, chown, send, chroot, sendto, setgid, commandv, setpgid, connect, setpgrp, setresgid, setresuid, setrlimit, setsid, fcntl, setsockopt, setuid, shutdown, sigprocmask, sigsuspend, ftruncate, siocgifconf, getcwd, socket, getegid, socketpair, geteuid, getgid, strsignal, gethostname, getpeername, sync, getpgid, syslog, getpgrp, truncate, umask, getrlimit, wait, getrusage, getsid

We've put a lot of work into documenting these functions and making sure they work on all supported platforms, including Windows, where we've sought to model the Linux kernel behaviors as accurately as possible. Our goal has been making sure doing anything from Lua will be possible rather than impossible. As such, this module abstracts very little and is nearly identical to the APIs provided for the C language.

fd = assert(unix.open("hello.txt"))
st = assert(unix.fstat(fd))
Log(kLogInfo, 'hello.txt is %d bytes in size' % {st:size()})
unix.close(fd)

In many cases, Lua makes using the C syscall functions much easier. For example, you'll notice we didn't need to pass unix.O_RDONLY to open() since Lua function calls can have default parameters. You'll also notice the object oriented access to the struct stat. The unix module implements this with a user data object metatable. That means it goes 10x faster than if we were to construct a table, because Cosmopolitan's stat has 16 fields! Thanks to Lua udata objects, we can return those without needing to copy them.

Write('<ul>')
for name, kind, ino, off in assert(unix.opendir('/etc')) do
   if kind == unix.DT_REG then
      Write('<li>%s' % {name})
   end
end
Write('</ul>')

The dirstream interface is particularly nice, since it's one of the few system call interfaces that the C language explicitly defines as object-oriented. What opendir() does is return an object that can be __call'd to receive each directory entry, and as such, Lua lets it integrate neatly and automatically with all its lovely language features.

Berkeley Sockets

Thanks to the system call interface designed at UC Berkeley, redbean's new unix module means that redbean can now be much more than just an HTTP server. For example, you can fork() off a daemon:

if assert(unix.fork()) > 0 then return end
unix.close(GetClientFd())
unix.setsid()
if assert(unix.fork()) > 0 then unix.exit(0) end
unix.close(1)
assert(unix.open('/var/log/daemon.log', unix.O_WRONLY | unix.O_CREAT | unix.O_TRUNC, 0600))
unix.dup(1, 2)
assert(unix.setgid(1000))
assert(unix.setuid(1000))

That listens for client connections on another port:

sock = assert(unix.socket())  -- create ipv4 tcp socket
assert(unix.bind(sock))       -- all interfaces ephemeral port
ip, port = assert(unix.getsockname(sock))
print("listening on ip", FormatIp(ip), "port", port)
assert(unix.listen(sock))
while true do
   client, clientip, clientport = assert(unix.accept(sock))
   print("got client ip", FormatIp(clientip), "port", clientport)
   unix.close(client)
end

That port can be any one of your choosing: smtp, irc, you name it.

Further Examples

There are also a few demo scripts included in the redbean-demo.com binary release file. You can also read the example code on GitHub. The unix-webserver.lua demo is particularly nice, since it shows how you can create a web server within your web server.

Let's say you're downloading Lua extensions for redbean off the web and you don't want them poking around in /etc like in the example above. redbean provides a sandboxing solution for this that's available on Linux and OpenBSD. It works by exposing the OpenBSD unix.pledge() system call, which we've polyfilled for Linux using SECCOMP BPF.

function OnWorkerStart()
   unix.pledge('stdio')
end

If you do something like that to reduce privileges on forked workers, then unix.opendir() will fail with unix.EPERM rather than exposing your /etc folder. OpenBSD will just kill the process whenever a sandbox violation occurs, but we've chosen to be more forgiving on Linux, since Lua code that behaves badly should have the opportunity to react to the error by mending its wicked ways. For example, many of the unix demo scripts included in redbean won't run if you're using sandboxing, so they'll print a helpful error if the system call reports a permission error.

Another important security feature is unix.setrlimit() which has a usage example in the binarytrees.lua benchmark game. The form takes an exponential parameter for how complex the game should be. If that number is high, like 25 rather than 18, then the Lua script will allocate gigabytes of memory and use a ton of CPU. The setrlimit() function lets you place a limit on how many resources a connection is allowed to use, before it either receives a notification signal or gets killed.
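
A minimal sketch of what that can look like (the resource constants follow the C naming convention exposed by the unix module, and the specific values here are illustrative assumptions, not recommendations):

function OnWorkerStart()
   unix.setrlimit(unix.RLIMIT_AS, 100 * 1024 * 1024)  -- cap address space near 100mb
   unix.setrlimit(unix.RLIMIT_CPU, 2)                  -- cap cpu time at 2 seconds
end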

redbean 2.0 introduces support for reading MaxMind's free GeoLite2 IP and ASN databases. It works locally in a self-hosted way, but you have to sign up on their website to download the necessary data files. Once you have them, your redbean will gain the superpower of knowing where everyone lives.

geodb = maxmind.open('/usr/local/share/maxmind/GeoLite2-City.mmdb')
geo = geodb:lookup(GetRemoteAddr())
if geo then
   Write('hello citizen of %s!' % {geo:get('country', 'names', 'en')})
end

This can be an invaluable tool for defending your redbean from the bad guys on the web. For example, let's say you want to have a comments section on your blog. One thing you could do to reduce abuse, while remaining completely open, is to simply use MaxMind to check that the visitor is using their home internet connection.

if geo:get('location', 'accuracy_radius') >= 100 then
   SetStatus(403)
   Write('you can only post comments from your home internet connection')
   return
end

IPs that come from a data center usually have an accuracy radius of 1000 km or more, so if the radius is under 100 km, the visitor is much less likely to be a robot or a bad guy concealing their identity. This is by no means a perfect criterion for fighting abuse, but it can keep a lot of unsavory traffic at bay in a scrappy way that doesn't require FAANG accounts. It works because the list of bad guys who've hacked a fleet of Comcast home routers is much shorter than the list of bad guys who just rent cheap virtual servers from the usual suspects in the cloud.

asndb = maxmind.open('/usr/local/share/maxmind/GeoLite2-ASN.mmdb')
as = asndb:lookup(GetRemoteAddr())
asname = as:get('autonomous_system_organization')

Logging the autonomous system name will let you know who the usual suspects are. When bad guys send unreasonable amounts of traffic, they love doing it using a fleet of seemingly unrelated IPs from countries around the world. But one of their weaknesses is that, like most devs, they're usually wedded to just one platform. Knowing what the platform is can be an invaluable tool in connecting the dots and filing complaints accordingly.

redbean 2.0 introduces enhancements to the Lua language that are intended to help C/C++ and Python developers feel more comfortable; a combined example follows the list.

  • printf modulus operator, like Python. For example, you can say "hello %s" % {"world"} instead of string.format("hello %s", "world").
  • octal (base 8) integer literals. For example 0644 == 420 is the case in redbean, whereas in upstream Lua 0644 == 644 would be the case. There's also a new oct function.
  • binary (base 2) integer literals. For example 0b1010 == 10 is the case in redbean, whereas in upstream Lua 0b1010 would result in an error. There's also a new bin function.
  • GNU syntax for the ASCII ESC character in string literals. For example, "\e" is the same as "\x1b".
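
Here's how those extensions look in practice (this is redbean-only syntax that won't parse in upstream Lua, and the oct output shown is my inference from the description above):

print("hello %s" % {"world"})  -- printf modulus operator, like Python
print(0644)                    -- octal literal, prints 420
print(0b1010)                  -- binary literal, prints 10
print(oct(420))                -- presumably prints 0644
print("\e[1mbold\e[0m")        -- \e is ASCII ESC, same as "\x1b"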

redbean 2.0 introduces the following native functions:

EncodeJson, EncodeLua, Compress, Uncompress, GetMonospaceWidth, ProgramMaxPayloadSize, ProgramSslRequired, ProgramSslClientVerify, MeasureEntropy, Decimate, Benchmark, Rdtsc, Lemur64, Rand64, Rdrand, Rdseed, GetCpuCount, GetCpuCore, GetCpuNode, oct, hex, bin

redbean 2.0 adds support for modern password hashing. See argon2.hash_encoded and argon2.verify.
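
A minimal sketch of how those might fit together (the option names follow the lua-argon2 convention, and the cost values plus the GetRandomBytes salt are illustrative assumptions):

encoded = assert(argon2.hash_encoded('hunter2', GetRandomBytes(16),
   {t_cost=3, m_cost=65536, parallelism=1}))  -- hash at signup
if argon2.verify(encoded, 'hunter2') then     -- verify at login
   Write('welcome back')
end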

redbean now defines the arg global the same way as the upstream lua command interpreter.
The argv global is now deprecated.

This release includes many upstream fixes from Cosmopolitan Libc. The quality of the Windows platform support has improved considerably. For example, fork() now works well enough on Windows that this release enables it by default. Many other bugs on Windows that would cause redbean to become unresponsive to CTRL-C interrupts have also now been resolved.

[United States of Lemuria - two dollar bill - all debts public and primate]

Funding for the development of redbean was crowdsourced from Justine Tunney's GitHub sponsors and Patreon subscribers. Your support is what makes projects like redbean possible. Thank you.

Written by Justine Tunney

jtunney@gmail.com


GaryBIshop (9 days ago): Wow! This is amazing! Could I build an installable Tar Heel Reader clone on this?

U.S. death rates show how politics are affecting public health


In an ideal world, public health would be independent of politics. Yet recent events in the U.S., such as the Supreme Court’s impending overturning of Roe v. Wade, the spike in gun violence across the country, and the stark partisan divide on the response to the Covid-19 pandemic, are putting public health on a collision course with politics. Although this may seem like a new phenomenon, American politics has been creating a deep fissure in the health of Americans over the past two decades.

I say that based on a comprehensive analysis my colleagues and I performed and published Tuesday in The BMJ. In this study, in which we linked U.S. mortality and election data from 2001 to 2019, people in counties that voted for Republican presidential candidates were more likely to die prematurely than those in counties that voted for Democratic candidates, and the gap has grown sixfold over the last two decades. We found similar results when we looked only at counties that voted for one party’s candidate throughout that period, as well as when we used state election data for governors.

While death rates in Democratic counties declined 22% between 2001 and 2019, Republican counties saw only an 11% decline, with almost no improvement since 2008.

The seed for this paper was planted a few years ago when I found myself moonlighting as a cardiologist in a rural hospital in North Carolina that had declared bankruptcy. It was the only hospital in a county of more than 150,000 people. Suddenly, the rural health crisis wasn’t just an abstract, far-away process for me but one I was in the midst of. To better understand what was going on in rural America, several colleagues and I conducted a series of analyses, published in the Journal of the American Medical Association, showing that the gap in death rates between urban and rural areas was wide — and widening. To further explore what might be causing this, I became interested in looking at political affiliation as a possible driver of this gap, given that rural counties tended to lean toward voting for Republican candidates.

Yet as the results of our latest analysis began to materialize, they surprised our entire team of seasoned health policy researchers who had seen it all.

Regardless of whether we looked at urban or rural areas, people living in areas with Republican political preferences were more likely to die prematurely than those in areas with Democratic political preferences. There was no single cause of death driving this lethal wedge: the gap between Republican and Democratic areas widened for all 10 of the most common causes of death.

Why is this gap widening? Health policy is one possibility our study points to. Based on statistical testing, the gap in mortality appeared to particularly widen after 2008, which corresponds to the passage of the Affordable Care Act in 2010, a major part of which was Medicaid expansion. Our prior work showed that Medicaid expansion led to significant gains in health insurance among at-risk individuals and was associated with widespread improvements in health outcomes, including saving lives. The effect has been particularly notable in rural areas, where Medicaid expansion has helped mitigate rural hospital closures. In our BMJ analysis, rural Republican counties have the highest death rates and have experienced the least improvement over time. Yet many Republican states have resisted Medicaid expansion, and decisions like this and a general underinvestment in public health by Republican governors might be the reason behind the growing Democratic-Republican mortality gap.

Health behaviors are becoming increasingly enmeshed in political identity, as the pandemic has highlighted, and those could also be at play. What is perhaps most telling in our study is that while both Black and Hispanic Americans experienced largely similar gains in health regardless of what political environment they lived in, with Black residents of Democratic areas experiencing the greatest reduction in death rates of any major racial-ethnic group, the sharpest divide is seen among white Americans. In fact, the fourfold growth in the gap in death rates between white residents of Democratic and Republican areas seems to be driving most of the overall expanding chasm between Democratic and Republican areas.

For clinicians and researchers, the message is clear: We can no longer pretend that politics doesn’t permeate American health care and policy. While the separation of medicine and politics is aspirational, particularly in the U.S., that ship has sailed and, as our paper reveals, has been sailing for at least the last two decades. While medical journals now frequently focus on social drivers of health, our analysis highlights the need to also account for the political drivers that affect Americans’ health.

As a researcher, I often ask myself “What’s next?” after publishing a study. But I must confess to feeling a little nihilistic this time around: Will this study change minds, or will it become just another projectile in the broader partisan slugfest this country is trapped in? Odds are that most politicians and their most passionate supporters are so locked into their tribes that no amount of data will make them reconsider their positions, even if those very positions are proving to be self-destructive.

As a researcher who is primarily a clinician, however, my main motivating force is compassion and empathy for my patients, and wanting the best for them. I remain hopeful that we can still come together on a common goal of achieving healthy lives for all Americans.

Efforts to improve public health will hit a wall if they are not followed by advocacy. But advocacy shouldn’t mean finger-pointing or victim-blaming. The solution is not to further enmesh health care in politics, but to disentangle it from partisan ideologies. Efforts to reach across the aisle on areas with bipartisan support, such as improved care of chronic disease and supporting rural health care, should be accelerated while programs such as Medicaid expansion should be somehow detoxified. All of this seems almost impossible in our current political environment, but I have to believe some of these are achievable goals.

Perhaps our findings might nudge some politicians to reconsider their policy positions. With the pandemic only likely to widen this gap, it may not be too late to reverse course and close the chasm. The most encouraging aspect of our work is that it shows that the link between health and politics is not inevitable. In fact, in 2001, there was almost no difference in death rates between Democratic and Republican areas. Rural Democratic counties, for example, had higher death rates in 2001 than Republican counties, though now Republican rural counties experience much higher rates of death. The gap in death rates between Republican and Democratic counties is therefore an entirely modern phenomenon.

Whether American politicians will listen remains to be seen. Yet I am hopeful that there are people who will see in these sobering data a reason to act as the well-being of their communities crumbles.

Haider J. Warraich is a physician at the VA Boston Healthcare System and Brigham and Women’s Hospital, an assistant professor at Harvard Medical School, and the author of “The Song of Our Scars: The Untold Story of Pain” (Basic Books, April 2022). The views expressed here are his and not necessarily those of his employers.


GaryBIshop (18 days ago): Wow!

How is Voyager Still Talking After All These Years?


The tech news channels were recently abuzz with stories about strange signals coming back from Voyager 1. While the usual suspects jumped to the usual conclusions — aliens!! — in the absence of a firm explanation for the anomaly, some of us looked at this event as an opportunity to marvel at the fact that the two Voyager spacecraft, now in excess of 40 years old, are still in constant contact with those of us back on Earth, and this despite having covered around 20 billion kilometers in one of the most hostile environments imaginable.

Like many NASA programs, Voyager has far exceeded its original design goals, and is still reporting back useful science data to this day. But how is that even possible? What 1970s-era radio technology made it onto the twin space probes that allowed them to not only fulfill their primary mission of exploring the outer planets, but also to continue into an extended mission to interstellar space and still remain in two-way contact? As it turns out, there’s nothing magical about Voyager’s radio — just solid engineering seasoned with a healthy dash of redundancy, and a fair bit of good luck over the years.

The Big Dish

For a program that in many ways defined the post-Apollo age of planetary exploration, Voyager was conceived surprisingly early. The complex mission profile had its origins in the “Planetary Grand Tour” concept of the mid-1960s, which was planned to take advantage of an alignment of the outer planets that would occur in the late 1970s. If launched at just the right time, a probe would be able to reach Jupiter, Saturn, Uranus, and Neptune using only gravitational assists after its initial powered boost, before being flung out on a course that would eventually take it out into interstellar space.

The idea of visiting all the outer planets was too enticing to pass up, and with the success of the Pioneer missions to Jupiter serving as dress rehearsals, the Voyager program was designed. Like all NASA programs, Voyager had certain primary mission goals, a minimum set of planetary science experiments that project managers were reasonably sure they could accomplish. The Voyager spacecraft were designed to meet these core mission goals, but planners also hoped that the vehicles would survive past their final planetary encounters and provide valuable data as they crossed the void. And so the hardware, both in the spacecraft and on the ground, reflects that hope.

Voyager primary reflector being manufactured, circa 1975. The body of the dish is made from honeycomb aluminum and is covered with graphite-impregnated epoxy laminate skins. The surface precision of the finished dish is 250 μm. Source: NASA/JPL

The most prominent physical feature of both the ground stations of the Deep Space Network (DSN), which we’ve covered in-depth already, and the Voyager spacecraft themselves are their parabolic dish antennas. While the scale may differ — the DSN sports telescopes up to 70 meters across — the Voyager twins were each launched with the largest dish that could fit into the fairing of the Titan IIIE launch vehicle.

Voyager High-Gain Antenna (HGA) schematic. Note the Cassegrain optics, as well as the frequency-selective subreflector that’s transparent to S-band (2.3-GHz) but reflects X-band (8.4-GHz). Click to enlarge. Source: NASA/JPL

The primary reflector of the High Gain Antenna (HGA) on each Voyager spacecraft is a parabolic dish 3.7 meters in diameter. The dish is made from honeycomb aluminum that’s covered with a graphite-impregnated epoxy laminate skin. The surface of the reflector is finished to a high degree of smoothness, with a surface precision of 250 μm, which is needed for use in both the S-band (2.3 GHz), used for uplink and downlink, and X-band (8.4 GHz), which is downlink only.

Like their Earth-bound counterparts in the DSN, the Voyager antennas are a Cassegrain reflector design, which uses a Frequency Selective Subreflector (FSS) at the focus of the primary reflector. The subreflector focuses and corrects incoming X-band waves back down toward the center of the primary dish, where the X-band feed horn is located. This arrangement provides about 48 dBi of gain and a beamwidth of 0.5° on the X-band. The S-band arrangement is a little different, with the feed horn located inside the subreflector. The frequency-selective nature of the subreflector material allows S-band signals to pass right through it and illuminate the primary reflector directly. This gives about 36 dBi of gain in the S-band, with a beamwidth of 2.3°. There’s also a low-gain S-band antenna with a more-or-less cardioid radiation pattern located on the Earth-facing side of the subreflector assembly, but that was only used for the first 80 days of the mission.
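
Those figures are consistent with the textbook estimate for parabolic antenna gain, G = 10·log10(η(πD/λ)²). Here is a quick sanity check (the 55% aperture efficiency is a typical assumed value, not a published Voyager spec):

-- estimate parabolic dish gain in dBi
function DishGainDbi(diameterMeters, freqHz, efficiency)
   local lambda = 299792458 / freqHz  -- wavelength from the speed of light
   return 10 * math.log(efficiency * (math.pi * diameterMeters / lambda)^2, 10)
end
print(DishGainDbi(3.7, 8.4e9, 0.55))  -- X-band: ~47.7 dBi, close to the quoted 48 dBi
print(DishGainDbi(3.7, 2.3e9, 0.55))  -- S-band: ~36.4 dBi, close to the quoted 36 dBi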

Two Is One

Three of the ten bays on each Voyager’s bus are dedicated to the transmitters, receivers, amplifiers, and modulators of the Radio Frequency Subsystem, or RFS. As with all high-risk space missions, redundancy is the name of the game — almost every potential single point of failure in the RFS has some sort of backup, an engineering design decision that has proven mission-saving in more than one instance on both spacecraft over the last 40 years.

On the uplink side, each Voyager has two S-band double-conversion superhet receivers. In April of 1978, barely a year before its scheduled encounter with Jupiter, the primary S-band receiver on Voyager 2 was shut down by the spacecraft’s fault-protection algorithms after it failed to pick up any commands from Earth for an extended period. The backup receiver was switched on, but it was found to have a bad capacitor in the phase-locked loop circuit intended to adjust for Doppler-shift changes in frequency due primarily to the movement of the Earth. Mission controllers commanded the spacecraft to switch back to the primary receiver, but that failed again, leaving Voyager 2 without any way to be commanded from the ground.

Luckily, the fault-protection routines switched the backup receiver back on after a week of no communication, but this left controllers in a jam. To continue the mission, they needed to find a way to use the wonky backup receiver to command the spacecraft. They came up with a complex scheme where DSN controllers take a guess at what the uplink frequency will be based on the predicted Doppler shift. The trouble is, thanks to the bad capacitor, the signal needs to be within 100 Hz of the lock frequency of the receiver, and that frequency changes with the temperature of the receiver, by about 400 Hz per degree. This means controllers need to perform tests twice a week to determine the current lock frequency, and also let the spacecraft stabilize thermally for three days after uplinking any commands that might change the temperature on the spacecraft.

Double Downlinks

An Apollo-era TWTA, similar to the S-band and X-band power amps used on Voyager. Source: Ken Shirriff

On the transmit side, both the X-band and S-band transmitters use separate exciters and amplifiers, and again, multiples of each for redundancy. Although downlink is primarily via the X-band transmitter, either of the two S-band exciters can be fed into either of two different power amplifiers. A Solid State Amplifier (SSA) provides a selectable power output of either 6 W or 15 W to the feedhorn, while a separate traveling-wave tube amplifier (TWTA) provides either 6.5 W or 19 W. The dual X-band exciters, which use the S-band exciters as their frequency reference, use one of two dedicated TWTAs, each of which can send either 12 W or 18 W to the high-gain antenna.

The redundancy built into the downlink side of the radio system would play a role in saving the primary mission on both spacecraft. In October of 1987, Voyager 1 suffered a failure in one of the X-band TWTAs. A little more than a year later, Voyager 2 experienced the same issue. Both spacecraft were able to switch to the other TWTA, allowing Voyager 1 to send back the famous “Family Portrait” of the Solar system including the Pale Blue Dot picture of Earth, and for Voyager 2 to send data back from its flyby of Neptune in 1989.

Slower and Slower

The radio systems on the Voyagers were primarily designed to support the planetary flybys, and so were optimized to stream as much science data as possible back to the DSN. The close approaches to each of the outer planets meant each spacecraft accelerated dramatically during the flybys, right at the moment of maximum data production from the ten science instruments onboard. To avoid bottlenecks, each Voyager included a Digital Tape Recorder (DTR), which was essentially a fancy 8-track tape deck, to buffer science data for later downlink.

Also, the increasing distance to each Voyager has drastically decreased the bandwidth available to downlink science data. When the spacecraft made their first flybys of Jupiter, data streamed at a relatively peppy 115,200 bits per second. Now, with the spacecraft each approaching a full light-day away, data drips in at only 160 bps. Uplinked commands are even slower, a mere 16 bps, and are blasted across space from the DSN’s 70-meter dish antennas using 18 kW of power. The uplink path loss over the current 23 billion kilometer distance to Voyager 1 exceeds 200 dB; on the downlink side, the DSN telescopes have to dig out a signal that has faded to the attowatt (10⁻¹⁸ W) range.
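
That path-loss figure checks out against the standard free-space formula, FSPL = 20·log10(4πd/λ). A back-of-the-envelope calculation, ignoring all the other terms in a real link budget:

-- free-space path loss in dB at distance d meters, frequency f Hz
function FsplDb(d, f)
   local lambda = 299792458 / f
   return 20 * math.log(4 * math.pi * d / lambda, 10)
end
print(FsplDb(23e12, 2.3e9))  -- S-band over 23 billion km: roughly 307 dB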

That the radio systems of Voyager 1 and Voyager 2 worked at all while they were still in the main part of their planetary mission is a technical achievement worth celebrating. The fact that both spacecraft are still communicating, despite the challenges of four decades in space and multiple system failures, is nearly a miracle.

GaryBIshop (18 days ago): Wow! Such great technology.

Pilot explains how he Survived Blackbird Disintegration at Mach 3.2


The following story told by Bill Weaver is priceless in conveying the experience of departing an SR-71 Blackbird at an altitude of fifteen miles and a speed of Mach 3.2.

During the Cold War, there was a need for a new reconnaissance aircraft that could evade enemy radar, and the customer needed it fast. At Lockheed Martin’s advanced development group, the Skunk Works, work had already begun on an innovative aircraft to improve intelligence-gathering, one that would fly faster than any aircraft before or since, at greater altitude, and with a minimal radar cross section. The team rose to the nearly impossible challenge, and the aircraft took its first flight on Dec. 22, 1964. The legendary SR-71 Blackbird was born.

The first Blackbird accident requiring the pilot and the RSO to eject happened before the SR-71 was turned over to the Air Force. On Jan. 25, 1966, Lockheed test pilots Bill Weaver and Jim Zwayer were flying SR-71 Blackbird #952 at Mach 3.2 and 78,800 feet when a serious engine unstart and the subsequent “instantaneous loss of engine thrust” occurred.

The following story told by Weaver (available in Col. Richard H. Graham’s book SR-71: The Complete Illustrated History of the Blackbird, the World’s Highest, Fastest Plane) is priceless in conveying the experience of departing a Blackbird at an altitude of fifteen miles and a speed of Mach 3.2.

“Among professional aviators, there’s a well-worn saying: Flying is simply hours of boredom punctuated by moments of stark terror. And yet, I don’t recall too many periods of boredom during my 30-year career with Lockheed, most of which was spent as a test pilot.

“By far, the most memorable flight occurred on Jan. 25, 1966. Jim Zwayer, a Lockheed flight test reconnaissance and navigation systems specialist, and I were evaluating those systems on an SR-71 Blackbird test from Edwards AFB, Calif. We also were investigating procedures designed to reduce trim drag and improve high-Mach cruise performance. The latter involved flying with the center-of-gravity (CG) located further aft than normal, which reduced the Blackbird’s longitudinal stability.

“We took off from Edwards at 11:20 a.m. and completed the mission’s first leg without incident. After refueling from a KC-135 tanker, we turned eastbound, accelerated to a Mach 3.2-cruise speed and climbed to 78,000 ft., our initial cruise-climb altitude.

“Several minutes into cruise, the right engine inlet’s automatic control system malfunctioned, requiring a switch to manual control. The SR-71’s inlet configuration was automatically adjusted during supersonic flight to decelerate air flow in the duct, slowing it to subsonic speed before reaching the engine’s face. This was accomplished by the inlet’s center-body spike translating aft, and by modulating the inlet’s forward bypass doors. Normally, these actions were scheduled automatically as a function of Mach number, positioning the normal shock wave (where air flow becomes subsonic) inside the inlet to ensure optimum engine performance.


“Without proper scheduling, disturbances inside the inlet could result in the shock wave being expelled forward–a phenomenon known as an “inlet unstart.” That causes an instantaneous loss of engine thrust, explosive banging noises and violent yawing of the aircraft–like being in a train wreck. Unstarts were not uncommon at that time in the SR-71’s development, but a properly functioning system would recapture the shock wave and restore normal operation.

“On the planned test profile, we entered a programmed 35-deg. bank turn to the right. An immediate unstart occurred on the right engine, forcing the aircraft to roll further right and start to pitch up. I jammed the control stick as far left and forward as it would go. No response. I instantly knew we were in for a wild ride.

“I attempted to tell Jim what was happening and to stay with the airplane until we reached a lower speed and altitude. I didn’t think the chances of surviving an ejection at Mach 3.18 and 78,800 ft. were very good. However, g-forces built up so rapidly that my words came out garbled and unintelligible, as confirmed later by the cockpit voice recorder.

“The cumulative effects of system malfunctions, reduced longitudinal stability, increased angle-of-attack in the turn, supersonic speed, high altitude and other factors imposed forces on the airframe that exceeded flight control authority and the Stability Augmentation System’s ability to restore control.

“Everything seemed to unfold in slow motion. I learned later the time from event onset to catastrophic departure from controlled flight was only 2-3 sec. Still trying to communicate with Jim, I blacked out, succumbing to extremely high g-forces. The SR-71 then literally disintegrated around us. From that point, I was just along for the ride.

“My next recollection was a hazy thought that I was having a bad dream. Maybe I’ll wake up and get out of this mess, I mused. Gradually regaining consciousness, I realized this was no dream; it had really happened. That also was disturbing, because I could not have survived what had just happened. Therefore, I must be dead. Since I didn’t feel bad–just a detached sense of euphoria–I decided being dead wasn’t so bad after all. As full awareness took hold, I realized I was not dead, but had somehow separated from the airplane. I had no idea how this could have happened; I hadn’t initiated an ejection. The sound of rushing air and what sounded like straps flapping in the wind confirmed I was falling, but I couldn’t see anything. My pressure suit’s face plate had frozen over and I was staring at a layer of ice.


“The pressure suit was inflated, so I knew an emergency oxygen cylinder in the seat kit attached to my parachute harness was functioning. It not only supplied breathing oxygen, but also pressurized the suit, preventing my blood from boiling at extremely high altitudes. I didn’t appreciate it at the time, but the suit’s pressurization had also provided physical protection from intense buffeting and g-forces. That inflated suit had become my own escape capsule.

“My next concern was about stability and tumbling. Air density at high altitude is insufficient to resist a body’s tumbling motions, and centrifugal forces high enough to cause physical injury could develop quickly. For that reason, the SR-71’s parachute system was designed to automatically deploy a small-diameter stabilizing chute shortly after ejection and seat separation. Since I had not intentionally activated the ejection system–and assuming all automatic functions depended on a proper ejection sequence–it occurred to me the stabilizing chute may not have deployed.

“However, I quickly determined I was falling vertically and not tumbling. The little chute must have deployed and was doing its job. Next concern: the main parachute, which was designed to open automatically at 15,000 ft. Again I had no assurance the automatic-opening function would work. I couldn’t ascertain my altitude because I still couldn’t see through the iced-up face plate. There was no way to know how long I had been blacked-out or how far I had fallen. I felt for the manual-activation D-ring on my chute harness, but with the suit inflated and my hands numbed by cold, I couldn’t locate it. I decided I’d better open the face plate, try to estimate my height above the ground, then locate that “D” ring. Just as I reached for the face plate, I felt the reassuring sudden deceleration of main-chute deployment. I raised the frozen face plate and discovered its uplatch was broken. Using one hand to hold that plate up, I saw I was descending through a clear, winter sky with unlimited visibility. I was greatly relieved to see Jim’s parachute coming down about a quarter of a mile away. I didn’t think either of us could have survived the aircraft’s breakup, so seeing Jim had also escaped lifted my spirits incredibly.

“I could also see burning wreckage on the ground a few miles from where we would land. The terrain didn’t look at all inviting–a desolate, high plateau dotted with patches of snow and no signs of habitation. I tried to rotate the parachute and look in other directions. But with one hand devoted to keeping the face plate up and both hands numb from high-altitude, subfreezing temperatures, I couldn’t manipulate the risers enough to turn. Before the breakup, we’d started a turn in the New Mexico-Colorado-Oklahoma-Texas border region. The SR-71 had a turning radius of about 100 mi. at that speed and altitude, so I wasn’t even sure what state we were going to land in. But, because it was about 3:00 p.m., I was certain we would be spending the night out here.

“At about 300 ft. above the ground, I yanked the seat kit’s release handle and made sure it was still tied to me by a long lanyard. Releasing the heavy kit ensured I wouldn’t land with it attached to my derriere, which could break a leg or cause other injuries. I then tried to recall what survival items were in that kit, as well as techniques I had been taught in survival training.

“Looking down, I was startled to see a fairly large animal–perhaps an antelope–directly under me. Evidently, it was just as startled as I was because it literally took off in a cloud of dust.

This SR-71 pilot survived his Blackbird’s disintegration at a speed of Mach 3.2
Bill Weaver

“My first-ever parachute landing was pretty smooth. I landed on fairly soft ground, managing to avoid rocks, cacti and antelopes. My chute was still billowing in the wind, though. I struggled to collapse it with one hand, holding the still-frozen face plate up with the other.

“‘Can I help you?’ a voice said. Was I hearing things? I must be hallucinating. Then I looked up and saw a guy walking toward me, wearing a cowboy hat. A helicopter was idling a short distance behind him. If I had been at Edwards and told the search-and-rescue unit that I was going to bail out over Rogers Dry Lake at a particular time of day, a crew couldn’t have gotten to me as fast as that cowboy-pilot had.

“The gentleman was Albert Mitchell, Jr., owner of a huge cattle ranch in northeastern New Mexico. I had landed about 1.5 mi. from his ranch house–and from a hangar for his two-place Hughes helicopter. Amazed to see him, I replied I was having a little trouble with my chute. He walked over and collapsed the canopy, anchoring it with several rocks. He had seen Jim and me floating down and had radioed the New Mexico Highway Patrol, the Air Force and the nearest hospital.

“Extracting myself from the parachute harness, I discovered the source of those flapping-strap noises heard on the way down. My seat belt and shoulder harness were still draped around me, attached and latched. The lap belt had been shredded on each side of my hips, where the straps had fed through knurled adjustment rollers. The shoulder harness had shredded in a similar manner across my back. The ejection seat had never left the airplane; I had been ripped out of it by the extreme forces, seat belt and shoulder harness still fastened.

“I also noted that one of the two lines that supplied oxygen to my pressure suit had come loose, and the other was barely hanging on. If that second line had become detached at high altitude, the deflated pressure suit wouldn’t have provided any protection. I knew an oxygen supply was critical for breathing and suit pressurization, but hadn’t appreciated how much physical protection an inflated pressure suit could provide. That the suit could withstand forces sufficient to disintegrate an airplane and shred heavy nylon seat belts, yet leave me with only a few bruises and minor whiplash, was impressive. I truly appreciated having my own little escape capsule.

“After helping me with the chute, Mitchell said he’d check on Jim. He climbed into his helicopter, flew a short distance away and returned about 10 min. later with devastating news: Jim was dead. Apparently, he had suffered a broken neck during the aircraft’s disintegration and was killed instantly. Mitchell said his ranch foreman would soon arrive to watch over Jim’s body until the authorities arrived.

“I asked to see Jim and, after verifying there was nothing more that could be done, agreed to let Mitchell fly me to the Tucumcari hospital, about 60 mi. to the south.


“I have vivid memories of that helicopter flight, as well. I didn’t know much about rotorcraft, but I knew a lot about ‘red lines,’ and Mitchell kept the airspeed at or above red line all the way. The little helicopter vibrated and shook a lot more than I thought it should have. I tried to reassure the cowboy-pilot I was feeling OK; there was no need to rush. But since he’d notified the hospital staff that we were inbound, he insisted we get there as soon as possible. I couldn’t help but think how ironic it would be to have survived one disaster only to be done in by the helicopter that had come to my rescue.

“However, we made it to the hospital safely–and quickly. Soon, I was able to contact Lockheed’s flight test office at Edwards. The test team there had been notified initially about the loss of radio and radar contact, then told the aircraft had been lost. They also knew what our flight conditions had been at the time, and assumed no one could have survived. I briefly explained what had happened, describing in fairly accurate detail the flight conditions prior to breakup.

“The next day, our flight profile was duplicated on the SR-71 flight simulator at Beale AFB, Calif. The outcome was identical. Steps were immediately taken to prevent a recurrence of our accident. Testing at a CG aft of normal limits was discontinued, and trim-drag issues were subsequently resolved via aerodynamic means. The inlet control system was continuously improved and, with subsequent development of the Digital Automatic Flight and Inlet Control System, inlet unstarts became rare. Investigation of our accident revealed that the nose section of the aircraft had broken off aft of the rear cockpit and crashed about 10 mi. from the main wreckage. Parts were scattered over an area approximately 15 mi. long and 10 mi. wide. Extremely high air loads and g-forces, both positive and negative, had literally ripped Jim and me from the airplane. Unbelievably good luck is the only explanation for my escaping relatively unscathed from that disintegrating aircraft.

“Two weeks after the accident, I was back in an SR-71, flying the first sortie on a brand-new bird at Lockheed’s Palmdale, Calif., assembly and test facility. It was my first flight since the accident, so a flight test engineer in the back seat was probably a little apprehensive about my state of mind and confidence. As we roared down the runway and lifted off, I heard an anxious voice over the intercom. ‘Bill! Bill! Are you there?’

“‘Yeah, George. What’s the matter?’

“‘Thank God! I thought you might have left.’ The rear cockpit of the SR-71 has no forward visibility–only a small window on each side–and George couldn’t see me. A big red light on the master-warning panel in the rear cockpit had illuminated just as we rotated, reading ‘Pilot Ejected.’ Fortunately, the cause was a misadjusted microswitch, not my departure.”

Be sure to check out Linda Sheffield Miller’s Facebook page, Habubrats, for awesome Blackbird photos and stories. (Linda is the daughter of Col. Richard “Butch” Sheffield, an SR-71 Reconnaissance Systems Officer.)

Photo credit: Brian Shul / U.S. Air Force



GaryBIshop · 20 days ago
Great story!