
Computer scientists invent an efficient new way to count

By making use of randomness, a team has created a simple algorithm for estimating large numbers of distinct objects in a stream of data.

Imagine that you’re sent to a pristine rainforest to carry out a wildlife census. Every time you see an animal, you snap a photo. Your digital camera will track the total number of shots, but you’re only interested in the number of unique animals — all the ones that you haven’t counted already. What’s the best way to get that number? “The obvious solution requires remembering every animal you’ve seen so far and comparing each new animal to the list,” said Lance Fortnow, a computer scientist at the Illinois Institute of Technology. But there are cleverer ways to proceed, he added, because if you have thousands of entries, the obvious approach is far from easy.

It gets worse. What if you’re Facebook, and you want to count the number of distinct users who log in each day, even if some of them log in from multiple devices and at multiple times? Now we’re comparing each new login to a list that could run to the billions.

In a recent paper, computer scientists have described a new way to approximate the number of distinct entries in a long list, a method that requires remembering only a small number of entries. The algorithm will work for any list where the items come in one at a time — think words in a speech, goods on a conveyor belt or cars on the interstate.

The CVM algorithm, named for its creators — Sourav Chakraborty of the Indian Statistical Institute, Vinodchandran Variyam of the University of Nebraska, Lincoln, and Kuldeep Meel of the University of Toronto — is a significant step toward solving what’s called the distinct elements problem, which computer scientists have grappled with for more than 40 years. It asks for a way to efficiently monitor a stream of elements — the total number of which may exceed available memory — and then estimate the number of unique elements.

“The new algorithm is astonishingly simple and easy to implement,” said Andrew McGregor of the University of Massachusetts, Amherst. “I wouldn’t be surprised if this became the default way the [distinct elements] problem is approached in practice.”

To illustrate both the problem and how the CVM algorithm solves it, imagine that you’re listening to the audiobook of Hamlet. There are 30,557 words in the play. How many are distinct? To find out, you could listen to the play (making frequent use of the pause button), write down each word alphabetically in a notebook, and skip over words already on your list. When you reach the end, you’ll just count the number of words on the list. This approach works, but it requires an amount of memory roughly equal to the number of unique words.
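The notebook approach is straightforward to express in code. Below is a minimal sketch in Common Lisp (the language that appears later in this digest); the function name and the use of a hash table are illustrative, not from the article.

;; Exact counting: remember every distinct word seen so far.
;; Memory grows with the number of unique words.
(defun count-distinct-exactly (words)
  (let ((seen (make-hash-table :test #'equal)))
    (dolist (word words)
      (setf (gethash word seen) t))
    (hash-table-count seen)))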

In typical data-streaming situations, there could be millions of items to keep track of. “You might not want to store everything,” Variyam said. And that’s where the CVM algorithm can offer an easier way. The trick, he said, is to rely on randomization.


Vinodchandran Variyam helped invent a technique to estimate the number of distinct elements in a stream of data.

Let’s return to Hamlet, but this time your working memory — consisting of a whiteboard — has room for just 100 words. Once the play starts, you write down the first 100 words you hear, again skipping any repeats. When the space is full, press pause and flip a coin for each word. Heads, and the word stays on the list; tails, and you delete it. After this preliminary round, you’ll have about 50 distinct words left.

Now you move forward with what the team calls Round 1. Keep going through Hamlet, adding new words as you go. If you come to a word that’s already on your list, flip a coin again. If it’s tails, delete the word; heads, and the word stays on the list. Proceed in this fashion until you have 100 words on the whiteboard. Then randomly delete about half again, based on the outcome of 100 coin tosses. That concludes Round 1.

Next, move to Round 2. Continue as in Round 1, only now we’ll make it harder to keep a word. When you come to a repeated word, flip the coin again. Tails, and you delete it, as before. But if it comes up heads, you’ll flip the coin a second time. Only keep the word if you get a second heads. Once you fill up the board, the round ends with another purge of about half the words, based on 100 coin tosses.

In the third round, you’ll need three heads in a row to keep a word. In the fourth round you’ll need four heads in a row. And so on.

Eventually, in the kth round, you’ll reach the end of Hamlet. The point of the exercise has been to ensure that every word, by virtue of the random selections you’ve made, has the same probability of being there: 1/2^k. If, for instance, you have 61 words on your list at the conclusion of Hamlet, and the process took six rounds, you can divide 61 by the probability, 1/2^6, to estimate the number of distinct words — which comes out to 3,904 in this case. (It’s easy to see how this procedure works: Suppose you start with 100 coins and flip each one individually, keeping only those that come up heads. You’ll end up with close to 50 coins, and if someone divides that number by the probability, 1/2, they can guess that there were about 100 coins originally.)
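For readers who want to try this, here is a minimal sketch of the procedure in Common Lisp, assuming the rule from the CVM paper that every arriving word, new or repeated, is first forgotten and then kept only with the current round’s probability of 1/2^k. The function name and the 100-word capacity are illustrative.

;; CVM sketch: estimate the number of distinct items in WORDS while
;; remembering at most CAPACITY of them. In round k an item survives
;; a coin-flip test with probability 1/2^k.
(defun estimate-distinct (words &key (capacity 100))
  (let ((board (make-hash-table :test #'equal))
        (k 0))                            ; current round number
    (dolist (word words)
      (remhash word board)                ; forget any earlier copy of this word
      ;; keep this occurrence only if it wins k coin flips in a row
      (when (loop repeat k always (zerop (random 2)))
        (setf (gethash word board) t))
      ;; when the whiteboard fills up, purge each entry with probability 1/2
      (when (>= (hash-table-count board) capacity)
        (maphash (lambda (key value)
                   (declare (ignore value))
                   (when (zerop (random 2))
                     (remhash key board)))
                 board)
        (incf k)))
    ;; each surviving word is on the board with probability 1/2^k
    (* (hash-table-count board) (expt 2 k))))

With 61 survivors after six rounds, the function returns 61 × 2^6 = 3,904, matching the estimate above.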

Variyam and his colleagues mathematically proved that the accuracy of this technique scales with the size of the memory. Hamlet has exactly 3,967 unique words. (They counted.) In experiments using a memory of 100 words, the average estimate after five runs was 3,955 words. With a memory of 1,000 words, the average improved to 3,964. “Of course,” he said, “if the [memory] is so big that it fits all the words, then we can get 100% accuracy.”

“This is a great example of how, even for very basic and well-studied problems, there are sometimes very simple but non-obvious solutions still waiting to be discovered,” said William Kuszmaul of Harvard University.


GaryBIshop: Wow! This is amazing! I love this. I have no use for it but wish I did.

Urban Renewal Ruined Everything


The destructive nature of urban renewal left the U.S. too scared to build anything at scale ever again.

“America can’t build anything,” my co-worker, a talented data scientist from China, remarked as we rode a BART train through a suburb of Oakland. She was amused by the contrast between the ultra-modern subways and dense apartments of Chengdu and the 1960s stucco houses and Apollo 11-era BART metro system of the Bay Area. The mayor of San Francisco had a similar thought on her recent trip to China, marveling at the infrastructure built within a few years. We hear it a million times in American media: our infrastructure sucks. It takes too long to build a single home in most cities, and $1.7 million to build a bathroom in San Francisco. Four stations on a mere six-mile VTA-led BART extension through mostly suburban San Jose will cost as much as $12 billion, more than double the annual war budget of Iran.

Many factors are to blame, but much of it originates with urban renewal — the aggressive bulldozing and redevelopment programs that peaked in the 1950s and 1960s. Ramped up by the Eisenhower Administration, aggressive public policy used infrastructure like freeways and property development to bulldoze low-income areas. The impacts persist to this day: the destruction of Black middle-class urban neighborhoods, vacant lots in once-thriving districts, the erasure of historic districts, the masses of jumbled freeways, and the mass migration of people out of cities.

Most stated urban renewal goals were total failures. Intended to revitalize business in the city core, most redevelopment projects failed to attract and sustain private investment in cities, with a few exceptions. Intended to displace Black populations, urban renewal instead depressed property values and kept these neighborhoods majority Black, though depopulated and poorer. Urban renewal did convince most Americans that the government was incapable of delivering public projects. It taught minorities to fear that the government would seize their land and target them. And it convinced the white middle class that government programs were mostly tax-wasting, destructive projects, fueling the tax revolt of the 1970s.

The backlash to how urban renewal was conducted primarily blamed its top-down approach to urban planning. Federal and state bureaucrats who lived far from the affected areas made radical decisions about people’s neighborhoods without their input. Post-urban renewal, major planning and consulting decisions were handed to private, tax-exempt companies, a.k.a. not-for-profits. By the 1980s and 1990s, a cottage industry of inner-city nonprofits acting as community middlemen had emerged to do that job. Private companies had already been contracted as consultants and contractors for public development during urban renewal. Now, major planning aspects of public development are directly influenced by private organizations, which aren’t inherently democratic. Rather than building up cost-effective public capacity, public funds go to private companies to manage decisions and outreach.

Accompanying this change was the ballooning of public process and veto points, often in the name of “bottom-up” planning. In my opinion, the accurate takeaway from urban renewal is not that the government lacked checks and balances in public development, as is often told. Many affluent and middle-class neighborhoods were spared from the bulldozers with very little resistance. Ugly freeways sprawled throughout Oakland yet stopped at the Berkeley border, because Berkeley’s local government listened to its wealthier homeowners and vetoed the city’s freeway project. Oakland’s City Council did not care about its mostly poorer populace and invited the freeways in with comparatively little fuss.

Urban renewal wasn’t as top-down as people say. The routing of freeways and the placement of public projects were often determined by municipalities and states, not federal officials, and they chose targets like minority neighborhoods. Rather than recognizing that representation in government gave way to the destruction of neighborhoods, a myth has emerged that there simply weren’t enough meetings. These urban renewal projects were deliberated on for years, or enshrined in local master plans decades prior — always without the explicit participation or solicitation of poor communities.

Today, publicly funded projects have an excessive number of meetings where very little, if anything, is accomplished in defense of the project, yet every meeting presents an opportunity to veto it. Nor have equitable outcomes been achieved. Though not called “urban renewal” anymore, freeway construction and widening projects continue to disproportionately ravage ethnic-minority and lower-income neighborhoods in 2024.

The public outreach process tends to heavily benefit older, retired, wealthier and home-owning residents who can go to city hall regularly. Holding more meetings doesn’t produce more equitable outcomes; rather, it gives that constituency more chances to veto a project. Working people and parents with young kids often can’t spend time at city hall, waiting for hours before commenting on projects that may benefit or harm them. Excessive hearings significantly increase the cost of all projects, because salaries must be paid to officials and consultants for every minor adjustment, in both the government and the private sector.

Another failed attempt at reform was environmental law. The biggest mistake early environmental advocacy made in the 1970s was suggesting that simply opposing development was environmentally friendly. Modern climate science understands that’s incorrect, but our laws still treat carbon-reduction projects the same as any project that increases net carbon. Bus lanes that would take cars off the road go through extremely costly, years-long environmental review. The same goes for infill housing and green energy projects, which can cost hundreds of thousands of dollars in review and permitting alone.

Environmental law also often fails to distinguish between existing uses that predate these laws and new uses meant to combat them. The most glaring example is California High-Speed Rail, which would provide tremendous carbon-reduction benefits along a crowded highway and air corridor. Yet because of the California Environmental Quality Act and the federal National Environmental Policy Act, high-speed rail has been delayed by decades of litigation, obstruction and study. Meanwhile, the crowded West Coast airline corridor, suburban sprawl and highways built before these laws came into effect cannot have the laws used against them, even though their emissions grew beyond initial projections and high-speed rail would reduce them.

Our entire approach to land use is insanely slow and produces carbon-intensive outcomes through status-quoism, all thanks to urban renewal’s trauma. In Vienna, Austria, public housing developments are influenced and customized by local community boards, but those boards cannot prohibit housing. And unlike in the U.S., where zoning dictates what you can do with your property, zoning in Japan dictates what you can’t do with it. Infrastructure can be delivered more quickly there, because land use focuses on regulating against harm rather than micro-managing all possible uses. The latter leads to cities like Half Moon Bay, California, dictating and downsizing housing for low-income farm workers who recently suffered a mass shooting, because the concerns of neighbors supersede the welfare of the public at large.

As frustrating as it is to wait 30 years for California High-Speed Rail to finish, it's the punishment we pay for never truly atoning for the harm urban renewal did, and precisely how it did it. We live in a fantasy world where any government project or development delivered efficiently must come at the material cost of communities, rather than accepting the truth: urban renewal was meant to destroy neighborhoods, not improve them. It’s hard to envision a future where we’ll have good infrastructure in the United States.

For all the gags and bewilderment in American media of the late 2000s about “Chinese ghost cities,” most of those cities are now well occupied. The Chinese government correctly predicted population growth and built cities ahead of time, though not in the most efficient manner. The United States ignored its own growing population’s needs because we can’t plan or build cities anymore, and within 10 years we went from a housing bubble to a housing shortage in which rent is the primary driver of national inflation. U.S. cities barely plan for the future anymore and don’t have any extravagant ideas about how they’ll need to grow.


GaryBIshop: A compelling argument.

NC Senate passes bill to make it illegal to publicly mask for health reasons


(Image credit: Getty | Spencer Platt)

The North Carolina State Senate on Wednesday voted 30-15, along party lines, in favor of a Republican bill that would make it illegal for people in the state to wear a mask in public for health reasons. The bill is now moving to the House, where it could potentially see changes.

The proposed ban on health-based masking is part of a larger bill otherwise aimed at increasing penalties for people wearing masks to conceal their identity while committing a crime or impeding traffic. The bill was largely spurred by recent protests against the war in Gaza on university and college campuses across the country, including North Carolina-based schools. In recent months, there have been demonstrations in Raleigh and Durham that have blocked roadways, as well as clashes on the nearby campus of the University of North Carolina at Chapel Hill. Some demonstrators were seen wearing masks at those events.

But the bill, House Bill 237, goes a step further by making it illegal to wear a mask in public for health and safety reasons, either to protect the wearer, those around them, or both. Specifically, the bill repeals a 2020 legal exemption, enacted amid the COVID-19 pandemic, which allowed for public health-based masking for the first time in decades.


GaryBIshop: In the land where republicans rule...

Creative Solution from Man Ordered by City to Build Privacy Fence


Etienne Constable of Seaside, California is a boat owner. The boat is parked on a trailer in his driveway, in plain view. The city government informed Constable that this was illegal; the boat would need to be concealed behind a six-foot visual barrier.

Constable complied and had a six-foot gate built. Then he hired a local muralist, Hanif Panni, to adorn it:


Constable is now technically in compliance with local codes.

Images of the mural went viral, and according to KSBW News, "Panni says other Seaside residents have already asked for him to create murals for their boat fences."



GaryBIshop: Ha!

Fantastic Industrial Design Student Work: "How Long Should Objects Last?"


This incredibly ambitious and thoroughly-executed project is by Charlie Humble-Thomas, done while pursuing his Masters in the Design Products program at the RCA. Called Conditional Longevity, it asks the question: "How long should objects last?"

Seeking the answer, Humble-Thomas tackles an oft-discarded object, the umbrella, and designs three variants: Recyclable, repairable, and durable. By evaluating each design and its manufacturing processes side-by-side, he wades into the all-important complexity that most manufacturers would like to avoid: The inevitable trade-offs inherent in each approach.

"I'm someone who gets a lot of joy from the histories of objects and the future of production," Humble-Thomas writes. (It's worth nothing that at the time of this project, Humble-Thomas was already an industrial designer with practical experience, having gained his Bachelor's six years previous.)

"My nostalgia around designers of the past, is matched by a curiosity in emergent materials and techniques. I've spent much of my time at the RCA investigating the industrial use of Hemp fibre & bioresins. The burning question has always been whether these solutions will solve more problems than they create. Prioritising methods which minimise harm to the ecology, create less emissions or promote biodiversity is not easy."
"My recent research has been looking into the issue of longevity and how long objects in our material world should last. I want to understand our disposable consumer culture better, but also try to define what techniques and materials suit which application. 'Conditional Longevity' is a project that maps out the gains and losses of strategies that designers use to extend or shorten the life of the physical items that furnish our lives. In equal parts, research into this longevity is both reassuring and bewildering. It proves that solutions to our problems are diverse and imperfect. Every object we bring into the world has a contextual backdrop, and every design decision is a compromise. The challenge is finding which compromises are the best to make."

Conditional Longevity

The objects we interact with daily all have a life expectancy that we often overlook.

People regularly slip into clichés about the 'right way' to make things because we have so few well-demonstrated examples.

Mapping of Objects in terms of User Bond vs Physical Longevity helped identify some objects to explore:
Editor's note: This chart is worth examining. To see a larger, more legible version, click here.
Early candidates to evaluate longevity included a stool, a modular computer and a reusable coffee cup:

Then:

Challenging the root concept of an umbrella gave some interesting new angles on how to extend or shorten longevity.

Conditional Longevity asks the question, 'How long should objects last?' through the medium of umbrellas. If we hope to move towards a truly circular economy and rethink traditional consumerism, a better understanding of the consequences of the choices designers make is crucial. Each umbrella explores a unique take on approaching longevity, and users are presented with the impact data, downsides and benefits of each. The aim is to open up the debate on which strategies in product design are truly 'best' suited to our needs.
Every day we encounter objects with different life expectancies. We innocently use, purchase or design items which, if left unprocessed, would sit on the face of the earth for 10 times the length of our lives. An example is a coffee lid: sometimes its active use is less than six seconds, but its 'actual' life may be over a thousand years.
The material, the manufacturing processes and the design of an object are just the tip of the iceberg. Our lived experience of, and relationship with, an object is of equal if not greater importance in determining how long something 'lasts'. An example would be a simple teddy bear, a soft toy that many children grow attached to as a source of comfort, a possession treated as precious for years until we mature and let go.
Understanding how attachments are made through cultural norms, but also how we can build products that are more appropriate for their task, should help us develop a healthier relationship with our material world.

Conditional Longevity: Recyclable Umbrella

A broken umbrella is often a landfilled umbrella.

From the outset, the aim was to celebrate the potential of plastic, using snap-fit connections and ribbed supports.

Left: Models helped demonstrate flexibility of PP. Right: a central hub made using additive manufacturing.

The working umbrella retains a purity & simplicity from using only undyed polymer throughout.

Components for the Recyclable Umbrella. Total weight of assembled Umbrella is 460g.

Many of the objects we use daily are made from mixed materials, which are often difficult to separate. The cost of separation can outweigh the value of the materials, so these objects are very likely to end up in landfill. Of course, mixing materials offers functional benefits, such as combinations of soft and hard structures, and nowhere is this more true than with umbrellas. The Recyclable Umbrella is a reappraisal of the potential of plastic, a material which, if properly managed, offers carbon savings and excellent recyclability compared with many organic alternatives.
After testing and research, polypropylene was chosen as the material, with its natural flex, the ability to form non-woven fabric for the canopy, and an ever-increasing recycling rate. The construction pushes the material to its limits with a clip-on canopy, snap-fit arm pivots, a stay mechanism and heat-welded canopy sections.
Being made from a pure material means recycling can happen with far fewer operations, and leaving the polymer undyed makes it much more valuable at the end of life stage. The ribbed sections across the umbrella then serve as a celebration of the material saving potential of plastic whilst also reinforcing the structure's stiffness.

Conditional Longevity: The Repairable Umbrella

An umbrella is a complex object to repair, so would pose a challenge for users.

A numbering system would simplify assembly & ordering of replacement parts.

Images showing the prototyping and assembly of the repairable umbrella:

The final umbrella with the removable canopy included:

Subassemblies of components for the Repairable Umbrella. Total weight of the assembled umbrella is 1.11kg.

The right to repair. Nostalgia for fix-it culture. Big corporations are in many cases now being forced by legislation to let users repair their products. Less glue and fewer permanent fixings, in the hope that fewer objects will end up in landfill. All of this is, in essence, a positive sentiment, but what does it mean in practice for umbrellas?
Umbrellas are often riveted, press-fitted or bonded together, and components are rarely off the shelf. This means that when they break, repair becomes difficult and landfill becomes much more likely. By using familiar components at reasonable sizes, and making sure every part is replaceable, the aim was to create a truly 'Lego'-like umbrella that could be assembled by the user and fixed whenever needed.
Repairability also shifts responsibility for maintaining products onto the user rather than brands. The mixing of components also creates issues, as plastics, fabrics and metal fasteners would need to be mechanically separated for recycling. These non-permanent fixtures create their own complexity in the production and delivery of products. And assembled products naturally contain weak points compared with those permanently fixed together.

Conditional Longevity: The Durable Umbrella

Made-to-last objects are often energy- and resource-hungry in production, and the Durable Umbrella is designed to reflect this.

The construction is purposefully over specified for its function, just like many of the products we know and love.

Functional prototypes of the mechanism & the handle:

Scan the QR code to see how the system would work.

The canopy structure is supported by carbon fibre struts which are highly energy intensive yet very strong and light.

CNC milling was used to create the stainless steel parts. Total weight of assembled umbrella: 1.71kg.

Long-lasting products, or lifetime guarantees, are often hailed as the key to reducing waste and saving energy in the long run. However, the assumption that 'less but better' is a superior approach to product design is rarely evaluated in practice.
The third umbrella in the series takes durability to an almost cartoon-like level, using ultra-high-performance materials such as carbon fibre and stainless steel. The character of the object is intended to emulate military-grade construction for civilian use, a trend that appears to be growing in our material culture.
With parts CNC-milled and fixed or bonded together, and extra reinforcement given to weak areas identified through testing, the umbrella should in theory last a lifetime. To encourage longer-term care, the owner's surname is engraved onto the handle alongside a QR code that can help trace the umbrella if it is lost.

Conditional Longevity: The Conversation

Visualising the impact data of each umbrella.

Looking at 'number of uses' and comparing how many times one must be used to equal the other.

Draft of a data & description based 'Impact summaries' for objects.

Surveying people's beliefs around longevity highlighted contradictions between behaviours and environmental beliefs.

This project looks to start a richer conversation about our problematic relationship with the material world and the objects we own. It's about challenging our preconceptions and unpicking our beliefs about what is 'right' or 'wrong'. At the moment we have a limited set of objects through which we evaluate materials, for example the shopping bag debate: despite disposable plastic bags being demonised recently, on closer inspection the cotton bag alternative has to be used dozens if not hundreds of times to be worthwhile in terms of carbon emissions. Energy and carbon are also just one element of this story, with waste and recycling adding more layers of complexity.
The data for each umbrella tells us the weight, energy use and carbon emissions, allowing a clear-cut comparison between the options. The practical impact of each umbrella is then decided by how many uses it gets: if you used the Recyclable umbrella four times, you would need to use the Durable umbrella twenty-five times for the energy needed to produce each to even out.
The most profound conclusion this project has reached is that we must begin to assess how proportionate the objects we use are to their intended use. Their impact can be expressed as the ratio below:

Object Impact Profile = (resources and energy consumed to produce, use and recycle an object in its lifetime [benchmarked against an average of competitor products on the market]) divided by (useful life: the projected number of uses and likelihood of loss, reuse and recycling [benchmarked against an average of competitor products on the market])
I was very kindly supported in my final project by the Robin & Lucienne Day Foundation, to whom I am extremely grateful. Their ongoing support of students at the RCA and beyond is an asset to the industry.

This project was completed in 2021. Today Humble-Thomas, who is based in London, works as a freelance industrial designer.




GaryBIshop: Amazing student project.

The world's loudest Lisp program to the rescue


It is interesting that, while I think of myself as a generalist developer, the vast majority of my career has been spent in embedded and systems programming. I’m firmly a Common Lisp guy at heart, but the embedded tech landscape is the entrenched realm of C, sprinkled with some C++ and nowadays Rust. However, I had the incredible fortune to work for the last few years on a substantial embedded system project in Common Lisp.

The story starts in Western Norway, the world capital of tunnels, with over 650 located in the area. Tunnels are equipped and maintained to a high standard and accidents are infrequent, but given the sheer numbers, serious ones do happen. The worst of these are naturally fires, which are notoriously dangerous. Consider that many of the single-bore tunnels are over 5 km long (and up to 24 km). Some of them are undersea tunnels in the fjords, with inclinations of up to 10 degrees. There are no automatic firefighting facilities: these are costly both to install and to maintain, and while they might work in a country with one or two tunnels total, they simply do not scale up. Hence the policy follows the self-evacuation principle: you are on your own to help yourself and others egress, hopefully managing to follow the signage and lights before the smoke sets in, and praying the extractor fans do their job.

Aftermath of a fire

So far Norway has been spared mass-casualty tunnel fires, but there have been multiple close calls. One particularly unlucky tunnel, the 11.5 km long Gudvangatunnelen, experienced three major fires in the span of a few years. Thus the national Road Administration put forth a challenge to develop a system to augment self-assisted evacuation. Norphonic, my employer, won a competition of nine contenders on the merits of our pre-existing R&D work. In late 2019 the project officially started, and despite the setbacks of the pandemic it concluded in 2021 with series production of the system now known as Evacsound. The whole development on this project was done by a lean team of:

  • software engineer who could also do some mechanical design and basic electronics
  • electrical engineer who could also code
  • two project engineers, dealing with product feasibility w.r.t. regulation and practices, taking care of SCADA integration and countless practicalities of automation systems for tunnels
  • project coordinator who communicated the changes, requirements and arranged tests with the Road Administration and our subcontractors
  • logistics specialist ensuring the flow of scores of shipments back and forth on the peak of pandemic

Live hacking: Wesley, our EE, patching up a prototype

On top of this we also hired some brilliant MEs and EEs as contractors. In addition, two of Norway’s leading research institutes handled the science of validating the psychoacoustics and simulating fire detection.

At this point the system is already installed, or is being installed, in 6 tunnels in Norway, with another 8 tunnels totalling some 29 km on order. We certainly do need to step up our international marketing efforts, though.

In the tunnels

How do you approach a problem like this? The only thing that can be improved under self-evacuation is the flow of information to people in an emergency. That leaves us with eyesight and hearing to work with. Visual aids are greatly more flexible and easier to control. However, their huge drawback is that their usefulness expires quickly once the smoke sets in.

Sound is more persistent, although there are numerous challenges to using it in the tunnels:

  • The background noise from smoke extraction fans can be very high, and if you go for speech, the threshold for intelligibility has to be at least 10 dB over the noise floor
  • Public announcement messages alone are not very efficient. They are great in the early phase of a fire to give a heads-up to evacuate, but are kind of useless once visibility is limited. At that point you also already know you are in trouble.
  • Speech announcements rely on comprehension of the language. In one of Gudvangatunnelen fires a bus full of foreign tourists who spoke neither English nor Norwegian had been caught in the thick of it. Fortunately a local lorry driver stopped by to collect them.
  • Acoustic environment in tunnels ranges from poor to terrible. Echo of 4-5 seconds in mid-range frequencies is rather typical.

In addition to the above, the system still had to provide visual cues and allow for distributed temperature sensing for fire detection. It also has to withstand pressure washing along the tunnel wall, necessitating IP69 approval. On a tangent, IPx9 means a 100 bar, 80 °C water jet at 18 cm distance for 3 minutes, so Evacsound is one of the most water-protected speaker systems in the world.

We decided to start our design from the psychoacoustics end and see where the rest would fall. The primary idea was to evacuate people by aiding them with directional sound signals that propagate towards the exits. The mechanism was worked out together with the SINTEF research institute, which conducted live trials on the general population. The method was found effective, with over 90% of test participants finding the way out based on directional sound aids alone. A combination of sound-effect distance requirements and technical restrictions in the tunnel led us to devices installed at 3 m height along the wall at 25 m intervals. That was just as well, since it both allowed acoustic energy to be applied in the least wasteful, low-reverberation manner and provided sensible intervals for radiated heat detection.

Node dissected

A typical installation is a few dozen to several hundred nodes in a single tunnel. Which brings us to the headline: we have projects that easily amount to tens of kilowatts of acoustic power in operation, all orchestrated by Lisp code.

The hardware took nearly 20 design iterations until we reached what I would immodestly call the Platonic design for the problem. We were fortunate to have both mechanical and electronic design expertise from our other products, which allowed us to iterate at an incredible pace. Our software stack settled on Yocto Linux and Common Lisp. Why CL? That’s what I started our earliest design studies with. Deadlines were tight, requirements were fluid, the team was small, and I can move in Common Lisp really, really fast. I like to think that I am also a competent C programmer, but it was clear that doing it in C would be many times the effort. And with native compilation there’s no performance handicap to speak of, so it is hard to justify a rewrite later.

Design iterations

Our primary CL implementation is LispWorks. There are some practical reasons for that.

  • Its tree shaker is really good. This allows our binaries to run on a system with 128 MB of RAM with room to spare, which at the scale of thousands of devices manufactured helps keep costs down.
  • It officially supports ARM32 with POSIX threads, something only it and CCL did at the time.
  • The garbage collector is very tunable.
  • There is commercial support available, with the implementors within earshot. Not that we ended up using it much, but the thought is soothing.

We do, however, use CCL liberally in development, and we employ SBCL/x86 in the test matrix. Testing across the three implementations has turned up a few quirks on occasion.

At its heart Evacsound is a soft real-time, distributed system where a central stages time-synchronized operation across hundreds of nodes. Its problem domain and operational circumstances add some constraints:

  1. The system shares comms infrastructure with other industrial equipment, even though it is on its own VLAN. The network virtualization abstraction breaks down in real-time operation: the product has to tolerate load spikes and service degradation caused by other equipment, yet be mindful of the network traffic it generates itself.
  2. The operations are completely unmanned. There are no SREs; nobody’s on pager duty for the system. After commissioning there’s typically no network access to the site for vendors anyway. The thing has to sit there on its own and quietly do its job for the next couple of decades until the scheduled tunnel renovation.
  3. We have experience designing no-nonsense hardware that lasts: this is how we have repeat business with Siemens, GE and other big players. But with the sheer scale of installation you can count on devices going dark over the years. There will be hardware faults, accidents and possibly battle attrition from fires. Evacsound has to remain operational despite the damage, allow for redundant centrals and ensure zero-configuration maintenance and replacement of the nodes.

The first point channeled us toward using pre-uploaded audio rather than live streaming. This uses the network much more efficiently and helps eliminate most synchronization issues. Remember that sound has to be timed accounting for propagation distances between the nodes, and 10 milliseconds of jitter gives you over 3 meters of deviation (sound covers roughly 3.4 meters in that time). This may sound acceptable, but a STIPA measurement will have no mercy. The command and control structure should also be flexible enough to execute elaborate plans involving sound and lighting effects, yet tolerate the inevitable misfortunes of real life.

The system makes heavy use of CLOS, with a smattering of macros in places where they make a difference. Naturally there are a lot of moving parts in the product. We are not going into the details of the SCADA interfacing, power and resource scheduling, fire detection, self-calibration and node replacement subsystems. The system also has a distinct PA mode and two-way speech communication using a node as a giant speakerphone; these two add a bit of complexity as well. Instead we are going to have an overview of the bits that make reliable distributed operation possible.

Test of fire detection

Processes

The first step in establishing a reliability baseline was to come up with an abstraction for isolated tasks, to be used both on the central and on the nodes. We built it on top of a thread pool, layering on top of it an execution abstraction with start, stop and fault handlers. These tie in to a watchdog monitor process with straightforward decision logic. An Evacsound entity runs a service registry where a service instance looks along these lines:

(register-service site
		  (make-instance 'avc-process :service-tag :avc
				 :closure 'avc-execution
				 :suspend-action 'avc-suspend
				 :resume-action 'avc-resume
				 :process-name "Automatic Volume Control"))

…and the methods that would be able to spin, quit, pause or resume the process based on its service-tag. This helps us ensure that we don’t ever end up with a backtrace or with an essential process quietly knocked out.
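Purely as an illustration of the watchdog idea, the monitoring side could be as simple as the loop below. Only REGISTER-SERVICE and the :service-tag initarg appear in the article; REGISTERED-SERVICES, SERVICE-THREAD, RESTART-SERVICE and the use of bordeaux-threads are assumptions made for the sketch.

;; Hypothetical watchdog loop: restart any registered service whose worker
;; thread is no longer alive. Accessor and function names are assumed.
(defun watchdog-loop (site)
  (loop
    (dolist (service (registered-services site))
      (unless (and (service-thread service)
                   (bt:thread-alive-p (service-thread service)))
        (log-info "Restarting service ~a" (service-tag service))
        (restart-service site (service-tag service))))
    (sleep 5)))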

Plans

To perform its function, Evacsound has to plan centrally and execute elaborate tasks in a distributed fashion. People often argue about what a DSL really is (and whether it really has to have macros), but in our book, if it is special-purpose, composable and abstracted from implementation details, it is one. Our planner is one example. We can create time-distributed plans in the abstract, we can actualize abstract plans with a specific base time for operations, and we can segment, concatenate and re-normalize plans in various ways. For instance, below is a glimpse of an abstract evacuation plan generated by the system:

(plan-modulo
 (normalize-plan
  (append (generate-plan (left accident-node)
			 :selector #'select-plain-nodes
			 :time-shift shift
			 :direction :left
			 :orientation :opposite)
 	  (generate-plan (right accident-node)
			 :selector #'select-plain-nodes
			 :time-shift shift
			 :direction :right
			 :orientation :opposite)))
 (* 10 +evacuation-effect-duration+))

We can see above that two plans, one for each evacuation direction, are concatenated and then re-normalized in time. The resulting plan is then modulo-adjusted in time to run in parallel subdivisions of the specified duration.

Generated plans are sets of node ID, effect direction and time delta tuples. They do not yet have commands and absolute times associated with them; that is the job of ACTUALIZE-PLAN.
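As a rough functional sketch of that step (the real ACTUALIZE-PLAN, used later in the article, has a different calling convention), actualization just turns each (uid direction delta) tuple into a command object with timing attached:

;; Simplified illustration only; the argument conventions are assumptions.
(defun actualize-plan-sketch (plan command-class base-time &rest initargs)
  (loop for (uid direction delta) in plan
        collect (apply #'make-instance command-class
                       :uid uid
                       :direction direction
                       :base-time base-time
                       :delta delta
                       initargs)))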

Command Language

The central and the nodes communicate in terms of CLOS instances of the classes comprising the command language. In the simplest cases these have just the slots needed to pass values for commands that are executed immediately. With the appropriate mixin, however, they can inherit the properties necessary for precision timing control, allowing the commands to be executed in a time-synchronized manner across sets of nodes in plans.
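A hypothetical sketch of what such classes could look like follows. The mixin name and slot layout are assumptions; FUSE-MEDIA-FILE with its :media-name and :sample-rate initargs, and the BASE-TIME, DELTA and TIME-TO-COMPLETE properties, all appear elsewhere in the article.

;; Illustrative only: the mixin groups the timing-related slots so that any
;; command class can opt in to synchronized execution.
(defclass timed-command-mixin ()
  ((base-time :initarg :base-time :accessor base-time)
   (delta :initarg :delta :accessor delta :initform 0)
   (time-to-complete :initarg :time-to-complete :accessor time-to-complete
                     :initform 0)))

;; An immediate command would omit the mixin; a synchronized one inherits it.
(defclass fuse-media-file (timed-command-mixin)
  ((media-name :initarg :media-name :accessor media-name)
   (sample-rate :initarg :sample-rate :accessor sample-rate)))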

It is established wisdom now that multiple inheritance is an anti-pattern, not worth the headache in the long run. However, Evacsound makes extensive use of it, and over the years it has worked out just fine. I’m not quite sure what the mechanism is that makes it click. Whether it’s because CLOS doesn’t suffer from the diamond problem, or because objects are typically handled through multiple-dispatch methods, or something else, it really is a non-issue and is a much better abstraction mechanism than composition.

Communication

The next essential task is communication. Depending on the plan we may communicate with all nodes or a subset, in a particular sequence or simultaneously, synchronously or asynchronously, with or without the expectation of reported results. For instance, we may want a noise estimate from the microphones for volume control, and that needs to be done for all nodes at once while expecting a result set of reports. A PA message has to be played synchronized, but the result does not really matter. Or a temperature-change notice may arrive unprompted, to be considered by the fire detection algorithm.

This diverse but restricted set of patterns wasn’t particularly well served by existing frameworks and libraries, so we rolled our own on top of a socket library, POSIX threads and condition variables. Our small DSL has two basic constructs, the asynchronous communicate> for outgoing commands and communicate< for expecting the result set, which can be composed into a single operation, communicate. The system can generate a distributed command such as

(communicate (actualize-plan
	      (evacuation-prelude-plan s)
	      'fuse-media-file
	      (:base-time (+ (get-nanosecond-time) #.(2ns 1.8)))
	      :sample-rate 32000
	      :media-name "prelude"))

What happens here is that the previously generated plan is actualized with a FUSE-MEDIA-FILE command for every entry. That command inherits several timing properties:

  • absolute BASE-TIME set here explicitly
  • DELTA offset which is set from the plan’s pre-calculated time deltas
  • TIME-TO-COMPLETE (implicit here) which specifies expected command duration and is used to calculate composite timeout value for COMMUNICATE

If a network failure occurs, a reply from a node times out, or a node reports a malfunction, a corresponding condition is signaled. This mechanism allows us to partition distributed networked operation failures into cases conveniently guarded by HANDLER-BIND wrappers. For instance, a macro that just logs the faults and continues the operation can be defined simply as:

(defmacro with-guarded-distributed-operation (&body body)
  `(handler-bind ((distributed-operation-failure
                   #'(lambda (c)
                       (log-info "Distributed operation issue with condition ~a on ~d node~:p"
                                 (condition-name c) (failure-count c))
                       (invoke-restart 'communicate-recover)))
                  (edge-offline
                   #'(lambda (c)
                       (log-info "Failed to command node ~a" (uid c))
                       (invoke-restart 'communicate-send-recover))))
     ,@body))

This wrapper would guard both send and receive communication errors, using the restarts to proceed once the event is logged.
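Putting the pieces together, a guarded version of the earlier distributed call looks like this; both snippets are taken from the article, only their combination is illustrative.

;; The FUSE-MEDIA-FILE plan from before, wrapped so that node failures are
;; logged and skipped via the restarts instead of aborting the whole plan.
(with-guarded-distributed-operation
  (communicate (actualize-plan
                (evacuation-prelude-plan s)
                'fuse-media-file
                (:base-time (+ (get-nanosecond-time) #.(2ns 1.8)))
                :sample-rate 32000
                :media-name "prelude")))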

So the bird’s eye view is,

  • we generate the plans using comprehensible, composable, pragmatic constructs
  • we communicate in terms of objects naturally mapped from the problem domain
  • the communication is abstracted away into pseudo-transactional sets of distributed operations with error handling

Altogether it combines into a robust distributed system that is able to thrive in the wilds of the industrial automation jungle.

TL;DR Helping people escape tunnel fires with Lisp and funny sounds


GaryBIshop: Wow! Amazing system.