
Airbags, and How Mercedes-Benz Hacked Your Hearing


Airbags are an incredibly important piece of automotive safety gear. They’re also terrifying—given that they’re effectively small pyrotechnic devices aimed directly at your face and chest. Myths have persisted that they “kill more people than they save,” thanks in part to a hilarious episode of The Simpsons. Despite this, they’re credited with saving tens of thousands of lives over the years by cushioning fleshy human bodies from heavy impacts and harsh decelerations.

While an airbag is generally there to help you, it can also hurt you in regular operation. The immense sound pressure generated when an airbag fires is not exactly friendly to your ears. However, engineers at Mercedes-Benz have found a neat workaround to protect your hearing from the explosive report of these safety devices. It’s a nifty hack that takes advantage of an existing feature of the human body. Let’s explore how airbags work, why they’re so darn loud, and how that can be mitigated in the event of a crash.

A Lot Of Hot Air

The first patent for an airbag safety device was filed over 100 years ago, intended for use in aircraft. Credit: US Patent Office

Once an obscure feature found only in luxury vehicles, airbags became common safety equipment in many cars and trucks by the mid-1990s. A particular turning point came in late 1998, when they became mandatory in vehicles sold in the US market, which helped make them near-universal equipment in many other markets worldwide. Despite their relatively recent mainstream acceptance, the concept of the airbag actually dates back a lot farther.

The basic invention of the airbag is typically credited to two English dentists—Harold Round and Arthur Parrott—who submitted a patent for the concept all the way back in 1919. The patent described an air cushion intended to protect occupants of aircraft during serious impacts. Specific attention was given to the fact that the air cushion should “yield readily without developing the power to rebound,” which could cause further injury. This was achieved by giving the device air outlet passages that would vent as a person hit it, allowing the cushion to absorb the impact gently while reducing the chance of injury.

The concept only became applicable to automobiles later, when Walter Linderer filed for a German patent in 1951 and John W. Hetrick filed for a US patent in 1952. Both engineers devised airbags based on the release of compressed air, triggered either by human intervention or by automated mechanical means. These concepts ultimately proved impractical, as compressed air could not be released quickly enough to inflate an airbag in time to be protective in an automobile crash.

It was only later, in the 1960s, that workable versions using explosive or pyrotechnic inflation came to the fore. The concept was simple—use a chemical reaction to generate a great deal of gas near-instantaneously, inflating the airbag fractions of a second before vehicle occupants come into contact with the device. The airbags are fitted with vents that only allow the gas to escape slowly. This means that as a person hits the airbag, they are gently decelerated as their impact pushes the gas out through the restrictive vents. This helps reduce injuries that would typically be incurred if the occupants instead hit interior parts of the car without any protection at all.

In a crash, it’s much nicer to faceplant into an air-filled pillow than a hard, unforgiving dashboard. Credit: DaimlerChrysler AG, CC BY SA 3.0

The Big Bang

The use of pyrotechnic gas generators to inflate airbags was the leap forward that made airbags practical and effective for use in automobiles. However, as you might imagine, releasing a massive burst of gas in under 50 milliseconds does create a rather large pressure wave—which we experience as an incredibly loud sound. If you’ve ever seen airbags detonated outside of a vehicle, you’ve probably noticed they sound rather akin to fireworks or a gun going off. Indeed, the sound of an airbag can exceed 160 decibels (dB)—more than enough to cause instant damage to the ear. The noise generated in a vehicle impact is often incredibly loud too, of course. None of this is great for the occupants of the vehicle, particularly their hearing. Ultimately, an airbag deployment is a carefully considered trade-off—the general consensus is that impact protection in a serious crash is preferable, even if your ears are worse for wear afterwards.

However, there is a technique that can mitigate this problem. In particular, Mercedes-Benz developed a system to protect the hearing of vehicle occupants in the event that the airbags are fired. The trick is in using the body’s own reactions to sound to reduce damage to the ear from excessive sound pressure levels.

In humans, the stapedius muscle can be triggered reflexively to protect the ear from excess sound levels, though the mechanism is slow enough that it can’t respond well to sudden loud impulses. However, pre-emptively triggering it before a loud event can be very useful. Credit: Mercedes Benz

The stapedius reflex (also known as the acoustic reflex) is one of the body’s involuntary, near-instantaneous responses to an external stimulus—in this case, certain sound levels. When a sufficiently loud sound reaches either ear, muscles inside both ears contract—most notably, in humans, the stapedius muscle. When the muscle contracts, it stiffens the ossicular chain—the three tiny bones that connect the eardrum to the cochlea in the inner ear. Under this condition, less vibrational energy is transferred, reducing damage to the cochlea from excessive sound levels.

The threshold at which the reflex is triggered is usually 10 to 20 dB lower than the point at which the individual feels discomfort; typical levels are from around 70 to 100 dB. When triggered by particularly loud sounds of 20 dB above the trigger threshold, the muscle contraction is enough to reduce the sound level at the cochlea by a full 15 dB. Notably, the reflex is also triggered by vocalization—reducing transmission through to the inner ear when one begins to speak.
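Since decibels are logarithmic, that seemingly modest 15 dB reduction corresponds to a large drop in acoustic energy. A quick back-of-the-envelope calculation in Python (purely illustrative arithmetic, using the 15 dB figure quoted above):

# A drop of X dB cuts acoustic power by a factor of 10^(X/10).
attenuation_db = 15.0
power_ratio = 10 ** (attenuation_db / 10)
print(f"A 15 dB cut is roughly a {power_ratio:.0f}x drop in acoustic power, "
      f"i.e. about {100 * (1 - 1 / power_ratio):.0f}% less energy reaching the cochlea.")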

Mercedes-Benz engineers realized that the stapedius reflex could be pre-emptively triggered ahead of firing the airbags, in order to provide a protective effect for the ears. To this end, the company developed the PRE-SAFE Sound system. When the vehicle’s airbag control unit detects a collision, it triggers the vehicle’s sound system to play a short-duration pink noise signal at a level of 80 dB. This is intended to be loud enough to trigger the stapedius reflex without itself doing damage to the ears. Typically, it takes higher sound levels, closer to 100 dB, to reliably trigger the reflex across a wide range of people, but Mercedes-Benz engineers found that the broad frequency content of pink noise enables the reflex to be switched on at a much lower, and safer, sound level. With the reflex engaged, when the airbags do fire a fraction of a second later, less energy from the intense pressure spike is transferred to the inner ear, protecting the delicate structures that provide the sense of hearing.
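For the curious, pink noise is simply noise whose power falls off at roughly 3 dB per octave, spreading its energy evenly across the octaves of human hearing. Below is a minimal sketch of generating a short pink-noise burst by shaping white noise in the frequency domain. This is a generic illustration in Python with NumPy, not Mercedes-Benz’s actual signal; the sample rate and 80 ms duration are assumed values chosen only for demonstration.

import numpy as np

fs = 48_000                 # assumed sample rate in Hz
duration_s = 0.08           # assumed burst length, for illustration only
n = int(fs * duration_s)

# Shape white noise into pink noise: scale each FFT bin's amplitude by
# 1/sqrt(f) so that power falls off as 1/f (about -3 dB per octave).
white = np.random.randn(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])    # leave the DC bin untouched
pink = np.fft.irfft(spectrum * scale, n)
pink /= np.max(np.abs(pink))            # normalise; playback gain sets the dB level

Played back at around 80 dB, a broadband burst like this covers enough of the audible spectrum to nudge the reflex into action without itself being harmful, which is the essence of the trick described above.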

Mercedes-Benz first released the technology in production models almost a decade ago.

The stapedius reflex does have some limitations. It can be triggered with a latency of just 10 milliseconds, but it can take up to 100 milliseconds for the muscle in the ear to reach full tension and confer the full protective effect. This limits the reflex’s ability to protect against short, intense noises. Because the Mercedes-Benz system triggers the sound before airbag inflation where possible, though, the muscle can engage before the peak sound level is reached. The protective effect of the stapedius reflex also lasts only a few seconds, as the muscle contraction cannot be maintained beyond that point. In a vehicle impact scenario, however, the airbags typically all fire very quickly, usually well within a second, which negates this issue.

Mercedes-Benz was working on the technology from at least the early 2010s, having run human trials triggering the stapedius reflex with pink noise in 2011. It deployed the technology on its production vehicles almost a decade ago, first offering PRE-SAFE Sound on E-Class models for the 2017 model year. Despite the simple nature of the technology, few, if any, other automakers have publicly reported implementing the technique.

Car crashes are, thankfully, rather rare. Few of us are actually in an automobile accident in any given year, and fewer still in one serious enough to cause an airbag deployment. However, if you are unlucky enough to be in a severe collision, and you’re riding in a modern Mercedes-Benz, your ears will likely thank you for the added protection, just as your body will be grateful for the cushioning of the airbags themselves.

GaryBIshop · 8 days ago:
Great idea!

Fire destroys S. Korean government's cloud storage system, no backups available

Officials move a burnt battery at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]


 
A fire at the National Information Resources Service (NIRS)'s Daejeon headquarters destroyed the government’s G-Drive cloud storage system, erasing work files saved individually by some 750,000 civil servants, the Ministry of the Interior and Safety said Wednesday.
 
The fire broke out in the server room on the fifth floor of the center, damaging 96 information systems designated as critical to central government operations, including the G-Drive platform. The G-Drive has been in use since 2018, requiring government officials to store all work documents in the cloud instead of on personal computers. It provided around 30 gigabytes of storage per person.
 


 
However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.
 
The scale of damage varies by agency. The Ministry of Personnel Management, which had mandated that all documents be stored exclusively on G-Drive, was hit hardest. The Office for Government Policy Coordination, which used the platform less extensively, suffered comparatively less damage.
 
The Personnel Ministry stated that all departments are expected to experience work disruptions. It is currently working to recover alternative data using any files saved locally on personal computers within the past month, along with emails, official documents and printed records.
 
A firefighter cools down burnt batteries at the National Information Resources Service (NIRS) in Daejeon on Sept. 27. [YONHAP]


 
The Interior Ministry noted that official documents created through formal reporting or approval processes were also stored in the government’s Onnara system and may be recoverable once that system is restored.
 
“Final reports and official records submitted to the government are also stored in Onnara, so this is not a total loss,” said a director of public services at the Interior Ministry.
 
The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups. This vulnerability ultimately left it unprotected.
 
Criticism continues to build regarding the government's data management protocols.

This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.
BY JEONG JAE-HONG [[email protected]]


GaryBIshop · 8 days ago:
Ouch! No backups! Who thought that was a good idea?

Digital Threat Modeling Under Authoritarianism


Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.

In security, threat modeling is the process of determining what security measures make sense in your particular situation. It’s a way to think about potential risks, possible defenses, and the costs of both. It’s how experts avoid being distracted by irrelevant risks or overburdened by undue costs.

We threat model all the time. We might decide to walk down one street instead of another, or use an internet VPN when browsing dubious sites. Perhaps we understand the risks in detail, but more likely we are relying on intuition or some trusted authority. But in the U.S. and elsewhere, the average person’s threat model is changing—specifically involving how we protect our personal information. Previously, most concern centered on corporate surveillance: companies like Google and Facebook engaging in digital surveillance to maximize their profit. Increasingly, however, many people are worried about government surveillance and how the government could weaponize personal data.

Since the beginning of this year, the Trump administration’s actions in this area have raised alarm bells: The Department of Government Efficiency (DOGE) took data from federal agencies, Palantir combined disparate streams of government data into a single system, and Immigration and Customs Enforcement (ICE) used social media posts as a reason to deny someone entry into the U.S.

These threats, and others posed by a techno-authoritarian regime, are vastly different from those presented by a corporate monopolistic regime—and different yet again in a society where both are working together. Contending with these new threats requires a different approach to personal digital devices, cloud services, social media, and data in general.

What Data Does the Government Already Have?

For years, most public attention has centered on the risks of tech companies gathering behavioral data. This is an enormous amount of data, generally used to predict and influence consumers’ future behavior—rather than as a means of uncovering our past. Although commercial data is highly intimate—such as knowledge of your precise location over the course of a year, or the contents of every Facebook post you have ever created—it’s not the same thing as tax returns, police records, unemployment insurance applications, or medical history.

The U.S. government holds extensive data about everyone living inside its borders, some of it very sensitive—and there’s not much that can be done about it. This information consists largely of facts that people are legally obligated to tell the government. The IRS has a lot of very sensitive data about personal finances. The Treasury Department has data about any money received from the government. The Office of Personnel Management has an enormous amount of detailed information about government employees—including the very personal form required to get a security clearance. The Census Bureau possesses vast data about everyone living in the U.S., including, for example, a database of real estate ownership in the country. The Department of Defense and the Bureau of Veterans Affairs have data about present and former members of the military, the Department of Homeland Security has travel information, and various agencies possess health records. And so on.

It is safe to assume that the government has—or will soon have—access to all of this government data. This sounds like a tautology, but in the past, the U.S. government largely followed the many laws limiting how those databases were used, especially regarding how they were shared, combined, and correlated. Under the second Trump administration, this no longer seems to be the case.

Augmenting Government Data with Corporate Data

The mechanisms of corporate surveillance haven’t gone away. Computing technology is constantly spying on its users—and that data is being used to influence us. Companies like Google and Meta are vast surveillance machines, and they use that data to fuel advertising. A smartphone is a portable surveillance device, constantly recording things like location and communication. Cars, and many other Internet of Things devices, do the same. Credit card companies, health insurers, internet retailers, and social media sites all have detailed data about you—and there is a vast industry that buys and sells this intimate data.

This isn’t news. What’s different in a techno-authoritarian regime is that this data is also shared with the government, either as a paid service or as demanded by local law. Amazon shares Ring doorbell data with the police. Flock, a company that collects license plate data from cars around the country, shares data with the police as well. And just as Chinese corporations share user data with the government and companies like Verizon shared calling records with the National Security Agency (NSA) after the Sept. 11 terrorist attacks, an authoritarian government will use this data as well.

Personal Targeting Using Data

The government has vast capabilities for targeted surveillance, both technically and legally. If a high-level figure is targeted by name, it is almost certain that the government can access their data. The government will use its investigatory powers to the fullest: It will go through government data, remotely hack phones and computers, spy on communications, and raid a home. It will compel third parties, like banks, cell providers, email providers, cloud storage services, and social media companies, to turn over data. To the extent those companies keep backups, the government will even be able to obtain deleted data.

This data can be used for prosecution—possibly selectively. This has been made evident in recent weeks, as the Trump administration personally targeted perceived enemies for “mortgage fraud.” This was a clear example of weaponization of data. Given all the data the government requires people to divulge, there will be something there to prosecute.

Although alarming, this sort of targeted attack doesn’t scale. As vast as the government’s information is and as powerful as its capabilities are, they are not infinite. They can be deployed against only a limited number of people. And most people will never be that high on the priorities list.

The Risks of Mass Surveillance

Mass surveillance is surveillance without specific targets. For most people, this is where the primary risks lie. Even if we’re not targeted by name, personal data could raise red flags, drawing unwanted scrutiny.

The risks here are twofold. First, mass surveillance could be used to single out people to harass or arrest: when they cross the border, show up at immigration hearings, attend a protest, are stopped by the police for speeding, or just as they’re living their normal lives. Second, mass surveillance could be used to threaten or blackmail. In the first case, the government is using that database to find a plausible excuse for its actions. In the second, it is looking for an actual infraction that it could selectively prosecute—or not.

Mitigating these risks is difficult, because it would require not interacting with either the government or corporations in everyday life—and living in the woods without any electronics isn’t realistic for most of us. Additionally, this strategy protects only future information; it does nothing to protect the information generated in the past. That said, going back and scrubbing social media accounts and cloud storage does have some value. Whether it’s right for you depends on your personal situation.

Opportunistic Use of Data

Beyond data given to third parties—either corporations or the government—there is also data users keep in their possession. This data may be stored on personal devices such as computers and phones or, more likely today, in some cloud service accessible from those devices. Here, the risks are different: Some authority could confiscate your device and look through it.

This is not just speculative. There are many stories of ICE agents examining people’s phones and computers when they attempt to enter the U.S.: their emails, contact lists, documents, photos, browser history, and social media posts.

There are several different defenses you can deploy, presented from least to most extreme. First, you can scrub devices of potentially incriminating information, either as a matter of course or before entering a higher-risk situation. Second, you could consider deleting—even temporarily—social media and other apps so that someone with access to a device doesn’t get access to those accounts. This includes your contacts list: if a phone is swept up in a government raid, your contacts become their next targets.

Third, you could choose not to carry your device with you at all, opting instead for a burner phone without contacts, email access, and accounts, or go electronics-free entirely. This may sound extreme—and getting it right is hard—but I know many people today who have stripped-down computers and sanitized phones for international travel. At the same time, there are also stories of people being denied entry to the U.S. because they are carrying what is obviously a burner phone—or no phone at all.

Encryption Isn’t a Magic Bullet—But Use It Anyway

Encryption protects your data while it’s not being used, and your devices when they’re turned off. This doesn’t help if a border agent forces you to turn on your phone and computer. And it doesn’t protect metadata, which needs to be unencrypted for the system to function. This metadata can be extremely valuable. For example, Signal, WhatsApp, and iMessage all encrypt the contents of your text messages—the data—but information about who you are texting and when must remain unencrypted.

Also, if the NSA wants access to someone’s phone, it can get it. Encryption is no help against that sort of sophisticated targeted attack. But, again, most of us aren’t that important and even the NSA can target only so many people. What encryption safeguards against is mass surveillance.

I recommend Signal for text messages above all other apps. But if you are in a country where having Signal on a device is in itself incriminating, then use WhatsApp. Signal is better, but everyone has WhatsApp installed on their phones, so it doesn’t raise the same suspicion. Also, it’s a no-brainer to turn on your computer’s built-in encryption: BitLocker for Windows and FileVault for Macs.

On the subject of data and metadata, it’s worth noting that data poisoning doesn’t help nearly as much as you might think. That is, it doesn’t do much good to add hundreds of random strangers to an address book or bogus internet searches to a browser history to hide the real ones. Modern analysis tools can see through all of that.

Shifting Risks of Decentralization

This notion of individual targeting, and the inability of the government to do it at scale, starts to fail as the authoritarian system becomes more decentralized. After all, if repression comes from the top, it affects only senior government officials and people whom those in power personally dislike. If it comes from the bottom, it affects everybody. Decentralized repression looks much like the events playing out with ICE harassing, detaining, and disappearing people—everyone has to fear it.

This can go much further. Imagine there is a government official assigned to your neighborhood, or your block, or your apartment building. It’s worth that person’s time to scrutinize everybody’s social media posts, email, and chat logs. For anyone in that situation, limiting what you do online is the only defense.

Being Innocent Won’t Protect You

This is vital to understand. Surveillance systems and sorting algorithms make mistakes. This is apparent in the fact that we are routinely served advertisements for products that don’t interest us at all. Those mistakes are relatively harmless—who cares about a poorly targeted ad?—but a similar mistake at an immigration hearing can get someone deported.

An authoritarian government doesn’t care. Mistakes are a feature and not a bug of authoritarian surveillance. If ICE targets only people it can go after legally, then everyone knows whether or not they need to fear ICE. If ICE occasionally makes mistakes by arresting Americans and deporting innocents, then everyone has to fear it. This is by design.

Effective Opposition Requires Being Online

For most people, phones are an essential part of daily life. If you leave yours at home when you attend a protest, you won’t be able to film police violence. Or coordinate with your friends and figure out where to meet. Or use a navigation app to get to the protest in the first place.

Threat modeling is all about trade-offs. Understanding yours depends not only on the technology and its capabilities but also on your personal goals. Are you trying to keep your head down and survive—or get out? Are you wanting to protest legally? Are you doing more, maybe throwing sand into the gears of an authoritarian government, or even engaging in active resistance? The more you are doing, the more technology you need—and the more technology will be used against you. There are no simple answers, only choices.

GaryBIshop · 18 days ago:
Mistakes are a feature. Great insight.

You can now upgrade a retro Casio watch with Bluetooth, step tracking, and games

A Casio F-91W digital watch next to the Ollee Watch One replacement mainboard.
The Ollee Watch One makes several Casio digital watches smarter. | Image: Ollee Watch

For the first time since it launched 36 years ago, the iconic Casio F-91W digital watch is getting some smart upgrades — but not from Casio. The Ollee Watch One is a replacement mainboard for the Casio F-91W, A158W, and several other watch models, and it can be installed without a soldering iron or any electronics know-how beyond being able to remove a few screws and clips. It doesn’t upgrade Casio’s timepieces to the point where they’re on par with the Apple Watch, but it does add Bluetooth and app connectivity so the watch can set the time automatically, track step counts, play games, and be more personalized using an improved color-changing backlight.

The DIY version of the Ollee Watch One, which requires you to already have one of six supported Casio watch models (including the F-84W, A159W, A171W, and W-59), is available now for $54.99. If you’re not confident tackling the upgrade yourself, which appears to take around six minutes, the company also sells the Casio F-91W-1 and A158W-1 preinstalled with the upgrade for $99.99, but that option is currently sold out.

The upgraded mainboard includes an accelerometer for step counting and an upgraded audio buzzer that will alert you when you’ve reached a goal you set using an accompanying iOS and Android mobile app. The upgrade also adds a temperature sensor, an improved LED backlight that lets you adjust the color or use your watch as a flashlight, world time functionality, and the option to create five timer presets.
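Step counting from a wrist accelerometer is usually little more than peak detection on the acceleration magnitude, with a threshold and a refractory period to reject jitter. Here is a minimal sketch of that general idea in Python (a generic illustration, not Ollee’s actual firmware; the sample rate, threshold and minimum step interval are assumed values):

import math

def count_steps(accel_magnitude, fs=50, threshold=1.2, min_interval_s=0.3):
    """Count local maxima (in g) above a threshold, with a refractory gap between steps."""
    min_gap = int(min_interval_s * fs)
    steps, last_step = 0, -min_gap
    for i in range(1, len(accel_magnitude) - 1):
        a = accel_magnitude[i]
        is_peak = a >= accel_magnitude[i - 1] and a >= accel_magnitude[i + 1]
        if is_peak and a > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

# Quick self-test with a synthetic "walking" signal at roughly 2 steps per second.
fs = 50
signal = [1.0 + 0.4 * max(0.0, math.sin(2 * math.pi * 2 * i / fs)) for i in range(10 * fs)]
print(count_steps(signal, fs=fs))   # expect roughly 20 steps over 10 seconds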

A game called Ping being played on a Casio F-91W digital watch.

There are no notifications, and while the Ollee Watch One does include alternate watch faces, it doesn’t upgrade the basic segmented screen that Casio includes. That being said, even with those screen limitations you can play a simplified version of Blackjack or a game called Ping that has you controlling a paddle to keep a bouncing ball up in the air. Battery life isn’t quite as impressive as the seven years Casio claims for an unmodified F-91W-1, but thanks to its use of Bluetooth Low Energy, the upgrade kit only reduces battery life to about 10 months on a CR2016 battery.

GaryBIshop · 20 days ago:
Cool!

Meta’s Ray-Ban Display Glasses and the New Glassholes


It’s becoming somewhat of a running gag that any device or object will be made ‘smart’ these days, whether it’s a phone, TV, refrigerator, home thermostat, headphones or glasses. This generally means somehow cramming a computer, display, camera and other components into the unsuspecting device, with the overarching goal of making it more useful to the user without impacting its basic functionality.

Although smart phones and smart TVs have been readily embraced, smart glasses have always been a bit of a tough sell. Part of the problem, of course, is that most people do not wear glasses at all, whether because their vision does not require correction or because they wear contact lenses instead. This means that the market for smart glasses isn’t immediately obvious. Does it target people who wear glasses anyway, people who wear sunglasses a lot, or people who basically want a smartphone’s functionality moved to their face?

Smart glasses also raise many privacy concerns, as their cameras and microphones may be recording at any given time, which can be unnerving to people. When Google launched its Google Glass smart glasses, this led to the coining of the term ‘glasshole’ for people who refused to follow perceived proper smart glasses etiquette.

Defining Smart Glasses

Meta’s Ray-Ban Display smart glasses with its wristband. (Credit: Meta)

Most smart glasses are shaped like rather chubby, often thick-rimmed glasses. This is to accommodate the miniaturized computer, battery and generally a bunch of cameras and microphones. Generally some kind of projection system is used to put a translucent display on one of the lenses, or in more extreme cases a laser projects the image directly onto your retina. The control interface can range from a smartphone app to touch controls, to the new ‘Neural Band’ wristband that’s part of Meta’s collaboration with Ray-Ban in a package that some might call rather dorky.

This particular device crams a 600 x 600 pixel color display into the right lens, along with six microphones and a 12 MP camera in addition to stereo speakers. Rather than an all-encompassing display or an augmented-reality experience, this is more of a display that you reportedly see floating when you glance somewhat to your right, taking up 20 degrees of said right eyepiece.

Perhaps most interesting is the neural band here, which uses electromyography (EMG) to detect the motion of muscles in your wrist by their electrical signals to determine the motion that you made with your arm and hand. Purportedly you’ll be able to type this way too, but this feature is currently ‘in beta’.

Slow March Of Progress

Loïc Le Meur showing off the Google Glass Explorer Edition in 2013. (Credit: Loïc Le Meur)

When we compare these Ray-Ban Display smart glasses to 2013’s Google Glass, whose Explorer Edition was made available to the public in limited quantities, it is undeniable that the processor guts in the Ray-Bans are more powerful and the Flash storage has doubled, but the RAM is the same 2 GB, albeit faster LPDDR4X. In terms of the display, it’s slightly higher resolution and probably slightly better fidelity, but this still has to be tested.

Both have similar touch controls on the right side for basic control, with the new wristband apparently being the major innovation here. This just comes with the minor issue of now having to wear another wrist-mounted gadget that requires regular charging. If you are already someone who wears a smartwatch or similar, then you had better have some space on your other wrist to wear it.

One of the things that Google Glass and similar solutions – including Apple’s Vision AR gadget – have really struggled with is practical use cases. As cool as it can be to have a little head-mounted display that you can glance at surreptitiously, with nobody else around you being able to glance at the naughty cat pictures or personal emails currently being displayed, this never was a use case that convinced people to buy their own Google Glass device.

In the case of Meta’s smart glasses, the company seems to be banking on Meta AI integration, along with real-time captions for conversations in foreign languages. The awkward point here is of course that all of these features are perfectly possible with a run-of-the-mill smartphone, which can do even more, and with a much larger display.

Ditto with the on-screen map navigation, which overlays a Meta Maps view akin to that of Google’s and Apple’s solutions to help you find your way. Although this might seem cool, you will still want to whip out your phone to ask a friendly local for help when said route navigation feature inevitably goes sideways.

Amidst the scrambling for a raison d’être for smart glasses, it seems unlikely that society’s attitude towards ‘glassholes’ has changed either.

Welcome To The Panopticon

Example of a panopticon design in the prison buildings at Presidio Modelo, Isla de la Juventud, Cuba. (Credit: Friman, Wikimedia)

The idea behind the panopticon design, as created by Jeremy Bentham in the 18th century, is that a single person can keep an eye on a large number of individuals, none of whom can be certain whether they are being observed at any given moment. Although Bentham did not intend for it to be used solely in prisons and similar buildings, this is where it found the most uptake. Inspired by this design, we got more modern takes, such as the Telescreens in Orwell’s novel Nineteen Eighty-Four, whose cameras are always on, but you cannot be sure that someone is watching that particular screen.

In today’s era where cameras are basically everywhere, from CCTV cameras on and inside buildings, to doorbells and the personal surveillance devices we call ‘smartphones’, there are also places where people are less appreciative of having cameras aimed at them. Unlike a smartphone, where it’s rather obvious when someone is recording or taking photos, smart glasses aren’t necessarily that obvious. Although some do light up an LED or such, it’s easy to miss this sign.

That article describes a TikTok video by a woman who was distraught to see that the person at the wax salon where she had an appointment was wearing smart glasses. Unless you’re actively looking and listening for the cues emitted by that particular brand of smart glasses, you may not know whether your waxing session is being recorded in glorious full-HD or better for later sharing.

This is a concern that blew up during the years that Google Glass was being pushed by Google, and so far it doesn’t appear that people’s opinions on this have changed at all. This makes it even more awkward when those smart glasses are the only prescription glasses you have on you at the time. Do you still take them off when you enter a place where photography and filming are forbidden?

Dumber Smart Glasses

Although most of the focus in the media and elsewhere is on smart glasses like Google Glass and now Meta/Ray-Ban’s offerings, there are others that fall under this umbrella term too. Certain auto-darkening sunglasses are called ‘smart glasses’, while others are designed to act more like portable screens for use with a laptop or other computer system. Then there are the augmented- and mixed-reality glasses, which come in a wide variety of forms and shapes. None of these are the camera-equipped types that we discussed here, of course, and thus they do not carry the same stigma.

Whether Meta’s attempt will succeed where Google Glass failed remains to be seen. If the criterion is that a ‘smart’ version of a device should enhance it, then it’s hard to argue that a smartphone isn’t much more than just a cellular phone. At the same time, the ‘why’ of cramming a screen and computer into a set of dorky glasses remains much harder to answer.

Feel free to sound off in the comments if you have a good use case for smart glasses. Ditto if you would totally purchase, or have already purchased, a pair of the Ray-Ban Display smart glasses. Inquisitive minds would like to know whether this might be Google Glass’ redemption arc.

GaryBIshop · 22 days ago:
I agree and one more thing. The worst part of glasses is them sliding down your nose. Smart glasses are only going to be worse.

Who First Invented the Ball Pit? Two Young Ikea Designers, in 1970


In 1968, Charlotte Rude and Hjördis Olsson-Une were two young graduates of Sweden's Konstfack University, where they had studied art and design. The two moved to Älmhult, Sweden, where a glassworks was located; having caught the material exploration bug at design school, the two wanted to experiment with glass.

The pair had very little money and shared an apartment. To furnish it, they bought inexpensive leftover particle board from a nearby construction site, and built pieces they designed themselves.

Älmhult was also where Ikea had opened their first store, back in 1958, and where their headquarters were based. Some of Rude and Olsson-Une’s neighbors worked for Ikea, and when they saw the designers’ self-created furniture, they encouraged the pair to try to sell the designs to Ikea. The neighbors introduced them to Ikea manager Gillis Lundgren.

Lundgren didn't buy their designs. Instead he offered them full-time jobs. The duo accepted, and soon began designing furniture.

In 1970, the Ikea store in Stockholm suffered a fire and needed to be rebuilt. The brass decided they may as well add a large play area for children. The task of designing the play area fell to Rude and Olsson-Une, who were then Ikea’s youngest designers. The two designed a sort of indoor playground filled with pillows, ropes and platforms.

But "the biggest star of the play area," according to Ikea, "was the ball pit."

The idea came when Charlotte and Hjördis were opening boxes containing a new type of protective packing material. It consisted of small plastic balls, and reminded Charlotte of the then wildly popular bean bags that kids would sit and lie down in.

They imagined how one could jump and swim around in a sea of balls if it was deep and wide enough. However, these balls were too small. Children might put them in their mouths and choke. At the time, plastic was growing fast as a material, and the young designers soon found lighter and bigger multicoloured balls to use.

A huge wooden box – deep enough to jump into – was built with particleboard and filled with thousands of balls. When the play area opened in 1971, the ball pit became a huge hit. Kids loved it, and families were soon rushing to IKEA to let their children try out the new invention. Later in the 1970s, ball pits of all sorts started popping up in everything from amusement parks to fast-food restaurants, but the ball pit in the Stockholm IKEA store in 1971 was likely the first one in the world.

Rude and Olsson-Une didn't stay at Ikea for long; after just five years, in 1973, they moved to Copenhagen to pursue graduate studies at the Royal Danish Academy of Fine Arts. They took on freelance design work--and apparently faded into obscurity. I can find no record of their work after their time at Ikea.



GaryBIshop · 27 days ago:
Neat story!