SALTZMAN: New headset for the blind replicates a ‘guide dog’

The annual AWS re:Invent conference just wrapped up at the Venetian Convention and Expo Center in Las Vegas, where more than 60,000 attendees – including yours truly – traveled to learn about the latest in cloud computing and artificial intelligence, powered by Amazon Web Services.

Oh, in case you’re not aware, AWS is the biggest cloud computing platform in the world, with an estimated 31% market share, compared to Microsoft’s Azure at 20% and Google Cloud at 11% (together, the Big Three), according to the latest 2024 data from Synergy Research Group.

While there are hundreds of separate services a business or government agency can take advantage of, “cloud computing” simply refers to relying on remote servers – complex computer systems on the internet – to store, manage, and process your information, as opposed to running your operations on a local server or a personal computer. Cloud computing is cost-effective and scalable, and generally more secure than traditional computing.

Now in its 13th year, AWS re:Invent features keynote announcements, training and certification opportunities, technical sessions, an exhibitor fair, and after-hours events.

Yes, we geeks like to party, too.

‘Amazon Nova’ enters the AI arms race

As you might expect, generative AI (gen AI) remains the primary buzzword among businesses today, as it could empower companies to do much more with less: streamlining customer service (including much smarter chatbots), generating free-to-use images and videos for a website or social media posts, writing marketing copy, analyzing sales reports, conducting market research, offering personalized product recommendations to customers, and summarizing meetings with actionable items – just to name a few applications.

AWS re:Invent kicked off with Amazon CEO Andy Jassy unveiling Amazon Nova, a new and “groundbreaking” set of AI models designed to efficiently and accurately handle a variety of tasks involving text, images, and videos. For example, the AI can generate content, like a video, or understand the content inside of a video (or document or audio file).

Jassy also highlighted Nova’s affordability, claiming the models are up to 75% less expensive than competitors, which could help level the playing field for smaller organizations on tighter budgets.

Wearable device is like a digital ‘guide dog’ for the blind

Nova aside, the most impressive demos at AWS re:Invent were tied to accessibility.

For one, .lumen is a company that makes wearable glasses that mimic the core features of a guide dog. Resembling a VR headset, the device lets the wearer navigate city streets using a combination of audio cues and haptic (vibration) feedback – based on what the headset “sees” – guiding them much as a guide dog would by pulling their hand in a given direction.

Along with feeling the cues for obstacles and directions, the wearer hears feedback, such as “curb up” or “stairs down,” as well as info tied to street crossings, escalators, park benches, pedestrians, and more. The headset processes info up to 100 times per second. Users can give the glasses voice commands to navigate to specific places. Maps are downloaded to be used offline to preserve battery life, which is a “full day” of moderate use, or about 2.5 hours of continuous walking, said Cornel Amariei, .lumen’s founder and CEO.

Users who need more can plug in a battery pack.

Amariei was excited to share the news that .lumen is working on integrating public transportation features.

“So, you just put a location, which may be far away, and the device will take you to the bus station, help you get onto the right bus and help you get off of it – it’s pretty incredible,” he said.

The headset is coming out in Europe early next year.

“As for costs, subsidies and reimbursement programs will vary by country, but we’re working to make this either free or at a very low cost,” Amariei said.

‘Sign-Speak’ leverages AI to help deaf individuals

More than 430 million people worldwide are deaf or hard of hearing, and they currently have two main ways to communicate with a hearing person: texting or using an interpreter. Texting is limited, and interpreters can be cost-prohibitive.

Sign-Speak adds a third option.

Powered by AWS, the startup has developed AI technology that recognizes American Sign Language (ASL) and translates it into spoken words – and vice versa.

So, imagine you’re on a video call with someone who is deaf or hard of hearing. When you speak, an avatar of sorts signs what you say in near real-time, and when they sign back, captured by a webcam, you’ll hear a human voice translate the signing.

“Our goal is really making sure deaf people get access everywhere,” Sign-Speak CEO Yami Payano explained at AWS re:Invent.

While the primary focus is on online video communication between employees, on platforms like Microsoft Teams or Zoom, Payano said they’re also developing a free smartphone tool for in-person communication.

“Yes, so, just as we’re now both in the same room, we could also communicate effectively with Sign-Speak, with the phone as the virtual interpreter,” she said.

“Right now, our system is equal to the latency of an interpreter, a few seconds, which deaf people are used to, since ASL and English are two completely different languages,” Payano added.

Expect a 2025 rollout, including social media integration, which is also being tested.

– Marc Saltzman is the host of the Tech It Out podcast and the author of Apple Vision Pro For Dummies (Wiley).
