Here are a few of the exhibitors I had an opportunity to meet with on the second day of this year’s show.
At Eureka Park, EAIGLE introduced an AI-based, comprehensive “all-in-one” kiosk with contactless visitor management, automated wellness screening, vaccine verification, people counting, capacity management, and crowd thermal screening.
Essence Security, one of the largest global alarm system solution providers, won two CES Innovation Awards with its MyShield 5G-connected Smoke Generator and Umbrella 5G-connected Personal Safety Alert Device, both capable of connecting directly to public safety answering points (PSAPs) and central station monitoring providers. Be sure to check out this article from Security Business magazine Editor-in-Chief Paul Rothman to learn more about Essence’s Umbrella solution and its expansion into commercial enterprise security deployments.
If you’re driving an Audi S7 Sportback, BMW M760Li xDrive, or 2021 Cadillac Escalade, you’re already using InfiRay’s uncooled IR sensors for driver assistance. At CES, the company unveiled the first 8 μm uncooled thermal camera sensor, which could have wide-reaching industry potential for applications like body-worn cameras.
For the security and identification market, Isorg demonstrated its Fingerprint-on-Display (FoD) modules for improved fingerprint smartphone authentication and improved dry-finger performance under harsh conditions. The sensor modules support FAP30 and FAP60 capture, with up to four fingers touching a smartphone display simultaneously. Four-finger sensors scan four fingers on each hand followed by the two thumbs (a 4-4-2 sequence). Each 10-print profile produces a complete record without stitching, which is why these scanners are the fastest option and the FBI’s preferred choice for enrollment. Four-finger scans also deliver increased accuracy for identification operations.
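The 4-4-2 sequence above can be sketched in a few lines of code. This is an illustrative mock-up, not Isorg’s software: the stage names, finger labels, and `capture_fn` callback are all assumptions made for the example.

```python
# Hypothetical sketch of the 4-4-2 ten-print enrollment sequence:
# left slap (4 fingers), right slap (4 fingers), then both thumbs (2),
# yielding a complete 10-print record with no stitching step.
# All names here are illustrative, not vendor APIs.

CAPTURE_SEQUENCE = [
    ("left_slap",  ["L_index", "L_middle", "L_ring", "L_little"]),
    ("right_slap", ["R_index", "R_middle", "R_ring", "R_little"]),
    ("thumbs",     ["L_thumb", "R_thumb"]),
]

def enroll_ten_print(capture_fn):
    """Run the 4-4-2 sequence; capture_fn(stage, fingers) returns
    one image per finger from a multi-finger sensor touch."""
    record = {}
    for stage, fingers in CAPTURE_SEQUENCE:
        images = capture_fn(stage, fingers)
        # Each slap delivers all of its finger images in one touch,
        # so the record is assembled directly -- no stitching.
        record.update(zip(fingers, images))
    assert len(record) == 10, "incomplete ten-print record"
    return record

# Example with a stub sensor that returns dummy image tokens:
stub = lambda stage, fingers: [f"img:{f}" for f in fingers]
profile = enroll_ten_print(stub)
print(len(profile))  # 10
```

Because each multi-finger touch is captured as one event, the record is complete after three touches, which is where the speed advantage for enrollment comes from.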
Isorg’s next step is to bring these innovations, as a trusted partner, to smartphone and security solution providers working in mobile banking, border control, first responder, and electronic access control applications.
CES 2022 became a living catalog of AI accelerators and Systems on Chip (SoCs), and for some exhibitors, like Femtosense, an opportunity to introduce an emerging category: the hyper-efficient AI processor for the embedded edge, also known as a Sparse Processing Unit (SPU).
About six years ago, when ADAS code was first being deployed in luxury vehicles to assist owners with parking, save lives by braking early, avoid people while in reverse, and recognize approaching objects faster than driver reaction time, the emphasis seemed to be on completing stable projects, even if it meant millions of lines of code. Today’s vehicles may require over 100 million lines of code, a complex structure, and a dense neural network. Higher density means greater processing demand, which leads to cost, heat, power draw and, for an electric vehicle, reduced range.
For a 911 operator trying to hear whether a call is an unarmed domestic dispute or involves gun violence, an SPU running efficient code that recognizes multiple people speaking amid background noise can mean the difference between dispatching the wrong response team and saving lives.
Using its SPU, Femtosense demonstrated speech recognition in the extremely noisy trade show environment, playing back unaltered speech with the background noise excised in real time. Legacy noise cancellation technologies, still sold in consumer electronics today, are closer to sound suppression than to preservation of the original audio frequencies.
In AI-based video processing, scenes with objects moving in different directions amid complex backdrops, such as buildings on fire during a riot, may not render video evidence accurately. With the Femtosense SPU, parameter and activation sparsity can reduce power requirements by 100x and memory usage by 10x.
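A minimal sketch can show where those savings come from. This is not Femtosense’s implementation, just the general idea behind parameter sparsity: store only the nonzero weights and skip the multiply-accumulates for everything else.

```python
# Minimal sketch of parameter sparsity (illustrative, not the SPU's
# actual design): keep only nonzero weights, so both storage and the
# number of multiply-accumulate (MAC) operations shrink with sparsity.

def to_sparse(dense_rows):
    """Keep (column, value) pairs for nonzero weights only."""
    return [[(j, w) for j, w in enumerate(row) if w != 0.0]
            for row in dense_rows]

def sparse_matvec(sparse_rows, x):
    # One MAC per *stored* weight; zero weights cost nothing.
    return [sum(w * x[j] for j, w in row) for row in sparse_rows]

# A 90%-sparse layer stores roughly 10% of its weights, so memory and
# MAC count drop about 10x; skipping zero activations compounds this.
dense = [
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.5],
    [3.0, 0.0, 0.0, 0.0],
]
sparse = to_sparse(dense)
x = [1.0, 2.0, 3.0, 4.0]
print(sparse_matvec(sparse, x))  # [4.0, 6.0, 3.0]
print(sum(len(r) for r in sparse), "of 12 weights stored")
```

On dedicated hardware the unstored weights never move across the memory bus at all, which is why the power savings can outpace the raw reduction in arithmetic.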
In addition to its ultra-efficient neural network processor, Femtosense provides everything required to go from neural network model to SPU deployment, tasks usually left to a solution provider that may be less familiar with processor development.
Visual Behavior works with companies like Waymo and NVIDIA, developing robotics perception for applications including Automated Guided Vehicles (AGVs), Advanced Driver Assistance Systems (ADAS), and Unmanned Aerial Vehicles (UAVs).
Visual Behavior uses a remarkable new paradigm centered on a scene representation rather than on the sensors: an internal, persistent, symbolic representation of the world that is continuously updated. Its core technology is an Artificial Visual Cortex, AI-powered software for scene comprehension.
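The idea of a persistent, continuously updated scene representation can be sketched as follows. The class, field names, and frame-miss policy are assumptions made for illustration; this is not Visual Behavior’s software, only the general pattern of a world model that outlives any single sensor frame.

```python
# Illustrative sketch (assumed names, not Visual Behavior's API) of a
# persistent, symbolic scene representation: per-frame detections
# update a world model, so a tracked object survives a few frames in
# which the sensors miss it.

class SceneRepresentation:
    def __init__(self, max_missed=3):
        # track_id -> {"label": str, "pos": tuple, "missed": int}
        self.objects = {}
        self.max_missed = max_missed

    def update(self, detections):
        """detections: {track_id: (label, position)} for one frame."""
        seen = set()
        for tid, (label, pos) in detections.items():
            self.objects[tid] = {"label": label, "pos": pos, "missed": 0}
            seen.add(tid)
        # Objects not detected this frame persist until they have
        # been missed for more than max_missed consecutive frames.
        for tid in list(self.objects):
            if tid not in seen:
                self.objects[tid]["missed"] += 1
                if self.objects[tid]["missed"] > self.max_missed:
                    del self.objects[tid]

scene = SceneRepresentation()
scene.update({1: ("pedestrian", (4.0, 2.0))})
scene.update({})                   # sensors miss the pedestrian
print(1 in scene.objects)          # True: the world model persists
```

Because downstream logic queries the scene model rather than raw sensor frames, a momentary occlusion or sensor dropout does not make an obstacle vanish, which is one way this paradigm improves obstacle avoidance.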
Use cases include improved driver safety in poor environmental conditions, better avoidance of multiple obstacles, and even tracking an object among similar-looking objects.
About the Author:
Steve Surfaro is Chairman of the Public Safety Working Group for the Security Industry Association (SIA) and has more than 30 years of security industry experience. He is a subject matter expert in smart cities and buildings, cybersecurity, forensic video, data science, command center design and first responder technologies. Follow him on Twitter, @stevesurf.