Seemingly overnight, Artificial Intelligence (AI) has touched almost every part of our lives. However, for those who have worked with AI for decades, its penetration has been gradual. Now another technology, computer vision, a cousin to AI, is quickly taking a similar path into our lives.
Computer vision can understand objects and the relationships between them. That intelligence makes a dramatic difference when multi-family housing residents retrieve packages that couriers drop off. Using computer vision, Position Imaging has created a Smart Package Room that understands which item sitting on a shelf belongs to which resident. It’s a welcome innovation that simplifies a courier’s job and allows residents to retrieve their packages 24/7.
Ned Hill, CEO of Position Imaging, shares with Spiceworks (formerly Toolbox.com) how this latest digital-life convenience came into existence and how it will continue to impact everything from supply chain management to perishable food deliveries. Titled “How AI and Computer Vision Shape Our World” and originally posted on Spiceworks, this article is an insightful read that discusses a complicated concept in simple terms.
How AI and Computer Vision Shape Our World
By Ned Hill, Position Imaging
There was a time when the idea of robots capable of thought was only a Hollywood plot device, used in blockbuster films like 2001: A Space Odyssey, Terminator, Blade Runner, and The Matrix to keep us engaged. But true Artificial Intelligence, or AI, is now becoming fact rather than fiction, much as the moon landings Hollywood depicted in the 1950s eventually did. First, however, computing capabilities needed to mature.
Past and Present Capabilities
The first software to approximate human-like skills and problem-solving was under development in the mid-1950s. As the concept of AI evolved over the following twenty years, the obstacle wasn’t programming but a lack of computer storage. To develop further, AI also needed higher processing speeds.
Using that increasing processing power and data storage, AI engineers extended the field of research to supercomputers such as Indiana University’s Big Red 200, which has 672 compute nodes, each with 256 GB of memory and two AMD EPYC 7742 processors running at 2.25 GHz and drawing 225 watts.
Likewise, the University of Florida’s HiPerGator 3.0 has 240 AMD EPYC Rome nodes with 1,024 GB of RAM each and more than 30,000 cores, plus 150 AMD EPYC Milan machines with 512 GB of RAM each and more than 19,000 cores in total. With that kind of computing power available, now is the ideal time to make AI mainstream.
Applications requiring more computing power than most devices can manage today are great candidates for AI systems. For instance, healthcare, banking, insurance, and manufacturing are just a few industries currently utilizing machine learning to provide insights that would be impossible or take too long to obtain without AI.
AI Gets The Hype, But Computer Vision Has Insights
While AI gets all the hype, businesses are now using thousands of computer vision applications to expedite or automate numerous processes in fields like healthcare, where computer vision can spot cancer from CT images faster than doctors can.
Computer vision is a branch of AI that uses data to detect and recognize objects seen through a computer’s camera. This cousin of AI recognizes objects and their environment through the lens and gives the computer a digital grasp of its surroundings so it can interact with those objects (a minimal detection sketch follows the list below). For example:
- Individuals can be uniquely identified in highly secure situations using fingerprint and retinal scanning to grant or restrict access.
- Problems in wind turbines can now be spotted in advance using footage from autonomous drones equipped with high-definition cameras.
- Computer vision applications help track packages or products through the supply chain.
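To make the idea concrete, here is a minimal sketch of how an off-the-shelf detection model can give a camera frame that kind of digital grasp. It assumes Python with PyTorch and torchvision installed; the pretrained model and the image file name are illustrative stand-ins, not Position Imaging’s actual system.

```python
# Minimal object-detection sketch (assumes PyTorch + torchvision are installed).
# The model choice and the image file name are illustrative, not a specific
# vendor's pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = read_image("shelf_camera_frame.jpg")   # hypothetical camera frame
batch = [weights.transforms()(frame)]          # scale/normalize for the model

with torch.no_grad():
    detections = model(batch)[0]

# Report what the model "sees": class label, bounding box, and confidence
categories = weights.meta["categories"]
for label, box, score in zip(
    detections["labels"], detections["boxes"], detections["scores"]
):
    if score > 0.8:
        print(categories[int(label)], [round(v, 1) for v in box.tolist()], float(score))
```

In practice, output like this (object class, location, and confidence) is what lets a system reason about which item sits on which shelf.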
Computer Vision’s Practicality In Everyday Living
We often notice computer vision first in the practical applications that make daily life more efficient. Some organizations, for example, use computer vision technology to help multi-family property managers automate package handling, redirecting staff toward serving residents rather than delivering or sorting items received from couriers.
Residents benefit in this scenario because they no longer have to wait for personnel to hand over parcels. Couriers benefit from multi-family computer vision applications as well, since they can deliver packages directly to a smart package room. The computer vision technology in this intelligent package room virtually monitors and tags the location of each box.
Besides multi-family housing, logistics companies employ computer vision to audit the dimensions of packages passing through their hubs, which lets senders confirm package measurements before shipping. Logistics firms can improve the customer experience and lower costs by automating the manual work of measuring packages. Pytesseract, for example, is a tool that extracts data such as text from images.
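As a rough illustration of that kind of data extraction, the snippet below runs pytesseract over an image. It assumes the Tesseract OCR engine plus the pytesseract and Pillow packages are installed; the file name is a hypothetical example rather than a real logistics workflow.

```python
# Sketch of text extraction from an image with pytesseract.
# Requires the Tesseract OCR engine and the pytesseract and Pillow packages;
# the file name is a hypothetical example.
from PIL import Image
import pytesseract

label_image = Image.open("package_label.jpg")        # e.g., a photo of a shipping label
extracted_text = pytesseract.image_to_string(label_image)
print(extracted_text)
```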
These computer vision efficiencies become increasingly crucial as AI applications migrate to drones, mobile devices, and vehicles. Qualcomm’s AI accelerator architecture in the Hexagon 780 processor is furthering AI research, and its Gauge Equivariant Convolutional Neural Networks are improving the detection skills of computer vision systems.
Qualcomm’s neural networks improve the detection of curved shapes, allowing computer vision to recognize item dimensions more accurately. This work will undoubtedly contribute to the adoption of AI in everyday gadgets and IoT networks, and to the performance of computer vision applications.
Conclusion
Unlike the drama depicted in most Hollywood films, people will continue to embrace AI and its computer vision relative to perform commonplace and even complex duties for us, such as driving. And the trickle-down impact of these billion-dollar academic supercomputers will be realized through improved life efficiencies.
Improved life efficiencies from these intelligent applications are all around us. Netflix suggests the best movies to watch. Amazon can deliver goods overnight to a smart package room that knows to whom each box belongs. The promise of AI and computer vision is practicality, and it will continue to reveal itself in subtle and dramatic ways for a long time.
Ned Hill is the founder and CEO of Position Imaging (PI), a pioneer in the field of advanced tracking technologies. Under Ned’s strategic vision and guidance, PI has developed an industry-leading tracking solution, utilized computer vision and laser guidance to simplify item delivery, and created unique AI-based technologies. Together, these improve logistics efficiency and provide continuous visibility into items at any stage of the process. Ned has raised close to $20 million in funding, driven product development, and created a partner ecosystem of industry leaders in hardware (Hitachi-LG Data Storage, Intel), software (Microsoft, Salesforce), solutions (Zebra, Lozier), and service (Bell and Howell). Ned is the inventor or co-inventor of over 50 patents and patent applications and a speaker at industry conferences including CES, Live Free and Start, and events at MIT.