A camera-first system that uses computer vision to give machines the ability to see and sense the world as humans do, making video the next greenfield for data.
SS SUBSIDIARY 00001
In a world where cameras and sensors have proliferated, the Hypergiant Sensory Sciences team recognized an urgent need for a real breakthrough in the ability of machines to persistently observe and learn from physical environments. We also seek to deliver sensory perception at scale, where machines can not only see and sense the world, but re-play it, re-mix it and even allow us to re-program it.
One of the most powerful manifestations of Artificial Intelligence may be when you can physically see AI in action with your own eyes. The Worlds.IO Spatial Intelligence Platform does just that using 'active physical analytics'. It brings all the real-time (and sometimes unseen) sensory data from a physical environment to life in a single, easy-to-comprehend 3D view, and puts extra focus on anything you might want additional eyes watching. In the industry, these 3D copies are known as digital twins, but we call them 'LIVE SCENES' because the Worlds.IO rendering engine goes further, continually creating and updating scenes on the fly from live camera feeds and computer vision.
Think of SCENES as a visual home base for your entire IoT sensor network, one that can also display heat, moisture, sound, smell or any other important data you might want to see. All in one place, as a single view. The user interface and experience is more like a video game of the real-world scene, which means we can enhance the way information is visualized and make it exponentially better than what you might actually see in the real environment. Our software provides people with a fundamentally new way of engaging and interacting virtually with their environments.
The Worlds platform can also move back and forth in time, capturing both the present and the past as interactive 4D models. Using deep learning and other AI techniques, the platform gives organizations the ability to persistently observe, analyze and learn from their physical surroundings in ways that were otherwise impossible. Hence the mission: develop and implement sensory perception at scale, powered by machines that can see, sense and alert us when important things happen in our environments.
By 2022, 95% of video/image content will never be viewed by humans, but instead will be vetted by machines that can provide some degree of automated analysis.
Worlds, an independent software company spun out of Hypergiant Sensory Sciences, operates under the guidance of veteran AI entrepreneurs Dave Copps and Chris Rohde. They now lead a global team of entrepreneurs and data scientists with a proven ability to develop products that use machine learning and AI to solve complex, critical problems.
CEO / Co-founder of WORLDS / A Hypergiant Sensory Sciences Company
PAST
CEO / FOUNDER - Brainspace Corporation
COO / Co-founder of WORLDS / A Hypergiant Sensory Sciences Company
PAST
CEO / FOUNDER - BamAI
We start with cameras, then move on to other IoT sensors, all supported in our API.
Our own deep learning models, built to identify people and objects and to learn the meaning of movement.
Our ultimate UX is a digital twin of the real world scene.
Unique spatial representations of data combine into sets, learned patterns and moments in time called Scenes.
Live Scenes use these sets of billions of data points, captured through the movement of time, to create a digital twin that renders intelligent sensory information at unprecedented scale, speed and accuracy for automation or live monitoring.
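The idea of folding timestamped detections from camera feeds into a re-playable scene can be sketched in a few lines. This is an illustrative sketch only: the `Detection` and `Scene` types below are hypothetical and are not the Worlds.IO API.

```python
from dataclasses import dataclass, field

# Hypothetical types for illustration; not the Worlds.IO API.
@dataclass
class Detection:
    label: str                      # e.g. "person", "forklift"
    position: tuple                 # (x, y, z) in scene coordinates
    timestamp: float                # seconds since scene start

@dataclass
class Scene:
    points: list = field(default_factory=list)

    def update(self, detections):
        """Fold a new batch of camera detections into the live scene."""
        self.points.extend(detections)

    def replay(self, t0, t1):
        """Re-play: return every point observed in the window [t0, t1]."""
        return [p for p in self.points if t0 <= p.timestamp <= t1]

# A scene accumulates spatial points over time and can be queried per window.
scene = Scene()
scene.update([Detection("person", (1.0, 2.0, 0.0), 0.5),
              Detection("truck", (10.0, 0.0, 0.0), 1.5)])
early = scene.replay(0.0, 1.0)  # only the person was seen in the first second
```

In this toy form, "re-playing" a scene is just filtering accumulated spatial points by time window; a production digital twin would add 3D rendering and continual model updates on top.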
The spatial layer provides remote observation, monitoring, learning and safety capabilities from off-site locations via VR or spatial UI.
Subject matter experts load video of their unique environment and train the AI to learn the people and objects that make up the scene.
Spatial navigation allows users to teleport virtually from one location to another, an entirely new capability unique to Worlds.IO.
The Actions layer enables clients to create AIs that automate tasks for Story and Event Detection. Actions can be created to monitor anything in a scene, validate the identity of people, predict events and more. It also lets users filter the people or objects in a scene to focus on specific targets or anomalies.
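An "action" of the kind described above amounts to a watched condition over the scene that emits an alert when it fires. The sketch below is illustrative only, with hypothetical names, and does not represent the actual Worlds.IO Actions API; it shows one simple action, flagging any person detected inside a restricted zone.

```python
# Illustrative sketch of a zone-watch action; names are hypothetical,
# not the Worlds.IO Actions API.

def in_zone(position, zone):
    """Axis-aligned check: is point (x, y) inside the rectangular zone?"""
    (xmin, ymin), (xmax, ymax) = zone
    x, y = position
    return xmin <= x <= xmax and ymin <= y <= ymax

def restricted_zone_action(detections, zone, target_label="person"):
    """Emit one alert per target detected inside the watched zone."""
    return [f"ALERT: {d['label']} at {d['position']}"
            for d in detections
            if d["label"] == target_label and in_zone(d["position"], zone)]

# Watch a 5m x 5m zone; only targets inside it trigger alerts.
zone = ((0.0, 0.0), (5.0, 5.0))
detections = [{"label": "person", "position": (2.0, 3.0)},   # inside
              {"label": "person", "position": (9.0, 9.0)},   # outside
              {"label": "truck",  "position": (1.0, 1.0)}]   # wrong label
alerts = restricted_zone_action(detections, zone)
```

More elaborate actions (identity validation, event prediction) would swap the geometric predicate for a learned model, but the watch-condition-then-alert shape stays the same.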
Our initial focus will be on critical infrastructure. Worlds enables companies to achieve advanced situational awareness through higher levels of automation, increased efficiency, improved productivity and dramatically lower surveillance costs. Whether it is a military base, an oil well or a hospital, our product will provide organizations with the ability to perceive and understand their environments in ways that were never possible before, including:
Observation and automation of Energy site activities, including remote site operation, identifying potential threats, tracking supply chain deliveries, safety inspections and remote inspections.
Observation and automation of Military/Intel site activities, including base security, identification and analysis of potential threats, remote operations, situational awareness on the battlefield and emergency response.