Worlds

WORLDS SPATIAL INTELLIGENCE PLATFORM

A SUBSIDIARY OF HYPERGIANT SENSORY SCIENCES

***

EXTENDED REALITY MODELS FOR ACTIVE PHYSICAL ANALYTICS

A camera-first system using computer vision to give machines the capability to see and sense the world like humans do, making video the next greenfield for data.

SS SUBSIDIARY 00001

WORLDS PLATFORM - OVERVIEW

HYPERGIANT SENSORY SCIENCES
SUBSIDIARY NO. 00001

ENHANCING THE HUMAN EXPERIENCE

In a world where the proliferation of cameras and sensors has exploded, the Hypergiant Sensory Sciences team recognized an urgent need for a real breakthrough in the ability of machines to persistently observe and learn from physical environments. We also seek to deliver sensory perception at scale, where machines can not only see and sense the world but re-play it, re-mix it and even allow us to re-program it.

VISUALIZING ARTIFICIAL INTELLIGENCE

One of the most powerful manifestations of Artificial Intelligence is being able to physically see AI in action with your own eyes. The Worlds.IO Spatial Intelligence Platform does just that through 'active physical analytics': it brings all the real-time (sometimes unseen) sensory data from a physical environment into a single, easy-to-comprehend 3D view, and puts extra focus on anything you might want additional eyes watching. In the industry these 3D copies are known as digital twins, but we call them 'LIVE SCENES', since the Worlds.IO rendering engine goes further, continually creating and updating scenes on the fly using live video feeds from cameras and computer vision.

IoT SENSORS UNITED AS ONE

Think of SCENES as a visual home base for your entire IoT sensor network, one that can also display heat, moisture, sound, smell or any other important data you might want to see. All in one place, as a single view. The user interface and experience is more like a video game of the real-world scene, meaning we can enhance the way information is visualized and make it exponentially better than what you might actually see in the real environment. Our software provides people with a fundamentally new way of engaging and interacting virtually with their environments.

LEARN MORE AT WORLDS.IO

VISIT THE WORLDS PLATFORM WEBSITE FOR MORE INFORMATION


WWW.WORLDS.IO

HUMAN PERCEPTION AT UNIMAGINABLE SCALE.

“A real-time 3D (or spatial) map of the world will be the single most important software infrastructure in computing.”

- PETER DIAMANDIS, FOUNDER, XPRIZE

SOLVING PROBLEMS BEFORE THEY BECOME PROBLEMS

THROUGH AUGMENTED SPATIAL INTELLIGENCE

The Worlds platform can also go back and forth in time, capturing both the present and the past as interactive 4D models. Using deep learning and other AI techniques, the platform gives organizations the ultimate ability to persistently observe, analyze and learn from their physical surroundings in ways that were otherwise impossible. Thus the mission: develop and implement sensory perception at scale, powered by machines that can see, sense and alert us when important things happen in our environments.

By 2022, 95% of video/image content will never be viewed by humans, but instead will be vetted by machines that can provide some degree of automated analysis.
– Gartner

Worlds

O.M.I. OFFICIAL DOCUMENT

HYPERGIANT WORLDS COMMAND

HEADQUARTERS - DALLAS, TX
FIELD STATION

Worlds Command

Worlds, an independent software company spun out of Hypergiant Sensory Sciences, operates under the guidance of veteran AI entrepreneurs Dave Copps and Chris Rohde, who lead a global team of entrepreneurs and data scientists with a proven ability to develop products that use machine learning and AI to solve complex, critical problems.

In a world where the use of cameras and sensors has exploded, the Worlds team recognized an urgent need for a real breakthrough in the ability of machines to persistently observe and learn from physical environments. The company seeks to deliver sensory perception at scale, where machines can not only see and sense the world but re-play it, re-mix it and even allow us to re-program it. Worlds' software will provide people with a fundamentally new way of engaging and interacting virtually with their environments.

DAVE COPPS

PRESENT

CEO / Co-founder of WORLDS / A Hypergiant Sensory Sciences Company

PAST

CEO / FOUNDER - Brainspace Corporation

Focused on the role that machine learning and AI will play in transforming markets and the world, Dave has founded, launched and sold three companies focused on machine learning and artificial intelligence. Prior to Hypergiant Sensory Sciences, Dave served as founder and CEO of Brainspace Corporation, a Cyxtera business and a global leader in investigative analytics and cybersecurity. Brainspace continues to play a lead role in augmenting human intelligence, using data visualization to connect people more intimately with AI and to surface important patterns and insights in data. Over the years, his companies have placed machine learning and AI in thousands of organizations around the globe, including all major consultancies and intelligence agencies.

CHRIS ROHDE

PRESENT

COO / Co-founder of WORLDS / A Hypergiant Sensory Sciences Company

PAST

CEO / FOUNDER - BamAI

An avid AI and machine learning enthusiast and expert, Chris is committed to leveraging these technologies for the betterment of business everywhere. Prior to Hypergiant Sensory Sciences, Chris was founder and CEO of BamAI, which uses data science and design to deliver the best digital content experience. Chris got his start in machine learning as founder and VP of Business Development of Brainspace, where he brought the power of ML and data analysis and visualization to Fortune 500 enterprises, global consulting firms, legal service providers and government agencies. Chris earned his BBA from Texas Tech University.

HOW THE WORLDS PLATFORM WORKS

DECLASSIFIED DOCUMENT


SENSOR DATA

We start with cameras and then move on to other IoT sensors. All supported in our API.

ARTIFICIAL INTELLIGENCE

Our own deep learning models built to identify people and things and learn the meaning of movement.

4D MODELS

Our ultimate UX is a digital twin of the real world scene.
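The three stages above (sensor data in, AI inference, 4D model out) can be sketched as a minimal pipeline. All names, classes and values here are illustrative stand-ins, not the actual Worlds API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the model saw ("person", "truck", ...)
    confidence: float  # model confidence, 0.0-1.0
    pixel: tuple       # (u, v) image coordinates of the detection

def run_inference(frame):
    # Stage 2 stand-in: a deep learning model that turns a video
    # frame into a list of detections. Hard-coded for illustration.
    return [Detection("person", 0.97, (320, 240))]

def update_scene(scene, detections, camera_id):
    # Stage 3 stand-in: fold fresh detections into the live scene
    # (the "digital twin"), keyed by the camera that produced them.
    scene.setdefault(camera_id, []).extend(detections)
    return scene

# Stage 1 stand-in: a frame arriving from one camera in the network.
frame = object()
scene = update_scene({}, run_inference(frame), camera_id="cam-01")
```

In a real deployment the inference stage would run continuously over every camera feed, and the scene store would be the rendering engine's live state rather than a dictionary.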

SIMPLIFIED CONCEPTUAL DATA VISUALIZATION

SCALE: EACH CUBE = 10 MILLION SENSORY DATA POINTS


SENSORY DATA

Single, unique spatial representations of data combine into sets, learned patterns and moments in time called Scenes.

LEARNED INTELLIGENT DATA SETS

Live Scenes use these sets of billions of data points, captured through the movement of time, to create a digital twin that can render intelligent sensory information at unprecedented scale, speed and accuracy for automation or live monitoring.

WORLDS SPATIAL INTELLIGENCE

CORE FEATURES AND FUNCTIONALITY


4D VISUALIZATION

The spatial layer provides remote observation, monitoring, learning and safety capabilities from off-site locations via VR or spatial UI.

  • LIVE SCENES: Real-time digital twins of a sensor-monitored physical environment.
  • 4D INTERFACE FOR IoT: Serves as home base for IoT sensors. Vision, heat, moisture, sound, smell can all be expressed within the 4D model.
  • SUPERPOWERS: Gives users enhanced abilities like x-ray vision to see through walls, floors and ceilings.

TRAINING

Subject matter experts load video of their unique environment and train the AI to learn the people and objects that make up the scene.

  • ACCELERATED LEARNING MODEL: Three tiers of transfer learning occur with the model. Human to machine, machine to machine and scene to scene.
  • NO DATA SCIENCE BACKGROUND REQUIRED: Training videos typically take only 2-3 minutes each.
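The accelerated learning model described above is, in spirit, transfer learning: a pretrained feature extractor is reused unchanged while only a small, scene-specific head is trained on a few minutes of customer video. A toy numerical sketch of that idea, with every name and number invented for illustration rather than taken from the platform:

```python
import math
import random

random.seed(0)

# Frozen "pretrained" feature extractor: a fixed random projection
# standing in for the embedding layers of a deep network. These
# weights are never updated, which is the point of transfer learning.
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

def extract(x):
    return [math.tanh(sum(xi * wij for xi, wij in zip(x, col)))
            for col in zip(*W)]

# Scene-specific examples: 64 synthetic 8-dim "frames" whose label is
# a simple function of the frozen features, so only the head must fit.
X = [[random.gauss(0, 1) for _ in range(8)] for _ in range(64)]
feats = [extract(x) for x in X]
y = [1.0 if sum(f) > 0 else 0.0 for f in feats]

# Train only the small logistic-regression head with plain SGD.
w, b = [0.0] * 4, 0.0
for _ in range(500):
    for f, t in zip(feats, y):
        p = 1 / (1 + math.exp(-(sum(fi * wi for fi, wi in zip(f, w)) + b)))
        for i in range(4):
            w[i] -= 0.1 * (p - t) * f[i]
        b -= 0.1 * (p - t)

accuracy = sum(
    (sum(fi * wi for fi, wi in zip(f, w)) + b > 0) == (t > 0.5)
    for f, t in zip(feats, y)
) / len(y)
```

Because the expensive representation is reused, only a handful of head parameters need fitting, which is why a short video per model can be enough.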

SPATIAL NAVIGATION

Spatial navigation allows users to teleport virtually from one location to another, an entirely new capability unique to Worlds.IO.

  • SENSORY FUSION: Enables a Total Situational Awareness of events in the 4D model of a Live Scene.
  • TELEPRESENCE CAPABILITIES: Users can fly around the model, investigating from multiple perspectives inside and outside of physical structures.

AI ACTIONS

The Actions layer enables clients to create AIs that automate tasks for story and event detection. Actions can be created to monitor anything in a scene, validate the identity of people, predict events and more. The layer also allows filtering of people or objects in the scene to focus on specific targets or anomalies.

  • ADVANCED AI: Actions can go beyond object detection and the detection of movement to understanding a complex series of actions meaningful to a specific scene.
  • DEEP LEARNING MODELS: Built to identify people and things and to learn the meaning of movement.
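As a concrete illustration of the Actions idea, the sketch below watches a list of detections and fires when a rule matches, here "a person inside a restricted zone". The rule, labels and coordinates are all hypothetical, not the platform's real interface:

```python
def in_zone(point, zone):
    # Axis-aligned rectangular zone: ((x_min, y_min), (x_max, y_max)).
    (x0, y0), (x1, y1) = zone
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1

def run_action(detections, zone, target_label="person"):
    # Filter the scene down to the targets the Action cares about.
    return [d for d in detections
            if d["label"] == target_label and in_zone(d["position"], zone)]

alerts = run_action(
    [{"label": "person",   "position": (2.0, 3.0)},   # inside the zone
     {"label": "forklift", "position": (2.5, 3.5)},   # wrong label
     {"label": "person",   "position": (9.0, 9.0)}],  # outside the zone
    zone=((0.0, 0.0), (5.0, 5.0)),
)
# Only the first detection triggers an alert.
```

Richer Actions of the kind the text describes would match sequences of such events over time rather than single detections.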

IMAGE GALLERY

HYPERGIANT - OFFICE OF
MACHINE INTELLIGENCE

MARKET FOCUS


Our initial focus will be on critical infrastructure. Worlds enables companies to achieve advanced situational awareness through higher levels of automation, increased efficiency, improved productivity and dramatically lower surveillance costs. Whether it is a military base, an oil well or a hospital, our product will provide organizations with the ability to perceive and understand their environments in ways that were never possible before, including:

ENERGY

Observation and automation of energy site activities, including remote site operation, identification of potential threats, tracking of supply chain deliveries, safety inspections and remote inspections.

  • Customers train ML models using Worlds.IO base models to create virtual sensors for automation of remote sites.
  • Detections from cameras and IoT devices are combined and projected into 3D space for remote monitoring and analysis.
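Projecting a camera detection into 3D space, as the second bullet describes, is commonly done with a pinhole-camera back-projection once depth is known. A minimal sketch; the intrinsics (fx, fy, cx, cy) and the depth value are invented for the example:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel detection (u, v) with known depth into 3D
    camera-space coordinates using pinhole intrinsics: focal lengths
    (fx, fy) and principal point (cx, cy), all in pixels."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A detection at the image center of a 640x480 camera, 5 m away,
# lands on the camera's optical axis.
point = backproject(320, 240, 5.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# point == (0.0, 0.0, 5.0)
```

A further rigid transform (the camera's pose) would then move each point from camera space into the shared scene coordinates of the digital twin.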

MILITARY

Observation and automation of military/intelligence site activities, including base security, identification and analysis of potential threats, remote operations, situational awareness on the battlefield and emergency response.

  • Customers train ML models to identify potential threats and establish normal activity.
  • Detections from cameras and sensors are combined and projected into 3D space for monitoring and automation of situational awareness and response.

Hypergiant