[Diagram: FrykenScope incident analysis integrating fixed sensors, mobile assets, and environmental analysis results]

Patent: System and Method for Event Identification and Reconstruction Using Coordinated Data from Heterogeneous Sensors with Topographical Analysis.

The FrykenScope™ patent application presents a novel approach to event reconstruction using coordinated sensor data and topographical analysis. For a broader perspective, see the FrykenScope™ Overview in our Beyond Eyewitnesses series.

The patent drawings are placed at the bottom of the document.

1. Title of the Invention

System and Method for Event Identification and Reconstruction Using Coordinated Data from Heterogeneous Sensors with Topographical Analysis

2. Technical Field

The present invention relates generally to event identification and reconstruction. More specifically, it pertains to a system and method for identifying events within defined time intervals and geographical areas, utilizing data from a network of heterogeneous sensors.

The system and method are configured to identify events where the location of the sensors is known or determinable at the time of data capture. This is achieved by collecting, storing, and analyzing data from a network of heterogeneous sensors, comprising both fixed sensors (e.g., cameras in or on buildings) and mobile sensors (e.g., cameras in vehicles and mobile devices). A key aspect of the invention involves the use of topographical data to refine event location, assess sensor relevance, and enhance event reconstruction.

3. Background of the Invention and Prior Art

The accurate and efficient identification of events, particularly incidents relevant to law enforcement, is often challenged by the limitations of eyewitness testimony. Eyewitness accounts can be unreliable or unavailable, leading to difficulties in reconstructing events.

Traditional event identification methods heavily depend on fixed camera surveillance. These systems offer limited geographical coverage and often necessitate time-consuming manual retrieval and analysis of video footage. While mobile recording devices (e.g., cameras in vehicles and smartphones) are increasingly common, current systems lack effective integration to aggregate and utilize data from these disparate sources in conjunction with fixed surveillance. This prevents a holistic view of events.

Existing GPS and geofencing technologies define geographical boundaries and trigger alerts based on device location. However, they do not provide a solution for the systematic collection, correlation, and analysis of multi-source sensor data within those boundaries for comprehensive event identification and reconstruction.

An event, in the context of this invention, can encompass various scenarios requiring investigation, such as for legal, environmental, or financial reasons. These include, but are not limited to, alarms from personal safety devices, assaults, robberies, burglaries, suspected criminal activities, and collisions.

While systems exist for general event detection and data visualization, such as those described in US Patent No. 9,298,678 B2, these typically focus on specific data types or lack the comprehensive integration of heterogeneous fixed and mobile sensors with topographical analysis for real-world incident reconstruction.

Prior art systems, such as the portable apparatus and method for real-time automated multisensor data fusion and analysis disclosed in US Patent Publication No. 2018/0239991 A1, demonstrate efforts to integrate data from multiple sensors. However, these systems often do not adequately incorporate topographical data to refine sensor relevance or provide dynamic search area adjustments crucial for complex event reconstruction in varied environments.

The application of artificial intelligence to sensor data for analysis and improved performance, as seen in systems for data synthesis in autonomous control (e.g., US Patent No. 10,678,244 B2), highlights the general trend of AI integration. Nevertheless, these solutions do not typically focus on the systematic correlation of diverse fixed and mobile sensor data with topographical insights for comprehensive retrospective event identification and tracking.

Early advancements in sensor networking, such as the internetworked wireless integrated network sensor (WINS) nodes described in US Patent No. 6,859,831 B1, established the concept of interconnected sensor arrays. However, these systems primarily address data collection from fixed or less dynamic sensor networks and lack the adaptive capabilities and multi-source integration critical for current event identification needs.

While multi-sensor systems, often vehicle-mounted, exist for specific applications like object detection and collision avoidance (e.g., US Patent No. 6,771,208 B2), they do not provide a comprehensive framework for systematically combining data from a diverse array of fixed and mobile sensors, including personal devices, to reconstruct complex events across varying geographical and temporal contexts.

Geofencing utilizes GPS and IP addresses to create virtual boundaries and track devices within specific areas. A-GPS combines network data and GPS for faster location calculation compared to GPS alone. GNSS systems offer high precision, with accuracy within a few centimeters for short distances. WAAS technology in North America enables positioning with an accuracy of approximately 3 meters 95% of the time. Android devices can monitor up to 100 active geofences per application and user, with support for entry, exit, and stay events within designated areas. Geofencing is broadly categorized into two types: active (utilizing GPS and associated with high battery consumption) and passive (operating as a background process without continuous GPS reliance).
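
As an illustration of the passive variant, the following is a minimal sketch of a point-in-radius geofence test in Python; the circular-fence representation and all coordinates are illustrative assumptions, not part of the patent text.

```python
# Minimal sketch of a passive geofence check: no continuous GPS polling,
# just a distance test against stored circular fences (hypothetical layout).
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(lat, lon, fence):
    return haversine_m(lat, lon, fence["lat"], fence["lon"]) <= fence["radius_m"]

fence = {"name": "event_area", "lat": 59.33, "lon": 18.06, "radius_m": 100.0}
print(inside_fence(59.3301, 18.0601, fence))  # True: within ~15 m of center
```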

Furthermore, current systems typically fail to adequately account for environmental factors that may obstruct sensor data. For example, buildings or terrain can limit a camera’s field of view.

Therefore, a need exists for an improved system and method that:

  • Efficiently collects and correlates data from heterogeneous fixed and mobile sensors.
  • Automatically identifies relevant sensor data based on geographical location, time, and topographical considerations.
  • Provides comprehensive event reconstruction and analysis.

4. Summary of the Invention

The present invention is a system and method designed to overcome the limitations of existing event identification techniques. It achieves efficient and comprehensive event identification and reconstruction through the coordinated collection and analysis of data from a network of heterogeneous fixed and mobile sensors, including cameras. The system can also be integrated with modern Police Dispatch Centers, enhancing event identification, dispute resolution, and criminal prosecution support.

A central control unit is configured to receive and process sensor data, including location coordinates, timestamps, and optionally sensor orientation, to determine sensor activity within a defined geographical area and time interval. A timestamp refers to any temporal indicator associated with sensor-acquired data, representing either a discrete moment or a defined interval during which the data was captured.

The system’s novel features enable detailed event reconstruction, identification of involved parties and objects, recording of aggressive or suspected criminal behavior, and tracking of movement, even in topographically complex environments.

In at least one embodiment, a spatiotemporal reference comprises any temporal and/or geospatial indicator associated with sensor-acquired data, such as a timestamp denoting a specific moment in time or a time interval during which the data was captured, optionally together with geolocation coordinates indicating the location of acquisition.

In at least one embodiment, the sensor data entries are tagged with spatiotemporal metadata, including but not limited to a timestamp denoting the moment or interval of acquisition, and corresponding geolocation coordinates representing the position of the sensor at the time of capture.
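
A minimal sketch of such a tagged sensor-data entry, assuming a simple record layout; all field names are illustrative, not prescribed by the application:

```python
# Hypothetical record layout for a spatiotemporally tagged sensor entry.
from dataclasses import dataclass
from typing import Optional
from datetime import datetime, timezone

@dataclass
class SensorDataEntry:
    sensor_id: str
    payload: bytes                       # raw image, audio, or reading
    timestamp: datetime                  # moment or start of capture interval
    duration_s: float = 0.0              # 0 denotes a discrete moment
    lat: float = 0.0                     # sensor position at time of capture
    lon: float = 0.0
    orientation_deg: Optional[float] = None  # optional sensor heading

entry = SensorDataEntry(
    sensor_id="cam-017",
    payload=b"\x89PNG...",               # truncated image bytes for the example
    timestamp=datetime(2024, 5, 1, 12, 30, tzinfo=timezone.utc),
    duration_s=10.0,                     # a 10-second capture interval
    lat=59.3293, lon=18.0686,
    orientation_deg=270.0,               # camera facing west
)
```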

In at least one embodiment, the system uses data from sensors that register specific airborne particles in specific concentrations, which can affect health or help locate the origin of a substance. These sensors can include odor sensors, smoke sensors, and sensors designed to detect living and deceased organisms, such as injured animals or decaying organic matter. Appropriate sensors may be employed to detect substances classified as illegal drugs, substances used in warfare, and hidden narcotics in natural environments, vehicles, or buildings. Each person has a unique scent, which is what allows dogs to track individuals. These scent compounds can be detected and identified using specialized sensors and computer programs designed for this purpose, and used to track suspects, identify individuals, or store the type and amount of particles in the odorant of convicted persons in the system’s storage unit. All sensors that measure airborne particles are referred to herein as Airborne Particle Sensors.

In at least one embodiment, the system is designed to detect narcotics or other hazardous substances in wastewater by deploying sensors that analyze water for specific compounds. These sensors are strategically positioned to enable tracing the source of the detected substances and are used in conjunction with other sensors for identification of individuals in the area at specific times.

In at least one embodiment, the system is designed to support the reconstruction of suspected criminal activity, such as the sale of narcotics and endangered animals or similar illicit activities. This is achieved through the collection and storage of photographic evidence, for example, of sellers and buyers with the product or animal, for later identification with the aid of Artificial Intelligence (AI). Such data can also be utilized later in the event of an incident reported to the police.

In at least one embodiment, the system is designed to support the reconstruction of events and identification of involved parties or objects by utilizing collected and stored data from multiple fixed and mobile sensors within the specific geographical area and time of the event, with the goal of improving crime resolution, dispute resolution, and emergency response times.

In at least one embodiment, the system is designed to track a person or object in real time or retrospectively. This capability can be utilized to locate stolen vehicles, identify gang members, or monitor drug dealers and copper thieves. The system enables authorities to trace the whereabouts of, and interactions with, handlers and purchasers of illicit substances and stolen items.

In at least one embodiment, the system is designed to facilitate the identification of stolen objects or wanted individuals within a limited area by prompting the activation of mobile sensors. Collected images and videos from these sensors are processed with AI to recognize faces, license plates, or objects. When a person or object is identified and in motion, their likely coordinates are continuously calculated based on the direction of movement, speed, and timestamp.

In at least one embodiment, the system can expand the search area and activate additional sensors if the central control unit’s activated sensors lose direct contact with the fleeing object. Additionally, known locations that facilitate hiding or blending into crowds can be taken into consideration.

In at least one embodiment, the system is designed to retrieve information from a topographic map of a specified event location. The system determines which sensors were present in the vicinity during the specified time point or time interval and identifies each sensor’s location on the map. A specific time point means a definite moment, while a time interval, timeframe, or period indicates a prolonged event or uncertain timing. Additionally, the system assesses whether these sensors had a clear line of sight to the estimated location of the event at any point during the given period. This is feasible because topographic maps store the dimensions and distribution of all fixed objects, including the ground layout, with precise coordinates. By mapping the event coordinates along with those of the identified sensors, particularly image sensors that were positioned close enough to capture pertinent data, the system analyzes their locations on the map and evaluates whether there was an unobstructed line between the event and the sensors at any moment during the occurrence.

In at least one embodiment, to determine whether, and which, objects or features exist between two or more points on a topographic map, the system conducts a spatial analysis using topographic and thematic data encoded in the map. This process begins by georeferencing the two points of interest within the map’s coordinate system. The line connecting them—referred to as a transect or profile line—is then analyzed for intersecting symbols or contour data that indicate natural or manufactured features. These may include elevation changes (interpreted through contour line intervals), hydrographic elements (such as rivers, lakes, and wetlands), vegetation boundaries, infrastructure (roads, buildings), or land use classifications. By interpreting the symbology and intervals in accordance with cartographic standards, a qualitative and quantitative assessment can be made of both the presence and characteristics of any intervening features.
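
A minimal sketch of the line-of-sight assessment described in the two preceding paragraphs, assuming a `dem(lat, lon)` elevation lookup is available; the linear sampling and the 1.5 m sensor height are illustrative assumptions.

```python
# Sketch of a terrain-based line-of-sight test between a sensor and an event
# point, sampling a digital elevation model (DEM) along the sight line.
def clear_line_of_sight(dem, a, b, samples=100, sensor_height_m=1.5):
    """Return True if no terrain sample blocks the straight line from a to b.

    a, b: (lat, lon) tuples; dem: callable returning ground elevation in meters.
    """
    za = dem(*a) + sensor_height_m      # eye height at the sensor
    zb = dem(*b) + sensor_height_m      # target height at the event point
    for i in range(1, samples):
        t = i / samples
        lat = a[0] + t * (b[0] - a[0])
        lon = a[1] + t * (b[1] - a[1])
        sight_z = za + t * (zb - za)    # height of the sight line here
        if dem(lat, lon) > sight_z:     # terrain rises above the sight line
            return False
    return True

# Synthetic terrain: a 12 m ridge halfway between the points blocks the view.
demo_dem = lambda lat, lon: 12.0 if abs(lat - 59.3325) < 0.0005 else 0.0
print(clear_line_of_sight(demo_dem, (59.3300, 18.0600), (59.3350, 18.0700)))
# False: the ridge blocks the sight line
```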

In at least one embodiment, the system comprises: a central control unit configured to: receive sensor data from a plurality of heterogeneous sensors, the sensor data including location coordinates and timestamps; in response to an event report specifying a geographical area and a time point or time interval, identify a subset of the heterogeneous sensors that were active within the geographical area during the specified time, wherein the identification of the subset of heterogeneous sensors comprises analyzing topographical data to determine a relevance of individual heterogeneous sensors of the subset; retrieve the sensor data from the identified subset of heterogeneous sensors; and analyze the retrieved sensor data to reconstruct the event; and a data storage unit, communicatively coupled to the central control unit, for storing the sensor data.

In at least one embodiment, the method for identifying events comprises: receiving, at a central control unit, sensor data from a plurality of heterogeneous sensors, the sensor data including location coordinates and timestamps; in response to an event report specifying a geographical area and a time point or time interval, identifying, using the central control unit, a subset of the heterogeneous sensors that were active within the geographical area during the specified time, wherein the identifying of the subset of the heterogeneous sensors comprises analyzing, using the central control unit, topographical data to determine a relevance of individual heterogeneous sensors of the subset; retrieving, using the central control unit, the sensor data from the identified subset of heterogeneous sensors; and analyzing, using the central control unit, the retrieved sensor data to reconstruct the event.

In at least one embodiment, the system is designed to dynamically adjust, using the central control unit, the geographical area based on topographical data to track the movement of an object or individual.

In at least one embodiment, the system is designed to transmit, using the central control unit, a request to activate mobile heterogeneous sensors within a specified geographical area in response to an event report.

In at least one embodiment, the system is configured to receive and analyze a request or event report transmitted from mobile heterogeneous sensors to the central control unit.

In at least one embodiment, the system comprises a user interface configured to display: a map showing locations of the heterogeneous sensors and an event area; and real-time video feeds from a selection of the heterogeneous sensors.

In at least one embodiment, the prompt to activate mobile sensors, such as cameras, may include activation in a specific direction, based on the camera’s geographical location relative to the event to be monitored during a specific time or time interval.

5. Brief Description of the Drawings

Figure 1: System Architecture Diagram

  • Illustrates the system’s main components: the central control unit, fixed and mobile sensors of various types, the communication network facilitating data exchange, and the data storage unit/database.
  • The diagram highlights the flow of data and control signals between these components.
  • Corresponds to the system architecture described in Section 6.1 and supports claims 1 and 5 (the core system and method).

Figure 2: Data Flow Diagram

  • Depicts the sequence of data processing within the system, from receiving an event report to generating an output.
  • The diagram emphasizes the transformation of raw sensor data into analyzed and stored information.
  • Corresponds to the data flow described in Section 6.3 and supports claims 1, 5, and 7.

Figure 3: Event Reconstruction Process Flowchart

  • Details the logical steps taken by the system to reconstruct an event, starting from receiving an event report and proceeding through sensor data retrieval and analysis, with a decision point for determining if further data is needed.
  • Corresponds to the event reconstruction process described in Section 6.4 and supports claims 1 and 5 (the core method).

Figure 4: Sensor Activation Diagram

  • Illustrates the process by which the central control unit transmits activation requests to mobile sensors, potentially including notifications to users, and the option for manual user activation.
  • Corresponds to the system activation and control mechanisms described in Section 6.6 and supports claim 8.

Figure 5: Tracking Diagram

  • Illustrates a schematic example of how the central control unit follows a moving object from an event point.

Figure 6: Tracking Process Diagram

  • Depicts the system’s dynamic tracking of a moving object, showing the use of timestamped location data from mobile sensors to estimate the object’s trajectory and the corresponding adjustment of the search area.
  • Corresponds to the object tracking process described in Section 6.5 and supports claims 7 and 10.

Figure 7: User Interface Example

  • Presents an example user interface displaying a map with sensor locations, the event area, real-time video feeds, sensor lists, and user controls for functions such as search, tracking, and zoom.
  • Supports the overall system functionality and data output as described in Section 6.8.

Figure 8: Analysis Process Diagram

  • Illustrates the odor/airborne substance analysis process.

6. Detailed Description of the Invention

For clarity and ease of understanding, it should be noted that certain components may be assigned different reference numerals in different drawings. These differences are merely for illustrative purposes and do not indicate different structures unless explicitly stated. Accordingly, where applicable, components bearing different reference numerals but described similarly are to be understood as representing the same or corresponding elements.

6.1 System Overview

The present invention is a system and method designed to provide authorized entities, such as law enforcement, with efficient access to data from a network of heterogeneous sensors. This access facilitates the investigation and reconstruction of events occurring within defined geographical areas and time intervals. In this context, “events” encompass occurrences of interest to the authorized entity, including but not limited to criminal incidents, traffic accidents, natural disasters, and situations requiring verification of a vehicle’s or individual’s location at a specific time. An “event” can be assigned a specific geographical location or area adapted to the situation, also referred to herein as the Event Point, which is advantageously the center of a suspected or confirmed event.

All information collected regarding each event, commonly referred to as an incident, at the alleged location or area during a specified time serves as a basis for investigating the event. This data may include details about the nature and known extent of the event, urgent assistance needs, descriptions of any individuals or objects involved, known directions of flight or travel, means of transport, speed, specific behaviors, and other relevant factors. Such information is currently gathered during alarm center notifications and varies depending on the type and severity of the alarm. Information from various sources, with or without images, is utilized to generate a representation of the targeted individuals or objects. This data is input into a computer program, which then searches for these individuals or objects in the information gathered from fixed and mobile sensors within the search area.

The operator can assist the computer program when needed by specifying details for the search, such as a specific individual or object on the display, facial features and color, body structure, movement patterns, type and color of clothing, vehicle type and color, license plate numbers, and other identifiable and searchable characteristics, limited by the available sensors in the specified area. The operator can also program the system to follow an object, shown in the image on the display, forward or backward in time.

The system comprises a central control unit that communicates with and receives data from a network of connected sensors. This network includes both fixed sensors (e.g., cameras, microphones, airborne particle sensors) and mobile sensors (e.g., cameras and location-tracking devices in vehicles and personal communication devices). The central control unit is configured to:

  • Identify which sensors were active within a defined geographical area and specified time.
  • Retrieve relevant data from those sensors regarding searched events, people, or objects.
  • Analyze the collected data to reconstruct the event.

Event reports are received at the central control unit, specifying the event’s location using geographical coordinates or an equivalent spatial identifier. The system uses this location information to define an initial search area for identifying potentially relevant sensors. This search area’s boundaries can be refined using data from a digital topographic map.

Users of mobile sensor devices can configure permissions to control data access by the central control unit. These configurations may include:

  • Enabling or disabling data transmission.
  • Specifying whether the central control unit can access location coordinates only during active data collection or continuously.
  • Specifying whether the central control unit may contact them to ask whether they noticed anything by sight, hearing, or smell when their location data shows that they were close to an event.

A key feature of the invention is the system’s ability to account for topographical limitations when determining sensor relevance. The system analyzes digital topographic map data to identify potential obstructions (e.g., buildings, terrain features) that may impede a sensor’s ability to capture data relevant to the event location.

The system can dynamically adjust the geographical search area for relevant sensors. Adjustments can be based on factors such as:

  • The initial search area’s size (e.g., expanding if insufficient data is retrieved).
  • The topographical complexity of the area.
  • The need to track a moving object or individual.

Example of Computational Analysis for Identifying the Presence of Structures or Features Between Two Points on a Topographic Map.

  1. Map Ingestion and Preprocessing
    • If the map is in raster format (e.g., PNG, TIFF), it is first georeferenced and processed using Optical Character Recognition (OCR) and feature recognition algorithms.
    • Machine learning models or manual digitization convert visual symbols (e.g. contour lines, roads, rivers) into vector layers with spatial and semantic attributes.
  2. Coordinate Definition
    • The two user-defined points (A and B) are transformed into the coordinate reference system (CRS) of the map, such as WGS84 or UTM.
    • If GPS data is available, it can be used to enhance positional accuracy.
  3. Generation of a Profile Line
    • A straight or geodetically accurate polyline is generated between Point A and Point B.
    • This serves as the basis for spatial queries and topographic profiling.
  4. Spatial Intersection Analysis
    • The profile line is intersected with all available vector map layers:
      • Contour Lines: Each intersection is used to interpolate elevation and compute slope and terrain variation.
      • Hydrological Features: Rivers, lakes, and streams are identified where the line intersects hydro layers.
      • Land Use & Cover: Cross-referencing thematic layers reveals vegetation zones, urban areas, or agricultural land.
      • Manufactured Structures: Roads, trails, and buildings are extracted from relevant infrastructure layers.
  5. Topographic Profile Extraction (if elevation data available)
    • If a Digital Elevation Model (DEM) is available, elevation values are sampled at regular intervals along the profile line.
    • This generates a vertical cross-section (elevation profile) showing changes in terrain between the points.
  6. Semantic Annotation
    • Only sensors that have a clear line of sight between points A and B are shown in the first selection of available sensors.
    • Sensors can also be displayed based on a set maximum distance to the event point or area of interest.
    • The sensors and intersected features are annotated with metadata (such as type, name, attributes) and organized in traversal order from the Event Point (A) or following specifically searched information, such as specific type of sensors or the ranking of moving objects with sensors.
    • The result can be structured into tabular form, descriptive text, or 3D visualization for advanced applications.
  7. Optional: Path Optimization or Visibility Analysis
    • In some applications (e.g. line-of-sight, route planning), further analyses are performed:
      • Line-of-sight (LOS) checks visibility over terrain.
      • Path-finding algorithms (e.g. A*, Dijkstra) explore optimal traversable routes.
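
A minimal sketch of step 4 above (spatial intersection analysis) using the shapely geometry library; the example layers, coordinates, and printed attributes are illustrative assumptions.

```python
# Intersect a profile line (A -> B) with vector map layers and report every
# feature it crosses, in the spirit of step 4 above. Requires: pip install shapely
from shapely.geometry import LineString, Polygon

profile = LineString([(18.0600, 59.3300), (18.0700, 59.3350)])  # (lon, lat)

layers = {
    "building": Polygon([(18.063, 59.331), (18.065, 59.331),
                         (18.065, 59.333), (18.063, 59.333)]),
    "river": LineString([(18.050, 59.340), (18.080, 59.320)]),
}

for name, geom in layers.items():
    if profile.intersects(geom):
        hit = profile.intersection(geom)   # where the profile crosses the feature
        print(f"{name}: intersected at {hit.wkt}")
```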

When tracking a moving object, the system estimates the object’s mode of travel, direction, and speed using data from several sensors over time, in combination with a topographic map of the area. This can be used to calculate one or more possible escape routes from the last identified location of the object, allowing the system to predict the object’s location even if it moves outside the initial search area and to collect data from sensors in these new areas. The system can conduct this type of search for any time interval in the future or past, depending on the duration for which sensor data is stored.
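
A minimal sketch of the constant-course extrapolation described above; the flat-earth meters-per-degree conversion is an approximation that holds over short distances, and all figures are illustrative.

```python
# Dead-reckoning prediction: where should the search area move if the object
# holds its last estimated speed and bearing?
import math

def predict_position(lat, lon, speed_mps, bearing_deg, dt_s):
    """Extrapolate (lat, lon) after dt_s seconds at constant speed and bearing."""
    d = speed_mps * dt_s                       # distance traveled in meters
    brg = math.radians(bearing_deg)
    dlat = d * math.cos(brg) / 111_320.0       # meters per degree of latitude
    dlon = d * math.sin(brg) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Object last seen moving north-east at 4 m/s; where to search 240 s later.
print(predict_position(59.3300, 18.0600, 4.0, 45.0, 240.0))
```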

The direction and speed of an object’s movement in an image or film can be determined through various methods, including:

  • Using a fixed camera with its direction known in the system, where the area is fully or partially visible, and where AI processes the image to calculate movements, speeds, and directions of objects.
  • Using moving cameras where the recordings, in addition to the timestamp, include the camera’s geographical direction.
  • One approach involves identifying buildings, streets, fences, or similar landmarks by comparing the data collected from sensors of the objects’ surroundings with a topographic map. This allows for determining the position and movement of the object in relation to the identified surroundings, which can be expressed in directions and degrees.
  • Additionally, speed can be calculated from video or multiple photographs by measuring the distance the object has moved relative to its surroundings between timestamped images. Speed can also be calculated from the change in the object’s apparent size between timestamped images, where the deviation from a straight-line path can be taken into account if the calculation needs to be refined.
  • The direction of a camera during image acquisition can be defined by its orientation relative to a fixed reference frame and is determined using inertial and magnetic sensor data. This may involve an Inertial Measurement Unit (IMU) that provides measurements from gyroscopes (angular velocity), accelerometers (linear acceleration and gravity vector), and optionally magnetometers (Earth’s magnetic field). These data streams are processed through sensor fusion algorithms—such as Kalman filters or complementary filters—to estimate the camera’s attitude in 3D space (yaw, pitch, roll). This orientation can then be expressed as Euler angles, quaternions, or a rotation matrix, depending on the application. The estimated orientation is timestamped and synchronized with the image to allow geospatial or directional interpretation of the scene.
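
A minimal sketch of the sensor-fusion step named in the last item above, reduced to a single yaw axis with a complementary filter; the filter gain and the sample readings are illustrative assumptions.

```python
# Complementary filter for camera heading: the gyroscope is integrated for
# short-term accuracy, and the magnetometer heading corrects long-term drift.
# Wraparound near 0/360 degrees is ignored for brevity.
def fuse_yaw(yaw_deg, gyro_dps, mag_heading_deg, dt_s, alpha=0.98):
    """One filter step: blend the integrated gyro rate with the magnetometer."""
    gyro_estimate = yaw_deg + gyro_dps * dt_s        # drifts slowly over time
    fused = alpha * gyro_estimate + (1 - alpha) * mag_heading_deg
    return fused % 360.0

yaw = 90.0  # initial camera heading, degrees from north
for gyro_dps, mag_deg in [(2.0, 91.0), (2.0, 92.0), (1.5, 92.5)]:
    yaw = fuse_yaw(yaw, gyro_dps, mag_deg, dt_s=0.1)
print(round(yaw, 2))  # drift-corrected heading stamped onto each image
```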

All people move in slightly different ways when observed at a detailed level, which can be used to identify individuals with the help of AI and gait analysis. With a high-resolution camera, detailed data on eyes, skin, hair, and clothing can also be used for identification. The AI is also configured to calculate the size of an object. This can be done by calculating how much of the field of view the object covers at a known magnification if the distance is known, or by comparing it with the immediate surroundings, where the size of fixed objects can be retrieved from the topographic map. Movable objects, such as specific vehicles, can have their sizes retrieved from computer programs with stored data about that specific vehicle. The distance to an object can be calculated by comparing the size of the object with the size of known structures in its vicinity.
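
A minimal sketch of the size and distance estimates described above, using simple pinhole-camera proportions; the reference feature and all numbers are illustrative assumptions.

```python
# Size from a reference feature of known real size in the same image, and
# range from the pinhole-camera relation: distance = focal_px * size_m / size_px.
def estimate_size_m(object_px, reference_px, reference_size_m):
    """Object size from a reference feature of known size at similar range."""
    return object_px * reference_size_m / reference_px

def estimate_distance_m(real_size_m, size_px, focal_px):
    """Pinhole-camera range estimate from known real size and pixel extent."""
    return focal_px * real_size_m / size_px

door_px, door_m = 80, 2.0            # reference door size from the topographic map
car_px = 180
car_m = estimate_size_m(car_px, door_px, door_m)            # ~4.5 m long
print(car_m, estimate_distance_m(car_m, car_px, focal_px=1400))  # ~35 m away
```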

6.2 Sensor Types and Data Input

The system utilizes data from a combination of fixed and mobile sensors to provide comprehensive event information, including eyewitness accounts when available.

Fixed sensors are deployed in stationary locations to continuously or intermittently monitor designated areas (as described in FIG. 1). These sensors can include, but are not limited to:

  • Cameras (video and still image) installed in buildings, on utility poles, or within infrastructure.
  • Microphones placed strategically for audio capture.
  • Airborne Particle Sensors designed to detect specific airborne substances such as odor sensors.
  • Automated wastewater analysis sensors for analyzing water for specific compounds.

Fixed sensors are configured to provide data that includes:

  • Location coordinates.
  • Timestamps.
  • Optionally, sensor orientation (e.g., camera direction).

Mobile sensors are integrated into movable platforms to capture data as they traverse an area (as described in FIG. 1). These sensors can include, but are not limited to:

  • Cameras mounted on vehicles (e.g., cars, trucks, buses, motorcycles, boats, trains, helicopters, e-scooters, drones, and satellites).
  • Cameras and location-tracking devices within personal communication devices (e.g., mobile phones, body-worn cameras).

Mobile sensors are configured to provide data that includes:

  • Location coordinates.
  • Timestamps.
  • Optionally, microphone and camera orientation.

Users of mobile sensor devices can configure settings to control data transmission to the central control unit, including enabling/disabling transmission and specifying location data access permissions (continuous or event-triggered).

Each reported or detected event is assigned an event code, such as coordinates and timestamp, as well as a code for the type of event according to relevant guidelines (e.g., police guidelines). If multiple objects from the same event move in different directions, they can be tracked similarly and assigned individual identifiers (e.g., symbols, numbers, or letters).

In at least one embodiment, the direction of motion of an escape route is determined using collected data. This process involves utilizing cameras with stored coordinates, timestamps, and direction to calculate the object’s trajectory and movement per unit of time within the image that covers known coordinates. The direction of movement can be specified, for example, by degrees or by indicating the name of the street and direction.

In at least one embodiment, laser or LIDAR (Light Detection and Ranging) technology is used to determine the direction of movement or velocity of objects moving from or toward an event.

In at least one embodiment, the system is used for identification of individuals or vehicles by utilizing AI to compare collected data with stored information on wanted persons or stolen vehicles. Upon detecting a match, the system alerts authorities and activates sensors to track the individual or vehicle in real time until law enforcement intervenes.

In at least one embodiment, sensors are included to detect and analyze airborne substances. These sensors can identify:

  • Combustion products (e.g., fire smoke, gunpowder residue).
  • Controlled substances (e.g., narcotics).
  • Chemicals and gases (particularly those relevant to safety or security concerns).

In at least one embodiment, the system utilizes odor sensors and other airborne particle detectors to:

  • Locate fires.
  • Detect the presence of living or deceased beings (humans or animals).
  • Detect concentrations of substances that can affect people’s health and general condition, such as those causing allergic reactions and breathing difficulties.
  • Locate controlled substances.

In at least one embodiment, infrared cameras are used to:

  • Identify objects or individuals with temperature differentials relative to their surroundings. Examples include locating fleeing suspects, vehicles with recently operated engines, and injured animals.

In at least one embodiment, directional microphones are deployed (e.g., in protective cases on vehicle roofs) to capture targeted audio information, such as conversations within other vehicles.

In at least one embodiment, data from sensors on a pursuing vehicle is used to automatically determine and display the position, direction, and speed of a fleeing vehicle. This information can be transmitted to other units.

In at least one embodiment, fixed or vehicle-mounted sensors include body scanning or millimeter-wave scanning technology to detect concealed objects. This technology uses millimeter waves to penetrate clothing and create detailed images of the body’s surface. These sensors can be used by law enforcement to identify individuals carrying weapons.

Data transmitted from sensors to the central control unit includes information sufficient to determine the sensor’s location and time of activation. This data may consist of:

  • Full sensor data (e.g., image/video, audio, sensor readings).
  • Or, to minimize data transmission, location coordinates and timestamps.

In at least one embodiment, sensor data is stored locally on the sensor’s connected computing unit for a predetermined period (e.g., 48 hours) and transmitted to the central control unit only when deemed relevant to a specific event. All data exchanged between sensors and the central control unit is encrypted to protect against unauthorized access and to preserve device anonymity. The central control unit maintains sole knowledge of the association between sensor identifiers and the entities controlling those sensors.
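
A minimal sketch of the local retention and relevance-gated transmission policy described above; the 48-hour window comes from the text, while the record layout, the `upload` callback, and the bounding-box containment test are illustrative assumptions.

```python
# Local store sweep on the sensor's computing unit: purge expired entries,
# transmit only entries relevant to a query from the central control unit.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=48)  # retention period named in the text

class Box:
    """Hypothetical search area as a lat/lon bounding box."""
    def __init__(self, lat_min, lat_max, lon_min, lon_max):
        self.b = (lat_min, lat_max, lon_min, lon_max)
    def contains(self, lat, lon):
        return self.b[0] <= lat <= self.b[1] and self.b[2] <= lon <= self.b[3]

def sweep_local_store(entries, area, t0, t1, upload):
    """Purge expired entries; transmit only those relevant to the query."""
    now = datetime.now(timezone.utc)
    kept = []
    for e in entries:
        if now - e["ts"] > RETENTION:
            continue                          # expired: purge silently
        if t0 <= e["ts"] <= t1 and area.contains(e["lat"], e["lon"]):
            upload(e)                         # relevant: send to control unit
        kept.append(e)
    return kept

now = datetime.now(timezone.utc)
entries = [{"ts": now - timedelta(hours=2), "lat": 59.33, "lon": 18.06},
           {"ts": now - timedelta(hours=60), "lat": 59.33, "lon": 18.06}]
area = Box(59.32, 59.34, 18.05, 18.07)
kept = sweep_local_store(entries, area, now - timedelta(hours=6), now, print)
print(len(kept))  # 1: the 60-hour-old entry was purged
```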

6.3 Data Processing and Analysis

Upon receiving an event report, the central control unit initiates data processing to identify relevant sensors. This process involves:

  • Identifying as precisely as possible what information is being sought, such as the identification of an event, individual or object, a license plate, or similar details.
  • Determining the geographical coordinates and time interval specified in the event report.
  • Identifying fixed and mobile sensors active within those coordinates and that time interval.

The system prioritizes the retrieval of data from fixed sensors (e.g., cameras, microphones) as an initial step. If necessary, the system expands the search to include mobile sensors and/or a broader geographical area to gather supplementary information. The system can prioritize the acquisition of various types of mobile sensors and their carriers. For instance, it may rank emergency vehicles, security companies, taxis, private vehicles, and personal sensors in a specific order. The system can be configured to restrict data retrieval from mobile sensors to designated coordinates, such as a city or district with high crime rates.
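
A minimal sketch of the ranking described above; the priority table and the sensor record layout are illustrative assumptions, not fixed by the patent.

```python
# Rank candidate sensors: fixed sensors first, then mobile carriers in a
# configured order, with proximity to the event as a tie-breaker.
CARRIER_PRIORITY = {"fixed": 0, "emergency": 1, "security": 2,
                    "taxi": 3, "private_vehicle": 4, "personal": 5}

def rank_sensors(sensors, event_lat, event_lon):
    """Order candidate sensors by carrier class, then by distance to the event."""
    def key(s):
        d = ((s["lat"] - event_lat) ** 2 + (s["lon"] - event_lon) ** 2) ** 0.5
        return (CARRIER_PRIORITY.get(s["carrier"], 99), d)
    return sorted(sensors, key=key)

sensors = [
    {"id": "taxi-3", "carrier": "taxi", "lat": 59.331, "lon": 18.061},
    {"id": "cam-1", "carrier": "fixed", "lat": 59.333, "lon": 18.065},
    {"id": "phone-9", "carrier": "personal", "lat": 59.330, "lon": 18.060},
]
print([s["id"] for s in rank_sensors(sensors, 59.3300, 18.0600)])
# ['cam-1', 'taxi-3', 'phone-9']
```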

The retrieved sensor data, which may include images, video, audio, and location information, is analyzed to reconstruct the event. Analysis of data from specialized sensors may involve:

  • Triangulation of sound or odor sources using data from multiple sensors. This triangulation considers factors such as sound volume or airborne particle density.
  • Incorporation of environmental data, such as local weather conditions (e.g., wind direction and speed), to refine the analysis.
  • Analysis of odor sensor data to determine the type and concentration of airborne substances. This information can be used to assess potential hazards and trigger alerts.
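
A minimal sketch of intensity-weighted source localization across several sensors, with a crude upwind correction for the wind factor mentioned in the list above; the weighting scheme and wind offset are illustrative assumptions, not the patent's stated algorithm.

```python
# Weighted-centroid localization: stronger readings pull the estimate toward
# their sensor; the result is then stepped upwind, since particles drift
# downwind from their origin.
def locate_source(readings, wind_dx=0.0, wind_dy=0.0):
    """readings: list of (x_m, y_m, intensity); returns estimated (x, y)."""
    total = sum(r[2] for r in readings)
    x = sum(r[0] * r[2] for r in readings) / total
    y = sum(r[1] * r[2] for r in readings) / total
    return x - wind_dx, y - wind_dy        # shift the estimate back upwind

readings = [(0.0, 0.0, 9.0), (100.0, 0.0, 3.0), (0.0, 100.0, 3.0)]
print(locate_source(readings, wind_dx=10.0, wind_dy=0.0))  # (10.0, 20.0)
```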

Software algorithms are employed to facilitate data analysis. These algorithms can include, but are not limited to:

  • Object recognition.
  • Facial recognition.
  • License plate recognition.
  • Vehicle identification (e.g., by shape, color, or acoustic signature).

A key aspect of the invention is the use of software algorithms to analyze a digital topographic map or a Geographic Information System (GIS) map to assess event proximity and sensor relevance. These algorithms identify potential sensor locations with a view of the event, considering factors such as:

  • Elevation differences between the event location and sensor locations.
  • Potential obstructions (e.g., buildings, walls, terrain features) that may block a sensor’s line of sight.

In at least one embodiment, software algorithms are used to calculate probable escape routes from an event location. Parameters for this calculation may include:

  • Known direction of travel.
  • Characteristics of individuals involved.
  • Physical layout of the location.
  • Proximity to hiding places or transportation options.
  • Type and severity of the event (which may indicate a planned escape).
  • Identifying people who were within sight of the event, and who may be witnesses.

This escape route information is then used to guide the search for relevant sensor data. AI algorithms can be used to track identified individuals across multiple sensors.
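
A minimal sketch of ranking candidate escape routes over a street graph with Dijkstra's algorithm, one plausible realization of the calculation described above; the graph, node names, and travel-time weights are illustrative assumptions.

```python
# Rank escape destinations by shortest travel time from the event point over a
# small street graph (edges carry travel time in seconds).
import heapq

def dijkstra(graph, start):
    """Shortest travel time in seconds from start to every reachable node."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist

# Streets around the event point; exits are a metro stop, a park, a parking lot.
graph = {
    "event": [("crossing", 30.0), ("alley", 20.0)],
    "crossing": [("metro", 60.0), ("park", 90.0)],
    "alley": [("parking", 45.0)],
}
times = dijkstra(graph, "event")
exits = sorted(["metro", "park", "parking"], key=lambda n: times[n])
print(exits)  # most quickly reachable hiding/transport options first
```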

If the initial data analysis is insufficient to provide a clear understanding of the event, the system can iteratively expand the search area or time interval to gather additional data. Subject to user permissions and privacy settings, the central control unit may activate mobile sensors within a defined area to collect real-time data in response to an event.

6.4 Event Reconstruction and Output

The system utilizes retrieved sensor data to reconstruct events and provide comprehensive output to authorized users. This process may involve:

  • Integrating supplementary data from external sources, such as vehicle registration databases or address databases, to enhance the identification of persons or objects.
  • Analyzing audio data from sensors to extract information such as speech content or acoustic signatures (e.g., specific person or engine sounds) for improved identification.
  • Analyzing data from airborne substance sensors to identify and characterize detected substances.
  • Employing artificial intelligence (AI) algorithms to enhance the analysis of visual, audio, and other sensor data. AI techniques, including machine learning, can improve the accuracy and efficiency of identification and event reconstruction.

The system is configured to reconstruct events by tracing the movement of objects or individuals of interest. This is achieved by iteratively expanding the search area and/or time interval, using location and timestamp data to follow the subject’s trajectory before, during, and after the event.

6.5 Data Storage and Retrieval

Sensor data, including location coordinates and timestamps, is stored either:

  • Locally on the sensor device.
  • Or transmitted in real-time to the central control unit.

In response to an event, the system retrieves relevant data by:

  • Initially querying the data storage unit for information from fixed sensors active within the specified geographical coordinates and time interval.
  • If the fixed sensor data is insufficient, identifying mobile sensors present within the area and time interval and retrieving their data.
  • Data retrieved from mobile sensors may include images, video, audio, and location information.
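
A minimal sketch of the fixed-first retrieval order described above, expressed as a SQLite query; the table layout and column names are illustrative assumptions.

```python
# Query fixed sensors within the search box and time window first; fall back
# to mobile sensors only if the fixed query returns nothing.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE sensor_data ("
    " sensor_id TEXT, kind TEXT,"   # kind is 'fixed' or 'mobile'
    " lat REAL, lon REAL, ts REAL)" # ts is a unix timestamp
)
con.executemany("INSERT INTO sensor_data VALUES (?,?,?,?,?)", [
    ("cam-1", "fixed", 59.3301, 18.0602, 1000.0),
    ("phone-9", "mobile", 59.3305, 18.0610, 1020.0),
    ("cam-7", "fixed", 59.4000, 18.2000, 1010.0),  # outside the search box
])

def query(kind, box, t0, t1):
    return con.execute(
        "SELECT sensor_id FROM sensor_data WHERE kind=? AND ts BETWEEN ? AND ?"
        " AND lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
        (kind, t0, t1, box[0], box[1], box[2], box[3])).fetchall()

box = (59.325, 59.335, 18.055, 18.065)    # lat_min, lat_max, lon_min, lon_max
rows = query("fixed", box, 900.0, 1100.0) # fixed sensors first
if not rows:                              # insufficient: fall back to mobile
    rows = query("mobile", box, 900.0, 1100.0)
print(rows)  # [('cam-1',)]
```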

In at least one embodiment, mobile sensor devices analyze their locally stored data upon receiving a query from the central control unit, transmitting only data deemed relevant to the event.

The system can also identify and flag potential witnesses to an event based on sensor location data. This functionality enables authorized personnel to follow up with individuals who may have valuable information. The system can also obtain data from entities that track the location of connected devices, such as mobile phones linked to telecommunications companies, to identify witnesses or suspects when required. In addition, it can retrieve information from entities that monitor connected devices that are continuously tracked but do not provide direct sensor access, such as rented bicycles and motorized scooters, which can serve as witnesses to events.

6.6 System Activation and Control

The central control unit can initiate actions to gather data in response to an event, including:

  • Transmitting a request to users of mobile sensors within a defined geographical area, prompting them to record and transmit data. This request may be delivered via text message, audio alert, or other notification mechanism.
  • Subject to user permissions and privacy settings, remotely activating mobile sensors within a specified geographical area following a critical event alert, including those in moving and parked vehicles.
  • Providing real-time notifications to users who have opted into continuous location data sharing, informing them of nearby events where their data may be relevant.

6.7 Compensation Mechanism

The system may include a compensation mechanism designed to incentivize users to contribute sensor data. This mechanism can provide users with compensation (e.g., a fixed fee or other reward) in exchange for providing data relevant to a specific event being investigated by the central control unit.

6.8 System Components

The system comprises the following key components (as illustrated in FIG. 1):

  • Sensors: A network of heterogeneous sensors, including:
    • Fixed sensors: Deployed in stationary locations (e.g., buildings, infrastructure) to monitor designated areas. Examples include cameras, microphones, and odor sensors.
    • Mobile sensors: Integrated into movable platforms such as vehicles, personal communication devices, and drones. Examples include cameras, location-tracking devices, etc.
  • Communication Network: A network infrastructure that facilitates data exchange between sensors and the central control unit. This network may utilize various technologies, including cellular networks, Wi-Fi, and wired connections.
  • Central Control Unit: A computing system that performs the following functions:
    • Receives event reports and defines search parameters (e.g., geographical coordinates, time interval).
    • Identifies relevant sensors based on search parameters and topographical data.
    • Cameras and other fixed-mounted sensors can connect directly to the central control unit, similar to how police monitor high-risk areas today.
    • Retrieves and analyzes sensor data from identified sensors.
    • Controls sensor activation and communication (in some embodiments, subject to user permissions).
    • Generates output in a usable format (e.g., video displays, reports, maps).
    • Displays the event location on a digital topographic map or equivalent. This display may use symbols, letters, numbers, colors, or combinations thereof to indicate the placement of fixed and mobile sensors, and can show, from stored data, how these moved before, during, and after the event.
    • The collected data enables a monitoring operator to take appropriate actions when required, including interrupting or intensifying the search or providing specific information to emergency personnel.
  • Data Storage: A database or equivalent storage system for maintaining sensor data, location information, timestamps, and metadata.

6.9 Operational Steps

The system operates according to the following sequence of steps in response to receiving an event report (as depicted in FIG. 2 and FIG. 3):

  • Receive an event report specifying the event’s location and time interval.
  • To optimize the process, the operator can assist the AI unit during training as needed at each step.
  • Define an initial geographical search area around the event location using geographical coordinates.
  • Identify fixed sensors located within the initial search area.
  • Retrieve and analyze data from the identified fixed sensors.
  • Determine if the data from fixed sensors is sufficient to reconstruct the event.
  • If the fixed sensor data is insufficient, identify mobile sensors present within the initial search area during the specified time interval.
  • Retrieve and analyze data from the identified mobile sensors.
  • If necessary, expand the geographical search area or time interval to gather additional data.
  • If object or individual tracking is required, dynamically adjust the search area based on timestamps and estimated movement patterns to follow the object’s trajectory.
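
A minimal sketch of this sequence as a control loop; the sensor records and the sufficiency test are illustrative stand-ins for the subsystems described above.

```python
# Fixed-first, iteratively expanding search loop over candidate sensors.
def reconstruct_event(report, sensors, max_rounds=3):
    """Gather sensor records, widening the search area until enough are found."""
    area = report["radius_m"]
    gathered = []
    for _ in range(max_rounds):
        for kind in ("fixed", "mobile"):          # fixed sensors first
            gathered += [s for s in sensors
                         if s["kind"] == kind
                         and s["dist_m"] <= area
                         and s not in gathered]
            if len(gathered) >= report["needed"]: # enough data to reconstruct?
                return gathered
        area *= 2                                 # expand the search area
    return gathered                               # best effort after all rounds

sensors = [{"kind": "mobile", "dist_m": 150.0},
           {"kind": "fixed", "dist_m": 400.0}]
print(reconstruct_event({"radius_m": 100.0, "needed": 2}, sensors))
```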

6.10 Data Storage and Output

Following event analysis, relevant data (e.g., images, video, tracking information) is stored for evidentiary purposes and potential use in legal proceedings. The central control unit presents event information to authorized users through a user interface that may include:

  • A map display showing sensor locations and the event area (as depicted in FIG. 7).
  • Real-time monitoring capabilities (if available).
  • Tools for retrieving stored data.

The system incorporates topographical data (e.g., locations of buildings, bodies of water) to refine the search area for relevant sensors.

Detailed Description of Figures

Figure 1: Illustrates the system architecture. The central control unit (100) is connected to fixed sensors (101, 102, 103, 104); the fixed sensors are directly connected according to common current practice and are usually monitored continuously. The central control unit (100) is, in this example, connected to a communication network (114) for synchronizing the collection of data from connected sensors, feedback messages, and the like. This includes mobile sensors (105, 106, 107, 108, 109, 110, 111, 112). The data collected from sensors is stored centrally in the data storage unit, shown in the drawing as a database (113). Arrows indicate the flow of data and control signals, which, as in this example, can travel in both directions. The system allows for an almost unlimited number of connected sensors.

Figure 2: Illustrates the data flow within the system. The process begins at Start (200), with the receipt of an event report (201). The central control unit then receives the event report (202), identifies relevant sensors (203), retrieves sensor data (204), analyzes the data (205), stores the data (206), and generates an output (207), before terminating at End (208).

Figure 3: Illustrates the event reconstruction process. The process begins at ‘Start’ (300) and proceeds with receiving an event report (301). The system identifies fixed sensors in the area (302) and retrieves their data (303). A decision is made regarding whether the event is clear (304). If ‘Yes,’ the process ends (309). If ‘No,’ the system identifies mobile sensors (305), retrieves their data (306), analyzes the combined data (307), and expands the search area or time if needed (308) before ending (309).

Figure 4: Illustrates the sensor activation process. The central control unit (400) receives a report on an event and calculates specific coordinates of the event. It then seeks information about the event by sending an ‘Activation Request’ to the communication network (401). This results in a ‘Notification Alert’ being sent to the mobile phone of a user in the immediate vicinity of the incident, prompting them to activate their camera (412). The user (413) may also manually activate the camera or other sensors. A large number of sensors of various types may be connected to the Communication Network (401). Sensors are selected depending on the type of information sought and on whether they are located at coordinates from which they could have registered the event. Mobile sensors (402-412) transmit ‘Sensor Information’ back through the network. The different sensors can be configured to transmit the sensor’s location data continuously or when the user chooses. Fixed sensors (414) can be directly connected to the control unit (400), for example, a camera (415) for continuous monitoring.

Mobile sensors (402-412) may include, but are not limited to:

  • Video Camera
  • Mobile Camera
  • Helicopter-placed Infrared Camera
  • Odor Sensor (fixed or mobile)
  • Mobile Microphone (placed in a mobile telephone etc.)
  • Mobile Camera (in an Ambulance)
  • Boat Camera and Odor Sensor
  • Police Car with Millimeter-Wave Sensor, Microphone, Video Camera
  • Taxi Video Camera
  • Helmet Video Camera

Figure 5: Shows a schematic example of how the Central Control Unit (500) follows a moving Object (503) from an Event Point (502), shown on Timestamped Images (504), and how different fixed and moving sensors can capture the next sequence of events. Information from fixed and mobile sensors is obtained based on the coordinates at which the object is located and is estimated to be if it continues in the same direction and at the same speed; in this example, the incident in the image was reported 4 minutes earlier. The Communication Network (501) maintains contact with a number of sensors in the vicinity of the Event Point (502). The Central Control Unit (500) then selects the subset of the heterogeneous sensors that were/are activated during the time of the event and had a clear view of the event and/or the Object (503) during its movement. The sensors consist of a Video Camera (505) that transmits timestamped images to the Communication Network (501), a Camera (506) that transmits location data, a vehicle with a Video Camera (507) that transmits location data, and a Camera (508) that transmits location data. A Video Camera (509) transmits continuously, and a vehicle with a Microphone (510) transmits location data. From all sensors that transmit location data, the Central Control Unit (500) can collect data to follow the fleeing subject and investigate the course of events. Furthermore, the system considers which sensors have a geographical position that enables the event or the fleeing person to be registered, and the information is stored with location and timestamp. The Central Control Unit (500) can at any stage expand the search as needed; the search is displayed on a Topographic Map (511), with the extended search area marked with, for example, lines (not shown). Each sensor is marked at its actual location on the topographic map with a specific symbol and/or color for each type of sensor (not shown).

Figure 6: Illustrates the object tracking process. Mobile sensors on vehicles (602, 603) are shown, where vehicle (602) is equipped with camera sensors and a microphone. The driver or the vehicle’s AI-implemented computer unit raises an alert about an Event (610) and sends timestamped images and Coordinates (611) to the Communication Network (601), which transmits the information to the Central Control Unit (600). The Central Control Unit (600) decides that the event must be investigated and determines an initial Search Area (607), in this example a circle 100 meters in diameter. The search for the suspicious Object (606) is conducted with all data collected by vehicle (602) included in the search. However, no additional sensors beyond the one that triggered the alarm are detected in this area. Consequently, a new Search Area (608) with a diameter of 200 meters is designated by dynamically adjusting the search parameters. No new hits with activated sensors are found, but data from vehicle (602) indicates the direction of escape, and a further Search Area (609) is determined. Here, a Fixed Camera (612) is located, whose collected data shows that Object (606) ran past with calculable direction and speed. For more information, data is obtained from sensors with location data enabled: an individual with a Mobile Camera (604) transmits ‘Location Data’ and ‘Timestamped Images’ (613) to the Communication Network (601), which relays this information to the Central Control Unit (600). Furthermore, location data shows that an individual with a Camera (605) is in the direction of escape of Object (606), so a request is sent to the individual to activate their camera (605). Additionally, video recording is obtained from the parked vehicle’s Camera (603) as the search area is expanded. The diagram shows the tracked Object (606) and its movements from the coordinates of the Event (610), with timestamps from each active sensor in the search area, and the system’s dynamic expansion of Search Areas (608) and (609) to maintain tracking. Arrows from camera sensors to the object indicate filming; arrows from personal sensors to the Communication Network indicate Location Data and Timestamped Images.

Figure 7: Shows an example user interface. The interface displays a System Display (700) after the system has been activated by entering Event Coordinates (704). A Topographic Map (710) shows sensor locations and an ‘Event Location’ marker (701). Windows are provided for a ‘Video Feed’ (702) that displays the information stored on whichever sensor the operator or AI chooses to retrieve information from by interacting with the buttons. In the event of an active event, information from multiple sensors should be reviewed simultaneously via multiple Video Feed windows (702). User control buttons (705, 706, 707, 708, 709) allow for actions such as ‘Start Search,’ ‘Track Object,’ ‘Zoom In,’ ‘Filter Sensors,’ and ‘Activate Sensors.’ The size of the search area around the Event Location (701) can be pre-programmed, set manually, or calculated automatically by the system’s AI unit based on the available sensors in the area with a view of the event location. The topographic map (710) of the event location shows sensors whose coordinates indicate they were at the site at the time; these can be displayed with specific letters, numbers, symbols, colors, or combinations thereof (shown here as symbols for fixed cameras and objects equipped with specific sensors). For quick identification, the type of sensor can be displayed to help the operator (shown here with symbols and number references; each sensor’s data can be retrieved by activating it in the List of Sensors (703)). This can be managed fully automatically by AI, but authorities are expected to require that the system be monitored by operators during a transitional period. The ‘List of Sensors’ (703) shows available sensors (711 and 713) that had a clear view of the Event Location (701) and are within the selected search area around the event location (701); it is shown after activation of the ‘Start Search’ button (705). In this case, video and camera sensors are prioritized. The symbol O.B. (712) represents known data about a fleeing suspicious object and its last known direction of movement. Sensors (714, 715, and 716) are in this example within the searched area but do not have a clear view of the Event Location (701) and have therefore been placed under the Extended Area. Activating ‘Filter Sensors’ (708) filters which sensors data should be retrieved from, for example prioritizing fixed sensors with a view toward the Event Location (701) in the first instance. If these do not provide sufficient information, data is retrieved from mobile sensors that were within sight of the searched event. Here the operator can also choose the type of sensor, such as camera, odor, air particle, particle-in-liquid, or microphone sensors.

Figure 8: Illustrates the airborne particle substance analysis process after sample collection by a suitable collection device. The process begins, in this example, with ‘Odor Sensor Input’ (800), representing the raw data received from the odor sensor. This data undergoes ‘Signal Processing’ (801) to prepare it for analysis. In the ‘Substance Identification’ step (802), the processed signal is compared to known substance signatures to identify the chemical composition of the detected substance. The ‘Concentration Measurement’ step (803) determines the quantity of the identified substance. ‘Data Correlation’ (804) integrates the substance information with other relevant data, such as location and time. A decision is made at ‘Threshold Exceeded?’ (805) to determine whether an alert needs to be generated. If the concentration exceeds a predefined threshold (‘Yes’ path), a ‘Generate Alert’ action (806) is triggered. Otherwise (‘No’ path), the data is logged at ‘Log Data’ (807) for future reference. The process concludes at ‘End’ (808).
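
A minimal sketch of the Figure 8 decision flow; the signature table, the nearest-match identification, and the threshold values are illustrative assumptions.

```python
# Substance identification by nearest known signature, followed by the
# threshold decision that separates 'Generate Alert' from 'Log Data'.
SIGNATURES = {"smoke": [0.9, 0.1, 0.0], "gunpowder": [0.2, 0.7, 0.1]}
THRESHOLDS = {"smoke": 50.0, "gunpowder": 5.0}   # alert levels, arbitrary units

def analyze_sample(raw, concentration, location, ts):
    # Substance identification (802): nearest signature by squared error.
    name = min(SIGNATURES, key=lambda k: sum(
        (a - b) ** 2 for a, b in zip(SIGNATURES[k], raw)))
    # Data correlation (804): attach location and time to the finding.
    record = {"substance": name, "conc": concentration,
              "location": location, "ts": ts}
    if concentration > THRESHOLDS[name]:         # threshold exceeded? (805)
        return ("ALERT", record)                 # generate alert (806)
    return ("LOG", record)                       # log data (807)

print(analyze_sample([0.85, 0.15, 0.0], 72.0, (59.33, 18.06), 1000.0))
# ('ALERT', {'substance': 'smoke', ...}): smoke above its alert level
```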

This invention aims to enhance the process of identifying suspected or reported events within a specific location and specific time or timeframe/period. It is up to the relevant authorities to determine which aspects to implement and to introduce certain restrictions. For example, the monitored area could include regions with high crime rates, parks, protected sites, or similar locations.
