WO2025207878A1 - Method and system for map building using perception and motion sensors

Method and system for map building using perception and motion sensors

Info

Publication number
WO2025207878A1
Authority
WO
WIPO (PCT)
Prior art keywords
map
platform
perception
model
sensor data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/021747
Other languages
French (fr)
Inventor
Jacques Georgy
Abdelrahman ALI
Christopher Goodall
Dylan KRUPITY
Noah GIUSTINI
Seyed Mohammad Mohammadi Jahromi
Zhengwei Li
Zhenghang DUAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InvenSense Inc
Original Assignee
InvenSense Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by InvenSense Inc
Publication of WO2025207878A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3848 Data obtained from both position sensors and additional sensors
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles

Definitions

  • This disclosure generally relates to providing a navigation solution for a moving platform and more specifically to such techniques that employ perception and motion sensor information.
  • BACKGROUND
  • The field of autonomous navigation has undergone significant evolution over the past few decades, driven by advancements in sensor technology, computational power, and algorithmic sophistication.
  • At the forefront of potential applications are autonomous road vehicles.
  • The benefits are manifold, with significant improvements to safety among the most important. Unlike human drivers, automated systems are not affected by fatigue or distractions and therefore react much faster to conditions on the road, which can dramatically reduce accidents and save lives.
  • Traditional navigation systems, primarily reliant on basic Global Navigation Satellite System (GNSS) technology (including specific implementations such as, without limitation, the Global Positioning System (GPS), GLONASS, Galileo and/or Beidou) and simple sensor arrays, have gradually given way to more complex, integrated systems.
  • IMU-based systems such as inertial navigation systems (INS) can provide a self-contained complement to GNSS. In environments such as urban canyons, GNSS signals are sometimes completely blocked or affected by severe multipath.
  • A traditional approach to accurately estimate the pose of the vehicle is to use sensor fusion algorithms to integrate IMU and GNSS signals.
  • The performance of the GNSS system can also be enhanced by using Differential GNSS stations that broadcast corrections for ionospheric and tropospheric errors to adjacent GNSS receivers.
  • Additional sensors and systems may help overcome the limitations of the standalone systems noted above and achieve reliable navigation in all environments and conditions.
  • Perception sensors can provide rich information when paired with maps, including high definition (HD) maps.
  • Some of the most common sensors are radars, optical cameras, and light detection and ranging (lidar), but infrared (IR) cameras, ultrasonic detectors, and others may also be employed.
  • A perception sensor commonly found in cars today is radar. Key benefits of radar include robustness to adverse weather conditions, insensitivity to lighting variations, and long, accurate range measurements. Radars may also be packaged behind optically opaque vehicle paneling, thereby offering industrial designers a degree of flexibility that is not possible with other perception sensors. Although automotive imaging radars have a lower resolution than lidar, recent advances are narrowing the gap.
  • The current generation of state-of-the-art automotive imaging radars provides high-rate information on multiple dynamic targets in an extremely cluttered scene in a 4D domain consisting of range, Doppler, azimuth, and elevation measurements.
  • Radar still presents a challenge for HD map matching localization techniques because of the sparseness of the data and a lower angular resolution than lidar.
  • The integration of a multi-radar configuration effectively expands the radar field of view, providing wider, even up to 360-degree, horizontal coverage while maintaining a high scan rate. The increased coverage and more numerous detections can help the HD map matching process achieve a better result.
  • Such a system is shown to be effective in enabling an accurate and reliable navigation system for both vehicle and robotic platforms in GNSS degraded or denied environments.
  • A multi-radar configuration can be an effective tool for imaging a scene, particularly if recent sensors with higher angular and range resolutions are used.
  • Another important perception modality is vision, which mainly consists of an array of cameras or other suitable optical sensors and image processing algorithms, characterized by operating substantially within the visible wavelength spectrum. It enables the system to perceive and interpret visual data, facilitating complex tasks such as object recognition, terrain analysis, and even decision-making based on visual inputs.
  • The integration of vision systems, particularly the use of cameras, in navigation systems marks a significant technological evolution in the realm of autonomous and assisted navigation.
  • Imaging radars stand out for their robust performance in adverse weather conditions. Unlike cameras, they are not hindered by fog, rain, or snow, making them reliable in a wide range of environmental settings. Their ability to detect objects at long ranges is another significant advantage, particularly beneficial in early warning systems and long-distance navigation tasks. Furthermore, the penetrative capability of radar waves allows them to detect objects that are not visually apparent, such as obstacles hidden by foliage or thin walls.
  • Imaging radars do have drawbacks. They generally offer lower spatial resolution compared to cameras, making it challenging to identify small or detailed features. On the other hand, cameras provide high-resolution imagery that is more intuitive to interpret, making them ideal for applications requiring detailed visual information.
  • Because radars provide reliable long-range detection and perform well in adverse weather conditions, while cameras offer high-resolution imagery for detailed environmental analysis, systems can leverage the strengths of both: radars can be used for initial detection and rough estimation of an object's location and movement, while cameras can provide detailed visual information for closer inspection and identification.
  • This complementary use allows for more robust and versatile navigation and sensing solutions, applicable in a variety of fields including autonomous vehicles, aerial surveillance, and maritime navigation.
  • HD maps, including different formats like occupancy grid maps and point clouds, are used as one of the main sources to enable the solution from different perception sensors such as lidar, camera, or radar.
  • The navigation system can use 2D/3D perception-based maps generated through crowdsourcing techniques for mapped areas from data collected over time using an integrated system supplied with perception sensors. This map can then be used in subsequent runs as a global reference map for localization purposes.
  • Current navigation systems often face challenges in scenarios where GNSS signals are weak or obstructed, in complex urban environments with numerous dynamic obstacles, and in conditions requiring high-level decision-making based on limited data. These challenges underscore the need for more integrated, intelligent, and versatile navigation systems.
  • Measurements from perception sensors may be used with information from motion sensors to help build a map online during a given navigation session to aid positioning, as described in the following materials.
  • This disclosure includes a method for providing an integrated navigation solution in real-time for a device within a moving platform. The method may involve obtaining motion sensor data from a sensor assembly of the device and obtaining perception sensor data from at least one perception sensor for the platform. An integrated navigation solution for the platform may be generated based at least in part on the obtained motion sensor data.
  • An online map for an area encompassing the platform in a first instance of time may be built using perception sensor data based at least in part on the integrated navigation solution during the first instance of time.
  • The integrated navigation solution may then be revised in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique.
  • The nonlinear state estimation technique may use a prediction phase in which a system model propagates predictions about a state of the platform and an update phase in which at least one measurement model relating measurements to the state is used to update the state of the platform, wherein the nonlinear state estimation technique comprises using a nonlinear measurement model for perception sensor data, such that integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly-coupled.
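  • As a non-limiting illustration of this prediction/update structure, the following sketch assumes a planar state [x, y, heading], a particle filter as the nonlinear state estimation technique, and hypothetical helper names (e.g., range_likelihood); it is not the specific implementation of this disclosure:

```python
import numpy as np

def predict(particles, odom_speed, gyro_yaw_rate, dt, rng):
    """Prediction phase: propagate each particle with motion sensor data plus noise."""
    noisy_speed = odom_speed + rng.normal(0.0, 0.1, len(particles))
    noisy_rate = gyro_yaw_rate + rng.normal(0.0, 0.01, len(particles))
    particles[:, 2] += noisy_rate * dt
    particles[:, 0] += noisy_speed * dt * np.cos(particles[:, 2])
    particles[:, 1] += noisy_speed * dt * np.sin(particles[:, 2])
    return particles

def update(particles, weights, perception_ranges, range_likelihood):
    """Update phase: weight particles with a nonlinear measurement model that
    relates raw perception measurements (e.g., radar ranges) directly to the
    state, i.e., a tightly-coupled integration."""
    for i, p in enumerate(particles):
        weights[i] *= range_likelihood(p, perception_ranges)
    weights += 1e-300            # avoid exact zeros
    weights /= np.sum(weights)   # normalize
    return weights
```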
  • Embodiments described herein may be discussed in the general context of processor-executable instructions residing on some form of non-transitory processor-readable medium, such as program modules, executed by one or more computers or other devices.
  • Program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • The functionality of the program modules may be combined or distributed as desired in various embodiments.
  • A single block may be described as performing a function or functions; however, in actual practice, the function or functions performed by that block may be performed in a single component or across multiple components, and/or may be performed using hardware, using software, or using a combination of hardware and software.
  • The non-transitory processor-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, other known storage media, and the like.
  • The techniques additionally, or alternatively, may be realized at least in part by a processor-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer or other processor.
  • A carrier wave may be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
  • A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • A processor may also be implemented as a combination of computing devices, e.g., a combination of an MPU and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with an MPU core, or any other such configuration.
  • Perception sensor data can be directly integrated with motion sensor data when generating a navigation solution for a moving platform.
  • Suitable illustrations of these techniques may be found in commonly-owned patents, U.S. Patent No. 11,422,253, which involves the use of radar measurements, and U.S. Patent No. 11,875,519, which involves the use of optical samples, both of which are incorporated by reference in their entirety.
  • These and other techniques that employ data from perception sensors typically rely on the availability of pre-built maps for the area encompassing the platform. As will be appreciated, such pre-built map information may not exist for certain areas and, even when available, obtaining the pre-built map information requires time and communication bandwidth.
  • The techniques of this disclosure are directed to building an online map during a current run of a navigation session using at least one type of perception sensor and then utilizing the online map to directly integrate perception sensor data of either the same or a different type with motion sensor data to generate an integrated navigation solution.
  • In some embodiments, the online map is built with one type of perception sensor data and the same type of perception sensor data is directly integrated with the motion sensor data with the online map.
  • In other embodiments, the online map is built with one type of perception sensor data and another type of perception sensor data is directly integrated with the motion sensor data with the online map.
  • The platform may be a wheel-based vehicle or other similar vessel intended for use on land, but may also be marine or airborne. As such, the platform may also be referred to as the vehicle. However, the platform may also be a pedestrian or a user undergoing on-foot motion.
  • Motion sensor data includes information from accelerometers, gyroscopes, or an IMU.
  • Inertial sensors are self-contained sensors that use gyroscopes to measure the rate of rotation/angle, and accelerometers to measure the specific force (from which acceleration is obtained). Inertial sensor data may be used in an INS, which is a non-reference based relative positioning system.
  • The INS readings can subsequently be integrated over time and used to determine the current position, velocity and orientation angles of the platform.
  • Measurements are integrated once for gyroscopes to yield orientation angles and twice for accelerometers to yield the position of the platform incorporating the orientation angles.
  • Thus, the measurements of the gyroscopes effectively undergo a triple integration operation during the process of yielding position.
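  • The following is a simplified, planar sketch of these integrations (one integration of gyroscope rates to an angle, two integrations of acceleration to position); the function name and the 2D simplification are illustrative assumptions rather than the full mechanization used in practice:

```python
import numpy as np

def mechanize_2d(gyro_z, accel_fwd, dt, heading0=0.0):
    """gyro_z: yaw rates [rad/s]; accel_fwd: forward specific force [m/s^2]."""
    heading, vel = heading0, 0.0
    pos = np.zeros(2)
    for w, a in zip(gyro_z, accel_fwd):
        heading += w * dt   # first integration: orientation angle from gyroscope
        vel += a * dt       # first integration: velocity from accelerometer
        # second integration: position, incorporating the orientation angle
        pos += vel * dt * np.array([np.cos(heading), np.sin(heading)])
    return heading, vel, pos
```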
  • The device is contained within the platform (which, as noted, may be a vehicle or vessel of any type and, in some applications, may be a person) and may have one or more sources of navigational or positional information.
  • The device may be strapped or tethered in a fixed orientation with respect to the platform.
  • The device is “strapped,” “strapped down,” or “tethered” to the platform when it is physically connected to the platform in a fixed manner that does not change with time during navigation. In the case of strapped devices, the relative position and orientation between the device and platform does not change with time during navigation.
  • Any varying misalignment between the frame of the device and the frame of the platform may be determined using the techniques of this disclosure and correspondingly compensated for.
  • The device may be “non-strapped” in two scenarios: where the mobility of the device within the platform is “unconstrained”, or where the mobility of the device within the platform is “constrained.”
  • One example of “unconstrained” mobility may be a person moving on foot and having a portable device such as a smartphone in their hand for texting or viewing purposes (the hand may also move), at their ear, in hand and dangling/swinging, in a belt clip, in a pocket, among others, where such use cases can change with time and even each use case can have a changing orientation with respect to the user.
  • Another example of “unconstrained” mobility of the device within the platform is a person in a vessel or vehicle, where the person has a portable device such as a smartphone in their hand for texting or viewing purposes (the hand may also move), at their ear, in a belt clip, in a pocket, among others, where such use cases can change with time and even each use case can have a changing orientation with respect to the user.
  • An example of “constrained” mobility may be when the user enters a vehicle and puts the portable device (such as a smartphone) in a rotation-capable holder or cradle. In this example, the user may rotate the holder or cradle at any time during navigation and thus may change the orientation of the device with respect to the platform or vehicle.
  • The mobility of the device may be constrained or unconstrained within the platform, and the device may be moved or tilted to any orientation within the platform; the techniques of this disclosure may still be applied under all of these conditions.
  • Some embodiments described below include a portable, hand-held device that can be moved in space by a user and its motion, location and/or orientation in space therefore sensed.
  • The techniques of this disclosure can work with any type of portable device as desired, including a smartphone or the other exemplary devices noted below. It will be appreciated that such devices are often carried or associated with a user and thus may benefit from providing navigation solutions using a variety of inputs.
  • The techniques of this disclosure may also be applied to other types of devices that are not handheld, including devices integrated with autonomous or piloted vehicles, whether land-based, aerial, or underwater, or equipment that may be used with such vehicles.
  • The platform may be a drone, also known as an unmanned aerial vehicle (UAV).
  • Device 100 may be implemented as a device or apparatus, such as a strapped, non-strapped, tethered, or non-tethered device as described above; when non-strapped, the mobility of the device may be constrained or unconstrained within the platform and the device may be moved or tilted to any orientation within the platform.
  • Device 100 includes a processor 102, which may be one or more microprocessors, central processing units (CPUs), or other processors to run software programs, which may be stored in memory 104, associated with the functions of device 100. Multiple layers of software can be provided in memory 104, which may be any combination of computer readable medium such as electronic memory or other storage medium such as hard disk, optical disk, etc., for use with the processor 102.
  • An operating system layer can be provided for device 100 to control and manage system resources in real time, enable functions of application software and other layers, and interface application programs with other software and functions of device 100.
  • Different software application programs such as menu navigation software, games, camera function control, navigation software, communications software, such as telephony or wireless local area network (WLAN) software, or any of a wide variety of other software and functional interfaces can be provided.
  • Multiple different applications can be provided on a single device 100, and in some of those embodiments, multiple applications can run simultaneously.
  • Device 100 includes at least one sensor assembly 106 for providing motion sensor data representing motion of device 100 in space, including inertial sensors such as an accelerometer and a gyroscope; other motion sensors, including a magnetometer, a pressure sensor or others, may be used in addition.
  • Motion sensors represent a self-contained source of navigational information.
  • Sensor assembly 106 measures one or more axes of rotation and/or one or more axes of acceleration of the device.
  • Sensor assembly 106 may include inertial rotational motion sensors or inertial linear motion sensors.
  • The rotational motion sensors may be gyroscopes to measure angular velocity along one or more orthogonal axes and the linear motion sensors may be accelerometers to measure linear acceleration along one or more orthogonal axes.
  • Three gyroscopes and three accelerometers may be employed, such that a sensor fusion operation performed by processor 102, or other processing resources of device 100, combines data from sensor assembly 106 to provide a six-axis determination of motion or six degrees of freedom (6DOF).
  • Sensor assembly 106 may include a magnetometer measuring along three orthogonal axes whose output data may be fused with the gyroscope and accelerometer inertial sensor data to provide a nine-axis determination of motion.
  • Sensor assembly 106 may also include a pressure sensor to provide an altitude determination that may be fused with the other sensor data to provide a ten-axis determination of motion.
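  • As a hedged illustration of this kind of sensor fusion, the sketch below blends gyroscope propagation with accelerometer-derived tilt in a basic complementary filter; the blending gain, axis conventions and function names are assumptions, not the specific fusion employed by sensor assembly 106:

```python
import math

def fuse_pitch_roll(pitch, roll, gyro, accel, dt, alpha=0.98):
    """gyro = (gx, gy, gz) in rad/s, accel = (ax, ay, az) in m/s^2 (one common convention)."""
    # Propagate with gyroscope rates (accurate short-term, drifts over time).
    pitch_g = pitch + gyro[1] * dt
    roll_g = roll + gyro[0] * dt
    # Absolute tilt from the sensed gravity direction (noisy, but drift-free).
    ax, ay, az = accel
    pitch_a = math.atan2(-ax, math.hypot(ay, az))
    roll_a = math.atan2(ay, az)
    # Blend the two sources.
    return (alpha * pitch_g + (1 - alpha) * pitch_a,
            alpha * roll_g + (1 - alpha) * roll_a)
```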
  • Sensor assembly 106 may be implemented using Micro Electro Mechanical System (MEMS) technology, allowing integration into a single small package.
  • Device 100 also implements at least one perception sensor providing perception sensor data 112, including one or more of an optical camera, a thermal camera, an infra-red imaging sensor, a light detection and ranging (LiDAR or lidar) system, a radar system, an ultrasonic sensor or other suitable sensor that records images or samples to help classify objects detected in the surrounding environment.
  • Device 100 obtains perception sensor data 112 from any perception sensor such as those indicated above, which may be integrated with device 100, may be associated or connected with device 100, may be part of the platform or may be implemented in any other desired manner.
  • Device 100 may also employ external sensor 108.
  • Here, “external” means a sensor that is not integrated with sensor assembly 106 and may be remote or local to device 100.
  • Sensor assembly 106 and/or external sensor 108 may be configured to measure one or more other aspects about the environment surrounding device 100. This is optional and not required in all embodiments.
  • A pressure sensor and/or a magnetometer may be used to refine motion determinations.
  • Processor 102, memory 104, sensor assembly 106, and other components of device 100 may be coupled through bus 110, which may be any suitable bus or interface, such as a peripheral component interconnect express (PCIe) bus, a universal serial bus (USB), a universal asynchronous receiver/transmitter (UART) serial bus, a suitable advanced microcontroller bus architecture (AMBA) interface, an Inter-Integrated Circuit (I2C) bus, a serial digital input output (SDIO) bus, a serial peripheral interface (SPI) or other equivalent.
  • A navigation solution based on the motion sensor data and absolute navigational information may be output by integration module 114.
  • A navigation solution comprises at least position and may also include attitude (or orientation) and/or velocity. Determining the navigation solution may involve sensor fusion or similar operations performed by the processor 102, which may be using the memory 104, or any combination of other processing resources.
  • Device 100 also has a source of absolute navigational information 116, such as a Global Navigation Satellite System (GNSS) receiver, including without limitation the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), Galileo and/or Beidou, as well as WiFi™ positioning, cellular tower positioning, Bluetooth™ positioning beacons or other similar methods when deriving a navigation solution.
  • Integration module 114 may also be configured to use information from a wireless communication protocol to provide a navigation solution determination using signal trilateration.
  • Any suitable protocol including cellular-based and wireless local area network (WLAN) technologies such as Universal Terrestrial Radio Access (UTRA), Code Division Multiple Access (CDMA) networks, Global System for Mobile Communications (GSM), the Institute of Electrical and Electronics Engineers (IEEE) 802.16 (WiMAX), Long Term Evolution (LTE), IEEE 802.11 (WiFi™) and others may be employed.
  • The source of absolute navigational information represents a “reference-based” system that depends upon external sources of information, as opposed to self-contained navigational information that is provided by self-contained and/or “non-reference based” systems within a device/platform, such as sensor assembly 106 as noted above.
  • Device 100 may include communications module 118 for any suitable purpose, including for transmitting map information derived as the platform traverses an area.
  • Communications module 118 may employ a Wireless Local Area Network (WLAN) conforming to Institute of Electrical and Electronics Engineers (IEEE) 802.11 protocols, featuring multiple transmit and receive chains to provide increased bandwidth and achieve greater throughput.
  • For example, WiGig™ (IEEE 802.11ad) includes the capability for devices to communicate in the 60 GHz frequency band over four 2.16 GHz-wide channels, delivering data rates of up to 7 Gbps.
  • Communications may also involve the use of multiple channels operating in other frequency bands, such as the 5 GHz band, or other systems including cellular-based and WLAN technologies such as Universal Terrestrial Radio Access (UTRA), Code Division Multiple Access (CDMA) networks, Global System for Mobile Communications (GSM), IEEE 802.16 (WiMAX), Long Term Evolution (LTE), other transmission control protocol/internet protocol (TCP/IP) packet-based communications, or the like.
  • Multiple communication systems may be employed to leverage different capabilities.
  • Communications involving higher bandwidths may be associated with greater power consumption, such that other channels may utilize a lower power communication protocol such as BLUETOOTH®, ZigBee®, ANT or the like.
  • A wired connection may also be employed.
  • Communication may be direct or indirect, such as through one or multiple interconnected networks.
  • Networks may employ client/server, peer-to-peer, or hybrid architectures.
  • Computing systems can be connected together by wired or wireless systems, by local networks or widely distributed networks.
  • Networks are often coupled to the Internet, which provides an infrastructure for widely distributed computing and encompasses many different networks, though any network infrastructure can be used for exemplary communications made incident to the techniques as described in various embodiments.
  • One or more motion algorithm layers may provide motion algorithms for lower-level processing of raw sensor data provided from internal or external sensors.
  • A sensor device driver layer may provide a software interface to the hardware sensors of device 100.
  • Embodiments of this disclosure may feature any desired division of processing between processor 102 and other processing resources, as appropriate for the applications and/or hardware being employed.
  • Aspects implemented in software may include, but are not limited to, application software, firmware, resident software, microcode, etc., and may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system, such as processor 102, a dedicated processor or any other processing resources of device 100.
  • Features of a different device architecture are depicted in FIG. 2 with high-level schematic blocks in the context of device 200.
  • Device 200 includes a host processor 202 and memory 204 similar to the above embodiment.
  • Device 200 includes at least one sensor assembly for providing motion sensor data, as shown here in the form of integrated motion processing unit (MPU®) 206 or any other sensor processing unit (SPU), featuring sensor processor 208, memory 210 and internal sensor 212.
  • Memory 210 may store algorithms, routines or other instructions for processing data output by internal sensor 212 and/or other sensors as described below using logic or controllers of sensor processor 208, as well as storing raw data and/or motion data output by internal sensor 212 or other sensors.
  • Memory 210 may also be used for any of the functions associated with memory 204.
  • Internal sensor 212 may be one or more sensors for measuring motion of device 200 in space as described above, including inertial sensors such as an accelerometer and a gyroscope; other motion sensors, including a magnetometer, a pressure sensor or others, may be used in addition.
  • Exemplary details regarding suitable configurations of host processor 202 and MPU 206 may be found in commonly owned U.S. Patent Nos. 8,250,921, issued August 28, 2012, and 8,952,832, issued February 10, 2015, which are hereby incorporated by reference in their entirety.
  • Suitable implementations for MPU 206 in device 200 are available from InvenSense, Inc. of San Jose, Calif.
  • Another sensor assembly in the form of external sensor 214 may represent sensors, such as inertial motion sensors (i.e., an accelerometer and/or a gyroscope), other motion sensors or other types of sensors as described above.
  • As above, “external” means a sensor that is not integrated with MPU 206 and may be remote or local to device 200.
  • MPU 206 may receive data from an auxiliary sensor 216 configured to measure one or more aspects about the environment surrounding device 200. This is optional and not required in all embodiments.
  • A pressure sensor and/or a magnetometer may be used to refine motion determinations made using internal sensor 212.
  • Device 200 also implements at least one perception sensor, which may include one or more of the sensors discussed above, that provides perception sensor data 222. As discussed above, the techniques of this disclosure involve integrating perception sensor data 222 with the motion sensor data provided by internal sensor 212 (or other sensors) using an online map that is built during the run to provide the navigation solution for device 200.
  • Device 200 may have a source of absolute navigational information 226 and may include communications module 228 for any suitable purpose.
  • Source of absolute navigational information 226 and/or communications module 228 may have any of the characteristics discussed above with regard to source of absolute navigational information 116 and communications module 118.
  • Host processor 202 and/or sensor processor 208 may be one or more microprocessors, central processing units (CPUs), or other processors which run software programs for device 200 or for other applications related to the functionality of device 200.
  • Embodiments of this disclosure may feature any desired division of processing between host processor 202, MPU 206 and other processing resources, as appropriate for the applications and/or hardware being employed.
  • The state estimation or filtering technique estimates the errors in the navigation states obtained by the mechanization, so the state vector estimated by this state estimation or filtering technique comprises the error states, and the system model is an error-state system model which transitions the previous error-state to the current error-state.
  • The mechanization output is corrected for these estimated errors to provide the corrected navigation states, such as corrected position, velocity and attitude.
  • Because the estimated error-state is about a nominal value, which is the mechanization output, the mechanization can operate either unaided in an open-loop mode, or it can receive feedback from the corrected states, which is called closed-loop mode.
  • Conventional state estimation techniques such as the Kalman filter (KF) or the extended Kalman filter (EKF) rely on linear models or linearized approximations.
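  • To make the error-state, closed-loop arrangement described above concrete, the following toy one-dimensional sketch uses a scalar Kalman-style update of a velocity error and feeds the correction back to the mechanized state; all names, gains and noise values are illustrative assumptions:

```python
def error_state_step(vel_nominal, accel, speed_meas, dt, P, q=0.01, r=0.04):
    """One step of a toy 1D error-state loop.
    vel_nominal: mechanized velocity; accel: biased accelerometer reading;
    speed_meas: aiding speed measurement; P: error-state variance."""
    vel_nominal += accel * dt                 # mechanization (nominal state propagation)
    P += q                                    # error-state covariance prediction
    innovation = vel_nominal - speed_meas     # observed error of the nominal state
    K = P / (P + r)                           # Kalman gain
    vel_error = K * innovation                # estimated error state
    P *= (1.0 - K)                            # covariance update
    vel_corrected = vel_nominal - vel_error   # correct the mechanization output
    return vel_corrected, P                   # closed loop: continue from corrected state
```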
  • By combining the motion sensor data with perception components such as imaging radar and vision, the system can achieve a multi-dimensional view of its surroundings, crucial for accurate positioning and navigation in complex urban environments; other types of perception sensors can be employed alternatively or in addition.
  • The techniques of this disclosure represent a notable advancement in the field of navigation technology, addressing many of the limitations of existing systems while opening new possibilities for autonomous navigation.
  • Map information, including high definition (HD) maps, is integral to the functionality of modern navigation systems, particularly in the realm of autonomous vehicles and advanced driver-assistance systems (ADAS).
  • The navigation system conventionally requires two types of maps to run and provide a solution: a local map and a reference map, such as a global map or a semi-global map.
  • The local map is the map created from the readings of the perception sensor and the navigation system during the real-time session by the navigation filter.
  • Examples of real-time navigation filters include the Kalman filter (KF) and the particle filter (PF).
  • A local map can be created for the overall navigation solution from the PF or for various particles in the PF.
  • The local map could be generated from data collected over a short period of time, from one epoch (one instant) up to a few seconds.
  • For the global map, data is collected from different routes to cover the traversed area at different times.
  • The collected data is accumulated and filtered to build the global map.
  • The global map tiles can be stored on a local device or in the cloud.
  • The area map can be retrieved during the navigation session to serve as a reference that aids the system and against which the local map is compared.
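  • A minimal sketch of how a local/online map might be accumulated from perception detections and the navigation solution's pose is shown below, assuming planar (range, bearing) detections and a simple hit-count occupancy grid; the grid parameters and names are illustrative assumptions, not the map format of this disclosure:

```python
import numpy as np

def add_scan_to_grid(grid, origin, resolution, pose, detections):
    """grid: 2D int array of hit counts; origin: world coordinates of grid[0, 0];
    resolution: cell size [m]; pose: (x, y, heading) from the navigation solution;
    detections: iterable of (range, bearing) pairs from a perception sensor."""
    x, y, heading = pose
    for rng, bearing in detections:
        # Transform the detection from the platform frame to the world frame.
        wx = x + rng * np.cos(heading + bearing)
        wy = y + rng * np.sin(heading + bearing)
        col = int((wx - origin[0]) / resolution)
        row = int((wy - origin[1]) / resolution)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] += 1   # accumulate evidence of an occupied cell
    return grid
```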
  • The conventional requirement of a pre-existing map (also called a global map herein) for the traversed area in order to provide an absolute navigation solution may be a drawback, as the pre-built map must be available to run the navigation system.
  • Examples of perception odometry include visual odometry, radar odometry, and lidar odometry; examples of perception-inertial odometry include visual-inertial odometry, radar-inertial odometry, and lidar-inertial odometry.
  • Absolute positioning methods are more accurate and more robust as compared to relative navigation methods (using relative navigation information aids), as the latter can accumulate errors over time and lack an absolute sense. The only benefit of relative navigation methods is that they do not require any pre-built map of the environment.
  • Suitable measurement models include a range-based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the map information, a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the perception sensor data, an estimated state of the platform and a nearest object identification from the map information, a map matching model based at least in part on a probability distribution derived by correlating a reference map, such as the online map that is built during the concurrent navigation session, to a local map generated using the perception sensor data and an estimated state of the platform, and a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the map information derived from the perception sensor data.
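  • As one hedged example, a nearest-object-likelihood style of model of the kind listed above could weight an estimated platform state by how close transformed detections fall to mapped objects; the Gaussian form, sigma value and names below are assumptions rather than the specific model of this disclosure:

```python
import numpy as np

def nearest_object_likelihood(state, detections, map_points, sigma=0.5):
    """state: (x, y, heading); detections: (range, bearing) pairs;
    map_points: Nx2 array of object positions from the (online) map."""
    x, y, heading = state
    likelihood = 1.0
    for rng, bearing in detections:
        # Project the detection into the map frame using the estimated state.
        px = x + rng * np.cos(heading + bearing)
        py = y + rng * np.sin(heading + bearing)
        # Squared distance to the nearest mapped object.
        d2 = np.min((map_points[:, 0] - px) ** 2 + (map_points[:, 1] - py) ** 2)
        likelihood *= np.exp(-d2 / (2.0 * sigma ** 2))
    return likelihood
```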
  • The techniques of this disclosure include a method for providing an integrated navigation solution in real-time for a device within a moving platform.
  • The method may involve obtaining motion sensor data from a sensor assembly of the device and obtaining perception sensor data from at least one perception sensor for the platform.
  • An integrated navigation solution for the platform may be generated based at least in part on the obtained motion sensor data so that an online map for an area encompassing the platform may be built in a first instance of time using perception sensor data, based at least in part on the integrated navigation solution during the first instance of time.
  • The at least one perception sensor is at least one of radar, an optical camera, lidar, a thermal camera, an IR camera or an ultrasonic sensor.
  • The at least one perception sensor may be at least one radar that outputs radar measurements for the platform.
  • The at least one perception sensor may be at least one optical sensor that outputs optical samples for the platform.
  • The at least one perception sensor may be at least one radar that outputs radar measurements for the platform and at least one optical sensor that outputs optical samples for the platform.
  • Depth information for objects detected within the optical samples may be determined using at least one of: i) estimating depth for an object and deriving range, bearing and elevation; and ii) obtaining depth readings for an object from the at least one optical sensor and deriving range, bearing and elevation.
  • A scene reconstruction operation may be performed for a local area surrounding the platform based at least in part on the determined depth information for objects detected within the optical samples.
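  • A simple sketch of deriving range, bearing and elevation from an estimated (or measured) depth and a pixel location is given below, assuming a pinhole camera model; the intrinsics fx, fy, cx, cy and the function name are assumed parameter names, not values specified by this disclosure:

```python
import math

def pixel_depth_to_rbe(u, v, depth, fx, fy, cx, cy):
    """u, v: pixel coordinates; depth: distance along the optical axis [m];
    fx, fy, cx, cy: camera intrinsics. Returns (range, bearing, elevation)."""
    x = (u - cx) / fx * depth   # lateral offset in the camera frame
    y = (v - cy) / fy * depth   # vertical offset in the camera frame (image y points down)
    rng = math.sqrt(x * x + y * y + depth * depth)
    bearing = math.atan2(x, depth)                       # angle in the horizontal plane
    elevation = math.atan2(-y, math.hypot(x, depth))     # positive up
    return rng, bearing, elevation
```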
  • The at least one perception sensor may include at least two types of perception sensors, wherein building the online map comprises using one type of perception sensor and integrating the motion sensor data comprises integrating data from another type of perception sensor directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the data from the other type of perception sensor in the nonlinear state estimation technique.
  • Building an online map for an area encompassing the platform may be based at least in part on satisfaction of a favorable condition.
  • The measurement model comprises at least one of: i) a range-based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the online map; ii) a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the optical samples, an estimated state of the platform and a nearest object identification from the online map; iii) a map matching model based at least in part on a probability distribution derived by correlating the online map to a local map generated using the optical samples and an estimated state of the platform; and iv) a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the online map.
  • The nonlinear measurement model further comprises models for perception sensor errors comprising any one or any combination of environmental errors, sensor-based errors and dynamic errors.
  • The nonlinear state estimation technique comprises at least one of: i) an error-state system model; ii) a total-state system model, wherein the integrated navigation solution is output directly by the total-state model; and iii) a system model receiving input from an additional state estimation technique that integrates the motion sensor data.
  • The nonlinear state estimation technique may be an error-state system model such that providing the integrated navigation solution may involve correcting an inertial mechanization output with the updated nonlinear state estimation technique.
  • The system model may be a system model receiving input from an additional state estimation technique, which integrates any one or any combination of: i) inertial sensor data; ii) odometer or means for obtaining platform speed data; iii) pressure sensor data; iv) magnetometer data; and v) absolute navigational information.
  • The system model of the nonlinear state estimation technique may further include a motion sensor error model.
  • The nonlinear state estimation technique may be at least one of: i) a Particle Filter (PF); ii) a PF, wherein the PF comprises a Sampling/Importance Resampling (SIR) PF; and iii) a PF, wherein the PF comprises a Mixture PF.
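  • For illustration only, the resampling step that characterizes an SIR particle filter could use systematic resampling as sketched below; this is a generic textbook formulation, not the specific filter of this disclosure:

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    """Draw a new, equally weighted particle set in proportion to the weights."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one uniform draw, evenly spread
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[indices].copy(), np.full(n, 1.0 / n)
```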
  • A source of absolute navigational information may be used when generating the integrated navigation solution. Building the online map may be performed based at least in part on quality of the absolute navigational information.
  • The method may also involve storing and retrieving the online map based on a current position of the platform.
  • A misalignment between a frame of the sensor assembly and a frame of the platform may be determined, wherein the misalignment is at least one of: i) a mounting misalignment; and ii) a varying misalignment.
  • A misalignment between a frame of the sensor assembly and a frame of the platform may be determined, wherein the misalignment is determined using any one or any combination of: i) a source of absolute velocity; ii) a radius of rotation calculated from the motion sensor data; and iii) leveled horizontal components of acceleration readings along forward and lateral axes from the motion sensor data.
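  • As a hedged sketch of option iii) above, a heading misalignment between the device frame and the platform frame could be approximated from leveled horizontal accelerations gathered while the platform accelerates or decelerates in a straight line; the averaging and sign convention here are assumptions, not the specific determination of this disclosure:

```python
import math

def heading_misalignment(leveled_ax, leveled_ay):
    """leveled_ax, leveled_ay: lists of leveled device-frame horizontal
    accelerations collected during a straight-line acceleration or braking."""
    mean_ax = sum(leveled_ax) / len(leveled_ax)
    mean_ay = sum(leveled_ay) / len(leveled_ay)
    # The direction of the sensed horizontal acceleration relative to the
    # device's forward axis approximates the device-to-platform heading misalignment.
    return math.atan2(mean_ay, mean_ax)
```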
  • This disclosure is also directed to a system for providing an integrated navigation solution in real-time for a device within a moving platform.
  • The system may include a device having a sensor assembly configured to output motion sensor data, at least one perception sensor providing perception sensor data and at least one processor coupled to receive the motion sensor data and the perception sensor data.
  • The at least one processor may be operative to generate an integrated navigation solution for the platform based at least in part on the motion sensor data, build an online map for an area encompassing the platform in a first instance of time using perception sensor data based at least in part on the integrated navigation solution during the first instance of time, and revise the integrated navigation solution in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique, wherein a prediction phase involving a system model is used to propagate predictions about a state of the platform and an update phase involving at least one measurement model relating measurements to the state is used to update the state of the platform, wherein the nonlinear state estimation technique comprises using a nonlinear measurement model for perception sensor data, wherein integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly-coupled, and wherein the revising comprises: i) using the received motion sensor data in the nonlinear state estimation technique; and ii) integrating the perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement model for the perception sensor data.
  • The at least one perception sensor is at least one of radar, an optical camera, lidar, a thermal camera, an IR camera or an ultrasonic sensor.
  • The at least one perception sensor may be at least one radar that outputs radar measurements for the platform.
  • The at least one perception sensor may be at least one optical sensor that outputs optical samples for the platform.
  • The at least one perception sensor may be at least one radar that outputs radar measurements for the platform and at least one optical sensor that outputs optical samples for the platform.
  • The at least one perception sensor may include at least two types of perception sensors, wherein building the online map comprises using one type of perception sensor and integrating the motion sensor data comprises integrating data from another type of perception sensor directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the data from the other type of perception sensor in the nonlinear state estimation technique.
  • The measurement model comprises at least one of: i) a range-based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the online map; ii) a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the optical samples, an estimated state of the platform and a nearest object identification from the online map; iii) a map matching model based at least in part on a probability distribution derived by correlating the online map to a local map generated using the optical samples and an estimated state of the platform; and iv) a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the online map.
  • The nonlinear state estimation technique comprises at least one of: i) an error-state system model; ii) a total-state system model, wherein the integrated navigation solution is output directly by the total-state model; and iii) a system model receiving input from an additional state estimation technique that integrates the motion sensor data.
  • The nonlinear state estimation technique may be an error-state system model such that providing the integrated navigation solution may involve correcting an inertial mechanization output with the updated nonlinear state estimation technique.
  • The sensor assembly includes an accelerometer and a gyroscope.
  • The sensor assembly may be implemented as a Micro Electro Mechanical System (MEMS).
  • The system includes a source of absolute navigational information.
  • The system includes any one or any combination of: A) an odometer or means for obtaining platform speed; B) a pressure sensor; C) a magnetometer.
  • EXAMPLES
  • It is contemplated that the present methods and systems may be used for any application involving integrating perception sensor data with motion sensor data to provide a navigation solution. Without any limitation to the foregoing, the present disclosure is further described by way of the following examples.
  • The embodiments of the navigation system and techniques of this disclosure comprise the integration of four key technologies: an Inertial Measurement Unit (IMU), Global Navigation Satellite System with Real-Time Kinematic (GNSS-RTK), an odometer, and perception sensors. Each component has been carefully selected and optimized to work in concert with the others, resulting in a navigation solution that is greater than the sum of its parts.
  • Imaging radar may play a pivotal role in the navigation techniques of this disclosure. In areas where GNSS signals are weak or obstructed, such as urban canyons or under dense foliage, radar data can supplement positioning information.
  • The radar helps maintain positional accuracy even when GNSS data is compromised.
  • The integration of radar allows for a more comprehensive understanding of the platform's spatial orientation and movement.
  • The system can identify fixed landmarks or features, assisting in triangulating position.
  • Vision sensors provide the system with visual data to perceive the scene components, facilitating tasks such as object recognition, terrain analysis, and even decision-making based on visual inputs. Data could come from a mono camera, stereo cameras, or a thermal camera, allowing the navigation system to interact with its environment in a way that mimics human vision. Mono cameras capture single images, providing visual data like human eyesight, which is particularly useful for tasks like lane detection, traffic sign recognition, and object classification.
  • Mono cameras are simpler, cost-effective, and require less computational power for data processing.
  • Stereo cameras, using two or more lenses, capture images from slightly different angles, allowing for depth perception through disparity maps, which is ideal for 3D mapping, obstacle detection in depth, and enhanced environmental understanding.
  • Stereo cameras provide more detailed spatial information, crucial for precise navigation and complex decision-making processes.
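  • The depth perception mentioned above follows the standard stereo relation depth = focal_length x baseline / disparity; the helper and values in the sketch below are illustrative only and not parameters of this disclosure:

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """disparity_px: horizontal pixel offset of the same point between the two
    images; returns depth in meters (None where the disparity is invalid)."""
    if disparity_px <= 0:
        return None   # point at infinity or failed match
    return focal_length_px * baseline_m / disparity_px

# Example: a 16-pixel disparity with a 700-pixel focal length and a 12 cm
# baseline corresponds to a depth of 700 * 0.12 / 16 = 5.25 m.
```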
  • The vision system interprets visual cues from the surroundings, essential for understanding the context of the environment. By combining visual data with radar, GNSS-RTK, IMU, and odometer inputs, the system can achieve a multi-dimensional view of its surroundings, crucial for accurate positioning and navigation in complex urban environments.
  • Precise Point Positioning (PPP) is another technique used in geodesy and navigation that allows GPS and GNSS receivers to determine their location on Earth with a high degree of accuracy, typically within a few centimeters to decimeters.
  • PPP can achieve its high accuracy without the need for a local reference station. This is accomplished by using precise satellite orbit and clock data. PPP relies on correction data that is either received in real-time via a communication link or applied post-mission in a process known as post-processing. These corrections account for various error sources affecting satellite navigation signals, including satellite clock and orbit errors, atmospheric delays (ionospheric and tropospheric), and other systemic biases.
  • the key advantages of PPP include its global availability, as it does not depend on the proximity to a base station, and its ability to provide high-accuracy positioning anywhere in the world.
  • the positioning of a moving platform is commonly achieved using known reference-based systems, such as GNSS.
  • the GNSS comprises a group of satellites that transmit encoded signals to receivers on the ground that, by means of trilateration techniques, can calculate their position using the travel time of the satellites’ signals and information about the satellites’ current location. Such positioning techniques are also commonly utilized to position the moving platform.
  • GNSS information may be augmented with additional positioning information obtained from complementary positioning systems.
  • Inertial motion sensors are “non-reference based” systems which provide measurements to a vehicle navigation system.
  • motion sensor data includes information from accelerometers, gyroscopes, or other implementations of an Inertial Measurement Unit (IMU).
  • Inertial sensors are self-contained sensors that use gyroscopes to measure the rate of rotation/angle, and accelerometers to measure the specific force (from which acceleration is obtained).
  • Inertial sensors data may be used in an inertial navigation system (INS), which is a non-reference based relative positioning system.
  • the primary challenge in IMU data processing is to accurately determine the position, velocity, and orientation of an object, often in 3D space, from the raw sensor outputs.
  • One such “non-reference based” or relative positioning system is the inertial navigation system (INS).
  • a mechanization algorithm is used to obtain the orientation angles from the 3D gyroscope and the linear acceleration, velocity, and position from the 3D accelerometer.
  • gyroscope measurements are integrated once to yield the orientation angles, and accelerometer measurements are integrated twice, incorporating the orientation angles, to yield the position of the device or platform.
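  • As a simple illustration of this integration chain, the following sketch (not the full mechanization of this disclosure) performs one planar dead-reckoning step in Python, assuming a single vertical-axis gyroscope and a gravity-compensated body-frame accelerometer; all names are illustrative.

```python
import math

def dead_reckon_step(state, gyro_z, accel_body, dt):
    """One planar dead-reckoning step.

    state: dict with keys x, y, heading (rad), vx, vy in a local level frame.
    gyro_z: angular rate about the vertical axis (rad/s).
    accel_body: (ax, ay) specific force in the body frame (m/s^2), assumed
                already gravity-compensated for this sketch.
    dt: time step (s).
    """
    # 1) Integrate the gyroscope once to update the orientation.
    heading = state["heading"] + gyro_z * dt

    # 2) Rotate the body-frame acceleration into the navigation frame.
    ax_n = accel_body[0] * math.cos(heading) - accel_body[1] * math.sin(heading)
    ay_n = accel_body[0] * math.sin(heading) + accel_body[1] * math.cos(heading)

    # 3) Integrate acceleration once to update velocity ...
    vx = state["vx"] + ax_n * dt
    vy = state["vy"] + ay_n * dt

    # 4) ... and integrate velocity once more to update position
    #    (a second integration of acceleration overall).
    x = state["x"] + vx * dt
    y = state["y"] + vy * dt

    return {"x": x, "y": y, "heading": heading, "vx": vx, "vy": vy}
```

  • Because every step integrates noisy measurements, small residual sensor biases accumulate into position drift, which is the unbounded error growth discussed in the following paragraphs.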
  • gyroscope measurements effectively undergo a triple integration in the process of yielding position, since the orientation obtained from one integration of the gyroscope feeds the double integration of acceleration.
  • Inertial sensors alone are unsuitable for accurate positioning because the required integration operations on the data result in positioning solutions that drift with time, thereby leading to an unbounded accumulation of errors.
  • another known complementary “non-reference based” system is a system for measuring speed/velocity information such as, for example, odometric information from an odometer within the platform. Odometric data can be extracted using sensors that measure the rotation of the wheel axes and/or steer axes of the platform (in the case of wheeled platforms).
  • Wheel rotation information can then be translated into linear displacement, thereby providing wheel and platform speeds, resulting in an inexpensive means of obtaining speed with relatively high sampling rates.
  • the odometric data are integrated thereto in the form of incremental motion information over time.
  • common practice involves integrating the information/data obtained from the GNSS with that of the complementary system(s). For instance, to achieve a better positioning solution, INS and GNSS data may be integrated because they have complementary characteristics. INS readings are accurate in the short-term, but their errors increase without bounds in the long-term due to inherent sensor errors.
  • GNSS readings are not as accurate as INS in the short-term, but GNSS accuracy does not decrease with time, thereby providing long-term accuracy. Also, GNSS may suffer from outages due to signal blockage, multipath effects, interference or jamming, while INS is immune to these effects.
  • Speed information from the odometric readings may be used to enhance the performance of the integrated INS/GNSS solution by providing velocity updates, however, current INS/GNSS/Odometry systems continue to be plagued with the unbounded growth of errors over time during GNSS outages.
  • Kalman Filter (KF) equations may be considered either as time update or “prediction” equations, which are used to project the current state and error covariance estimates forward in time to obtain an a priori estimate for the next step, or as measurement update or “correction” equations, which are used to incorporate a new measurement into the a priori estimate to obtain an improved a posteriori estimate.
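  • As a minimal illustration of these prediction and correction steps (a generic linear KF, not the specific filter of this disclosure), consider the following sketch, where the matrices F, H, Q, and R are assumed to be supplied by the application.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Time update: project the state and covariance forward (a priori estimate)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Measurement update: correct the a priori estimate with measurement z."""
    y = z - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x_pred + K @ y                   # a posteriori state
    P_post = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_post, P_post
```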
  • the INS/GNSS integration problem has nonlinear models.
  • the nonlinear INS/GNSS model has to be linearized around a nominal trajectory. This linearization means that the original (nonlinear) problem can be transformed into an approximated problem that may be solved optimally, rather than approximating the solution to the correct problem.
  • the accuracy of the resulting solution can thus be reduced due to the impact of neglected nonlinear and higher order terms. These neglected higher order terms are more influential and cause error growth in the positioning solution, in degraded and GNSS-denied environments, particularly when low-cost MEMS-based IMUs are used.
  • the traditional INS typically relies on a full inertial measurement unit (IMU) having three orthogonal accelerometers and three orthogonal gyroscopes. This full IMU setting has several sources of error, which can cause severe effects on the positioning performance.
  • IMU inertial measurement unit
  • the residual uncompensated sensor errors can cause position error composed of three additive quantities: (i) proportional to the cube of GNSS outage duration and the uncompensated horizontal gyroscope biases; (ii) proportional to the square of GNSS outage duration and the three accelerometers’ uncompensated biases, and (iii) proportional to the square of GNSS outage duration, the horizontal speed, and the vertical gyroscope uncompensated bias.
  • barometers play a significant role in height estimation and may be used with the previously mentioned navigation systems to enhance vertical accuracy, especially for off-road and autonomous vehicles.
  • the navigation system can estimate the vehicle elevation with a higher degree of accuracy. This is crucial in applications that need to maintain a specific altitude above ground level, or in off-road vehicles navigating through varied terrain, to provide precise height information.
  • the use of barometers for height estimation in vehicle navigation provides an additional layer of precision to navigational systems in environments where vertical positioning is as critical as horizontal.
  • Radars have numerous characteristics based on the signals used by the sensor, the covered area/volume by the radar, the accuracy and resolution of radar range/bearing, and the type of measurements logged by the sensor. Unlike traditional radars, imaging radars utilize sophisticated beamforming techniques to create high- resolution, two or three-dimensional images of the environment. The configuration often includes multiple transmitting and receiving elements, enabling the system to cover a wide field of view and capture detailed spatial information. The data processing and filtering stage in imaging radars involves the conversion of raw radar signals into meaningful spatial data. Techniques like doppler processing are used to determine the velocity of objects, while sophisticated data fusion algorithms can integrate radar data with information from other sensors for a more comprehensive environmental understanding.
  • imaging radars are pivotal. They provide critical data inputs for algorithms that estimate the vehicle's position, velocity, and trajectory.
  • the integration of imaging radar data into navigation systems requires the development of a model to use its data.
  • Pulse-based radars do not require complex computations like FMCW-based radars. Moreover, there is no Doppler-range coupling as in the case of some FMCW-based modulation schemes.
  • pulse-based radars leak power to adjacent bands, limiting the reuse of frequencies. This is a limitation of pulse-based radars because autonomous vehicles will be operating in close vicinity of one another, especially in urban areas.
  • A second aspect is selecting the radar operating band from the two most prominent operating bands: 24 GHz and 77 GHz. The choice of operating band affects the range accuracy and resolution of the radar, as well as the dimensions of the radar antenna.
  • A third aspect is the radar measurements provided by the radar signal processing.
  • the radar signal processing unit estimates the range and doppler from the received reflections including the azimuth/bearing angle and elevation angle. By grouping range, doppler and power measurements from adjacent cells, a software layer might be able to estimate the centroid of all targets.
  • the data processing and filtering stage in vision systems is complicated. Initially, raw visual data undergoes preprocessing to adjust for variations in lighting and contrast, and to filter out noise. Advanced computer vision algorithms then analyze these images, detecting and classifying objects. Feature extraction techniques are important at this stage, enabling the system to identify key elements in the visual data that are relevant for navigation. Then a feature matching technique is used to identify the similarity between the different scenes which represent the maps. Scene reconstruction is used to construct a dynamic 3D model of the vehicle's environment. This model is continuously updated as new visual data is captured, providing a real-time, detailed understanding of the surroundings.
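  • A minimal sketch of the feature extraction and matching stage is shown below, using OpenCV's ORB detector as one possible choice; the preprocessing step is reduced to histogram equalization, the file paths are placeholders, and the matched keypoint pairs would subsequently feed triangulation and scene reconstruction.

```python
import cv2

def match_features(image_path_a, image_path_b, max_matches=100):
    """Detect and match sparse features between two grayscale frames."""
    img_a = cv2.imread(image_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(image_path_b, cv2.IMREAD_GRAYSCALE)

    # Preprocessing: simple histogram equalization to reduce lighting variation.
    img_a = cv2.equalizeHist(img_a)
    img_b = cv2.equalizeHist(img_b)

    # Feature extraction: ORB keypoints and binary descriptors.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Feature matching: brute-force Hamming matcher with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    # The matched keypoint pairs feed the triangulation / reconstruction step.
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:max_matches]]
```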
  • depth may be estimated to provide depth data for objects detected within the optical samples.
  • range, bearing and elevation for the objects may be extracted and fed to the nonlinear state estimation technique.
  • the depth information can be estimated using neural network techniques.
  • depth readings may be directly available from the at least one optical sensor, such as when using stereo sensors or when a stream of samples is available.
  • range, bearing and elevation for objects within the samples may be extracted and fed to the nonlinear state estimation technique. Neural network techniques can also be applied.
  • a scene representing a local environment surrounding the platform may be reconstructed based on information from the at least one optical sensor, including the depth information discussed above. Then, the reconstructed local map can be compared to a reference map, such as the online map built according to the techniques of this disclosure.
  • a reference map such as the online map built according to the techniques of this disclosure.
  • neural network techniques can be employed during scene reconstruction. More explanation regarding this can be found later below in section 2.5.6.
  • vision systems provide essential information for determining the vehicle’s position relative to the road and other objects. Modeling for navigation involves integrating visual data with inputs from other sensors, like GNSS, IMUs, and odometry, to accurately estimate the vehicle's current state and predict future states.
  • Map building using vision samples may involve vision-based scene reconstruction, a sophisticated process in computer vision and robotics that is integral to understanding and interacting with the environment. This operation involves capturing visual data from the environment using cameras, which could be monocular, stereo, or a more complex array of cameras for a wider field of view and depth perception.
  • the core of scene reconstruction lies in converting these two-dimensional images into a three-dimensional model of the scene. In the reconstruction process, features from multiple images are extracted, matched, and triangulated to create a 3D representation.
  • Techniques such as Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM) may be employed for this reconstruction.
  • the system can create maps with or without some features such as traffic lights and bridges. Also, it can update the map to filter out unwanted objects such as parked cars.
  • 2.5 Tightly-coupled Perception Sensor Integration And Online Map [00136]
  • embodiments of this disclosure involve providing a navigation solution with nonlinear Perception/INS/GNSS integration. Perception-based matching to a model and tight integration with sensor-based navigation may be provided without the need of using a pre-existing map for the environment.
  • One exemplary illustration is schematically depicted in FIG. 6. As shown in the figure, the perception data is used to create a local map when initially generating a navigation solution for the platform as well as being used to build an online (semi-global) map.
  • This step requires careful handling to preserve important spatial relationships and to ensure scale consistency across the map.
  • One of the critical aspects of this process is dealing with distortions inherent in projecting a curved surface (like the Earth) onto a flat plane.
  • Different projection techniques prioritize preserving different properties, such as area, shape, distance, or direction. For example, some projections maintain area accuracy but distort shapes, particularly near the map edges.
  • the resulting 2D map provides a useful and practical way to visualize and interact with spatial data, making complex 3D information accessible and interpretable for various applications.
  • the 3D-to-2D projection technique enables the navigation system to avoid the time-consuming and costly process of working with 3D maps.
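  • One simple way to realize such a projection, sketched below under the assumption that the 3D map is a point cloud expressed in a local East/North/Up frame, is to drop the vertical component and rasterize the points onto a 2D occupancy grid; the resolution and extent values are illustrative.

```python
import numpy as np

def project_to_2d_grid(points_xyz, cell_size=0.5, extent=100.0):
    """Project 3D map points (N x 3, East/North/Up metres) onto a 2D occupancy
    grid by discarding the Up component and marking cells that contain points."""
    n_cells = int(2 * extent / cell_size)
    grid = np.zeros((n_cells, n_cells), dtype=np.uint8)

    # Keep only points inside the square extent centred on the origin.
    mask = (np.abs(points_xyz[:, 0]) < extent) & (np.abs(points_xyz[:, 1]) < extent)
    east, north = points_xyz[mask, 0], points_xyz[mask, 1]

    # Convert metric coordinates to grid indices (row = North, col = East).
    cols = ((east + extent) / cell_size).astype(int)
    rows = ((north + extent) / cell_size).astype(int)
    grid[rows, cols] = 1  # occupied
    return grid
```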
  • Frequently accessed sub-maps might be kept in a cache for quicker access, reducing the need for constant loading and unloading from the main storage.
  • predictive algorithms can be used to pre-load sub-maps that the navigation system is likely to need soon, based on its current direction and speed of movement.
  • the retrieval system should operate on a smart, demand-based loading mechanism, ensuring that only the necessary parts of the map are in memory at any time, thereby optimizing performance and memory usage. This system is decisive for handling large-scale map data efficiently.
  • An AI-driven map retrieval technique can significantly enhance the loading and unloading of sub-maps based on the user's current location and context.
  • Spatial indexing is one advanced concept that can improve the efficiency of the retrieval technique by managing and accessing the map parts or sub-maps.
  • the system may employ spatial indexing techniques like Quadtree or R-trees. These indexing methods allow the system to quickly locate and retrieve the relevant sub-maps based on the user's current position and view.
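  • The following is a compact quadtree sketch (one of the indexing options mentioned above) for locating which stored sub-map tiles intersect a query window; the tile identifiers and bounding-box representation are illustrative.

```python
class QuadTree:
    """Minimal quadtree over axis-aligned sub-map bounding boxes."""

    def __init__(self, bounds, capacity=4):
        self.bounds = bounds          # (xmin, ymin, xmax, ymax) of this node
        self.capacity = capacity      # max items before subdividing
        self.items = []               # list of (item_bounds, submap_id)
        self.children = None

    @staticmethod
    def _intersects(a, b):
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def insert(self, item_bounds, submap_id):
        if not self._intersects(self.bounds, item_bounds):
            return False
        if self.children is None:
            if len(self.items) < self.capacity:
                self.items.append((item_bounds, submap_id))
                return True
            self._subdivide()
        # A sub-map may overlap several children, so insert into all of them.
        inserted = [child.insert(item_bounds, submap_id) for child in self.children]
        return any(inserted)

    def _subdivide(self):
        xmin, ymin, xmax, ymax = self.bounds
        xm, ym = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
        self.children = [QuadTree(b, self.capacity) for b in (
            (xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
            (xmin, ym, xm, ymax), (xm, ym, xmax, ymax))]
        for item_bounds, submap_id in self.items:
            for child in self.children:
                child.insert(item_bounds, submap_id)
        self.items = []

    def query(self, window):
        """Return the ids of sub-maps whose bounds intersect the query window."""
        found = set()
        if not self._intersects(self.bounds, window):
            return found
        for item_bounds, submap_id in self.items:
            if self._intersects(item_bounds, window):
                found.add(submap_id)
        if self.children is not None:
            for child in self.children:
                found |= child.query(window)
        return found
```

  • For example, the sub-map tile bounds can be inserted once at start-up, and query() can then be called with a window around the current position estimate to decide which tiles to load.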
  • Level of Detail (LOD) management in which the system can store multiple versions of each sub-map at varying resolutions in the system using different zoom levels. As the user zooms in or out, the system loads the appropriate LOD, ensuring that the map remains clear and informative without overloading the memory with unnecessary detail.
  • predictive loading algorithms may be used to determine which group of sub-maps the user might need next, which can greatly enhance the responsiveness of the system. These algorithms can analyze the user's current direction, speed, and typical usage patterns to preload sub-maps just beyond the current view.
  • an intelligent cache management strategy may be used for reducing load times and bandwidth usage. Frequently used sub-maps can be stored in a local cache.
  • the system should also implement a cache eviction policy, determining which sub-maps to keep and which to discard based on usage patterns.
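  • A minimal sub-map cache with a least-recently-used eviction policy (one possible policy among others) might look as follows; load_submap_from_storage is a placeholder for the actual storage back end.

```python
from collections import OrderedDict

class SubMapCache:
    """Keeps up to max_items sub-maps in memory, evicting the least recently used."""

    def __init__(self, load_submap_from_storage, max_items=16):
        self._load = load_submap_from_storage   # callable: submap_id -> sub-map data
        self._max_items = max_items
        self._cache = OrderedDict()              # submap_id -> sub-map data

    def get(self, submap_id):
        if submap_id in self._cache:
            # Cache hit: mark this sub-map as most recently used.
            self._cache.move_to_end(submap_id)
            return self._cache[submap_id]
        # Cache miss: load from storage, then evict if over capacity.
        submap = self._load(submap_id)
        self._cache[submap_id] = submap
        if len(self._cache) > self._max_items:
            self._cache.popitem(last=False)      # discard the least recently used
        return submap
```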
  • network optimization may include using data compression techniques to reduce the size of the sub-maps transmitted and implementing a robust error-handling mechanism to manage network failures or slow connections.
  • adaptive quality adjustments may be applied in scenarios with limited memory or bandwidth, so that the system can dynamically adjust the quality of the sub-maps. For instance, in a low-memory situation, the system could load lower-resolution sub-maps to conserve resources.
  • FIG.9 shows a system architecture that incorporates the above map retrieval techniques.
  • the navigation system uses any available data from the basic sensors such as IMU, GNSS, Odometer, and Barometer.
  • the data from the perception sensors are used to update the system.
  • Perception data, when manipulated and processed by the navigation system, can provide an update in challenging areas.
  • the online map built during this process may be used as a reference map together with the local map.
  • the retrieval system helps with improving and optimizing the memory management for the system.
  • the navigation filter update could happen using 2D maps or 3D maps during the process for the navigation states estimation.
  • the whole map covering the area in which the system is moving is divided into sub-maps so that the size and the number of features to be processed remain manageable.
  • 2.5.6 Depth Estimation From Optical Samples [00161]
  • input information comes from the at least one optical sensor, such as perception sensor 112 or perception sensor 222 in the embodiments discussed above.
  • different methods may be employed for determining depth information for objects within the optical samples.
  • the upper branch shown in FIG. 10 represents estimating depth to provide depth data for objects detected within the optical samples.
  • the depth information can be estimated using neural network techniques as discussed below.
  • the middle branch in FIG. 10 can be employed when depth readings are directly available from the at least one optical sensor, such as when using stereo sensors or when a stream of samples is available.
  • range, bearing and elevation for objects within the samples may be extracted and fed to the nonlinear state estimation technique.
  • Neural network techniques can also be applied.
  • the lower branch involves reconstructing a scene representing a local environment surrounding the platform based on information from the at least one optical sensor, including the depth information discussed above.
  • the reconstructed local map can be compared to the global map of the obtained map information and the correlations fed to the nonlinear state estimation technique.
  • neural network techniques can be employed during scene reconstruction. To help illustrate, two examples of scene reconstruction are shown in FIGs. 11 and 12, with the top view in each depicting the respective optical sensor samples and the bottom view depicting the reconstructed scenes.
  • the nonlinear state estimation technique also employs reference map information, such as the online map built as discussed throughout this disclosure, and motion sensor data for the platform.
  • odometry information and/or absolute navigational information can also be fed to the nonlinear state estimation technique if available. These additional sources of information are optional as indicated by the dashed boxes.
  • the nonlinear state estimation technique can then provide an integrated navigation solution for the platform or a revised integrated navigation solution, as indicated by the outputs of position, velocity and/or attitude.
  • the techniques of this disclosure can benefit from the use of neural networks when processing the optical samples.
  • neural networks and deep learning may help mitigate some of the drawbacks of other depth- from-vision techniques.
  • the depth information can be learned from stereo images or a stream of images from a monocular camera, so that neural network and deep learning is used to estimate the depth in real-time using either the same sensor used during training or a different one.
  • a stereo optical sensor may be used during training and a monocular optical sensor may be used in real-time.
  • the term “deep neural network” refers to a neural network that has multiple layers between the input and output layers.
  • One suitable deep neural network is a convolutional neural network (CNN) as schematically represented in FIG.13.
  • the alternating operations of using convolutions to produce feature maps and reducing the dimensionality of the feature maps with subsampling lead to a fully connected output that provides the classification of the object.
  • the depth of the neural network is governed by the number of filters used in the convolution operations, such that increasing the number of filters generally increases the number of features that can be extracted from the optical sample.
  • Another suitable deep neural network is a recurrent neural network (RNN) as schematically represented in FIG.14.
  • The left side of FIG. 14 shows the input, x, progressing through the hidden state, h, to provide the output, o.
  • U, V, W are the weights of the network.
  • the connections between nodes form a directed graph along a temporal sequence as indicated by the unfolded operations on the right side of the figure.
  • 3 System Integration [00164] From the above discussion, it will be appreciated that embodiments of the state estimation techniques of this disclosure may employ a reference map, such as the online map that is built, and a local map for its operation.
  • the reference map whether an online semi-global map or a pre-existing global map, may be a group of sub-maps from the large map that covers the whole area.
  • the online (semi-global) map can be considered as the reference map.
  • when the online map is large enough to warrant usage of the storage and retrieval operations discussed above, it can be divided into sub-maps and one of the sub-maps will be used as the reference map for the navigation filter.
  • the selection of the portion of the reference map to be passed to the navigation filter is based on the current location.
  • Reference maps can be built in real-time during the navigation session or offline.
  • the local map is defined as the map generated from the current detections from the perception sensors based on the current navigation states.
  • The navigation filter measurement model used in the state estimation technique can use different maps from different perception sources, whether 2D or 3D maps.
  • the 2D maps can be obtained from the 3D maps using the projection technique.
  • the navigation filter can work with different combinations of map sources.
  • the filter can use the map built from the same perception sensor or a combination from different perception sensors.
  • the navigation filter can use 2D/3D online map from radar while using 2D/3D local map from radar or vision sensors.
  • the navigation filter can use a 2D/3D online map from vision while using a 2D/3D local map from radar or vision sensors.
  • the local map size depends on the number of detections returned from the perception sensor per scan. [00166]
  • the navigation system in this work has benefits over other techniques that use maps.
  • the system has benefits over a perception-based odometry solution (for example, Visual Odometry (VO) or Radar Odometry (RO)) or a perception-based inertial odometry solution (for example, Visual Inertial Odometry (VIO)).
  • the system may provide an absolute position for update while the other mentioned methods provide relative information.
  • Another characteristic of the system as discussed above is the ability to decide to create or build the surrounding map based on favorable conditions being satisfied. For example, if the system is utilized in an area with good environment conditions such as good GNSS signal, it may not need to build a map for the environment.
  • the system can work without a pre-built map. The system can build the online map for the surrounding area during the navigation session.
  • a nonlinear measurement model of the perception samples is used to directly update the nonlinear state estimation technique used to provide the integrated navigation solution.
  • the perception sensor data comprises information for a given sample covering the field of view of the perception sensor.
  • measurement is usually along the azimuth angle. Therefore, $r_t^i$ represents the measured range at the $i$-th bearing angle.
  • the Markov property implies that there is no dependence between the errors in measurements over time.
  • the aim is to model the probability of a measurement denoted by $z_t$, given knowledge of the map $m$ and the state of the vehicle at time $t$ denoted by $x_t$.
  • the probability of the measurement vector $z_t$ may be represented as $p(z_t \mid x_t, m) = \prod_{i} p\left(z_t^i \mid x_t, m\right)$.
  • the probability of a sample is represented as the multiplication between the probability of each individual measurement given the knowledge of the map and the state of the vehicle.
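  • In code, this factorization of a full perception sample into per-beam likelihoods can be sketched as follows; beam_likelihood stands for any of the measurement models described in the following sections, and log-probabilities are used to avoid numerical underflow.

```python
import math

def sample_log_likelihood(ranges, bearings, state, grid_map, beam_likelihood):
    """log p(z_t | x_t, m) = sum_i log p(r_t^i | x_t, m), assuming independent beams."""
    log_p = 0.0
    for r_i, b_i in zip(ranges, bearings):
        p_i = beam_likelihood(r_i, b_i, state, grid_map)
        log_p += math.log(max(p_i, 1e-12))   # floor keeps the log finite
    return log_p
```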
  • One suitable perception measurement model is a range-based measurement model. Based on the true range between a static perception sensor and a static target, the model describes the error distribution of the depth data from the estimated depth or the depth readings for this range (given that multiple measurements of the same static target are available), as expressed by the probability density function $p\left(r_t^i \mid x_t, m\right)$. In other words, given the knowledge of the map (whether feature-based or location map) and an estimate of the state of the platform, what is the probability of the measured range.
  • $r_t^i$ refers to the $i$-th range at a certain (azimuth/elevation) angle.
  • obtaining $p(z_t \mid x_t, m)$ from $p\left(r_t^i \mid x_t, m\right)$ is a matter of multiplying the probabilities of all ranges.
  • 3.1.1.1 Ray Casting [00172]
  • the true range to the target in the map (this may include a feature-based map, a location-based map, or both) may be computed based on the estimate of the state of the platform. This is done by using ray casting (or ray tracing) algorithms, denoting the true range by $r_t^{i*}$.
  • a ray may be simulated to move in a straight line until it either hits a target in the map or exceeds a certain distance.
  • the ray’s direction in 3D is based on the reported state of the platform (which may include position and orientation and may also be termed “pose”) and the bearing of this specific measurement.
  • a conversion between the perception sensor coordinate frame and the vehicle coordinate frame may establish the starting point and direction of the ray relative to the state of the vehicle.
  • the true range $r_t^{i*}$ from the perception sensor to the target may be found. It is important to note that a target must be in the map for this operation to make sense.
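  • A minimal ray-casting sketch over a 2D occupancy grid (a NumPy array with 1 marking occupied cells) is shown below; it steps along the ray until an occupied cell is hit or the maximum range is exceeded, and all parameter names are illustrative.

```python
import math

def ray_cast(grid, cell_size, x, y, heading, bearing, max_range, step=0.1):
    """Return the true range r* from pose (x, y, heading) along a beam at `bearing`
    (angles in radians) through a 2D occupancy grid (1 = occupied), in metres."""
    angle = heading + bearing
    r = 0.0
    while r < max_range:
        px = x + r * math.cos(angle)
        py = y + r * math.sin(angle)
        col, row = int(px / cell_size), int(py / cell_size)
        # Stop if the ray leaves the grid.
        if not (0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]):
            break
        if grid[row, col] == 1:
            return r            # hit: distance to the first occupied cell
        r += step
    return max_range            # no hit within the sensor's range
```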
  • A schematic depiction of an architecture using a range-based perception model to estimate the probability of the current state of the platform is shown in FIG. 15.
  • particle state refers to the state of the vehicle at time $t$.
  • the input to the system is the perception range measurements along with their respective bearing and elevation angles, such as from depth data from estimated depth or the depth readings.
  • ray casting may be used to estimate $p(z_t \mid x_t, m)$.
  • the belief in the current state is proportional to the probability of the measurement given the state $x_t$ and the map, times the prior probability of the previous state denoted by $p(x_{t-1})$.
  • the sources of range errors may be separated into three categories; the first source of errors is environmental factors affecting the perception sensor, the second source of errors is inherent in the sensor itself, and the third source of errors is related to the dynamics of the vehicle relative to the target.
  • the measurement error due to a specific error source may be modeled as a Gaussian distribution with mean $r_t^{i*}$ and standard deviation denoted by $\sigma_i$.
  • the distribution may be limited between the minimum range denoted by $r_{min}$ and the maximum range of the perception sensor denoted by $r_{max}$ (i.e., a perception sensor can measure only within a limited range).
  • the probability distribution of the perception measurement model can be modelled, for the range $[r_{min}, r_{max}]$ and zero otherwise, as
$$p\left(r_t^i \mid x_t, m\right) = \frac{\exp\left(-\dfrac{\left(r_t^i - r_t^{i*}\right)^2}{2\sigma_i^2}\right)}{\displaystyle\int_{r_{min}}^{r_{max}} \exp\left(-\dfrac{\left(r - r_t^{i*}\right)^2}{2\sigma_i^2}\right)dr}$$
where the numerator refers to a normally distributed random variable with mean $r_t^{i*}$ and standard deviation $\sigma_i$, and the denominator normalizes the distribution over the sensor's valid range. [00176]
  • building the AMM model may involve identifying the mean and variance of each source of error. Once these parameters are estimated, an approximate PDF for the current measurement may be built. To do so, either a field expert's intuition or designed experiments may be used to collect data, depending on the source of error, and then the best normal distribution that fits the collected data is found. Once the best-fitting distribution is found, the mean and variance of the error source under investigation may be extracted. This mean and variance can then be saved for use when the same road conditions apply.
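  • A direct implementation of this truncated-Gaussian beam likelihood is sketched below; true_range would typically come from ray casting as above, and the normalization integral is approximated numerically with a trapezoidal rule.

```python
import math

def range_likelihood(measured_range, true_range, sigma, r_min, r_max, n_steps=200):
    """p(r | x, m): Gaussian centred on the true range, truncated to [r_min, r_max]."""
    if not (r_min <= measured_range <= r_max):
        return 0.0

    def gauss(r):
        return math.exp(-0.5 * ((r - true_range) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Normalizer: integrate the Gaussian over the valid range (trapezoidal rule).
    dr = (r_max - r_min) / n_steps
    eta = sum(gauss(r_min + i * dr) for i in range(n_steps + 1)) * dr
    eta -= 0.5 * dr * (gauss(r_min) + gauss(r_max))   # trapezoid end-point correction
    return gauss(measured_range) / eta if eta > 0 else 0.0
```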
  • the perception sensor pose in the global frame may be denoted $(x, y, z, p, r, A)$, which can be correlated to a position in the map information.
  • the next step is to project the position of the perception sensor in the global frame based on the current measurement $z_t^i$. By doing so, the position of the object that was detected by the perception sensor resulting in the current measurement is estimated in the global frame.
  • particle state refers to the pose of the vehicle at time $t$.
  • the input to the system is the perception range measurements along with their respective bearing and elevation angles.
  • the perception sensor's pose is projected in 3D space for each measurement. Based on the distance to the nearest objects from each projection, $p(z_t \mid x_t, m)$ may be estimated.
  • the probability of the current state denoted by $p(x_t)$ is correspondingly proportional to the probability of the measurement given the state $x_t$ and the map, times the prior probability of the previous state denoted by $p(x_{t-1})$.
  • the error compensation techniques discussed above for a range-based measurement model, such as those discussed in Section 3.1.1.2, may also be applied to the nearest object likelihood (NOL) measurement model.
  • Yet another suitable perception measurement model is a map matching model that features the capability of considering objects that are not detected by the perception sensor by matching between a local map and a reference map, such as a conventional global map or an online map built according to the techniques of this disclosure, when assessing the likelihood of a sample given knowledge of the platform state and the map.
  • the local map is defined as a map created based on perception sensor samples, such as through scene reconstruction as discussed above.
  • the reference map can either be a feature-based or location-based map as discussed above.
  • the reference map may be denoted by $m$, a grid-map of the environment encompassing the platform.
  • the measurement $z_t$ is converted into a local grid-map denoted by $m_{loc}$.
  • the local map must be defined in the global coordinate frame using the rotation matrix from the platform frame to the global frame.
  • the representation of what is in the grid cells of both the reference and local map may reflect whether an object exists in this cell (Occupancy Grid Map (OGM)).
  • the value of a grid cell in the reference and local map can be denoted by $m_{x,y}$ and $m_{loc,x,y}$, respectively.
  • the linear correlation can be used to indicate the likelihood of the current local map matching the reference map, given the current state of the platform.
  • the correlation coefficient between the local and reference map is then represented by:
$$\rho_{m,m_{loc}} = \frac{\sum_{x,y}\left(m_{x,y}-\bar m\right)\left(m_{loc,x,y}-\bar m_{loc}\right)}{\sqrt{\sum_{x,y}\left(m_{x,y}-\bar m\right)^2 \sum_{x,y}\left(m_{loc,x,y}-\bar m_{loc}\right)^2}}$$
where $\bar m$ and $\bar m_{loc}$ are the means of the relevant section of the reference map and of the local map, respectively.
  • negative correlations may be assumed equal to 0 when only the existence of positive correlation or no correlation at all is relevant, allowing the likelihood of the measurement to be represented as $p(z_t \mid x_t, m) = \max\left\{\rho_{m,m_{loc}}, 0\right\}$.
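  • The correlation-based likelihood can be computed directly on the two grids, as in the following sketch, which assumes NumPy arrays of equal shape with the local map already transformed into the reference map frame.

```python
import numpy as np

def map_matching_likelihood(reference_map, local_map):
    """Pearson correlation between overlapping grids, clipped at zero so that only
    positive correlation (or none) contributes to p(z_t | x_t, m)."""
    ref = reference_map.astype(float).ravel()
    loc = local_map.astype(float).ravel()
    ref_c = ref - ref.mean()
    loc_c = loc - loc.mean()
    denom = np.sqrt((ref_c ** 2).sum() * (loc_c ** 2).sum())
    if denom == 0.0:
        return 0.0
    rho = float((ref_c * loc_c).sum() / denom)
    return max(rho, 0.0)
```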
  • A schematic illustration of one possible architecture for a perception map matching measurement model that may be used to estimate belief in the current state of the platform is depicted in FIG. 18.
  • particle state refers to the state of the platform at time $t$.
  • the input to the system is the perception range measurements along with their respective bearing and elevation angles.
  • a local map (e.g., a 2D or 3D occupancy grid) $m_{loc}$ may be built.
  • the correlation between the reference map and the local map may be computed and used as an indicator for $p(z_t \mid x_t, m)$.
  • the belief in the current state is proportional to the probability of the measurement given the state $x_t$ and the map, times the prior probability of the previous state denoted by $p(x_{t-1})$.
  • the error compensation techniques discussed above for a range-based measurement model, such as those discussed in Section 3.1.1.2, may also be applied to the map matching measurement model.
  • a further example of perception measurement models that may be used in the techniques of this disclosure are stochastic, closed-form models that relate the perception measured ranges to the true ranges as a function of the states of the platform as compared to the probabilistic approaches discussed above.
  • the closed-form perception measurement models do not include a deterministic relation between the measured range and the true range.
  • a first correspondence/association is determined by assuming that objects detected by the perception sensor are identified uniquely as objects in the map. Given an estimate of the range to an object and knowing which object is detected by the perception sensor in the map provides the correspondence.
  • the map was represented as a feature-map containing a set of objects
  • every object has a unique signature with respect to the perception sensor, and hence the object detected can be inferred by comparing the perception sensor signature to the object signature; if they match, then it may be assumed that the object detected by the perception sensor is the object that maximizes the correlation between the perception signature and a specific map object. If signatures from several objects lead to the same (or nearly the same) correlation vector, the search area can be limited to a smaller cone centered around the reported azimuth and elevation angles of the perception sensor.
  • $r_t^{i*}$ is the error-free range to the detected object, and $A_t$ and $p_t$ are the azimuth and pitch of the vehicle at time $t$.
  • One embodiment of a closed-form perception measurement model is schematically depicted in FIG. 19.
  • the absolute positioning of the objects and their ranges may be used to build a closed form measurement model as a function of the states of the platform.
  • Another embodiment of a closed-form measurement model that employs information from radar and another type of perception sensor is schematically depicted in FIG. 20.
  • Suitable types of perception sensors include an optical camera, a thermal camera and an infra-red imaging sensor. Images or other samples from the perception sensors may be used to detect and classify objects. A first correspondence is then determined by associating the ranges from the radar with the classified objects. Next, a second correspondence is determined between the objects detected and classified by the perception sensor (labelled with ranges) and objects in the reference map, such as the online map built according to the techniques of this disclosure. Resolving the camera/map correspondence leads to knowing the position of objects in the global frame (absolute position).
  • the absolute positioning of the objects and their ranges may be used to build a closed form measurement model as a function of the states of the platform.
  • a state estimation technique to provide the navigation solution that integrates perception sensor data with the motion sensor data.
  • the following materials discuss exemplary nonlinear system models as well as using another integrated navigation solution through another state estimation technique.
  • a nonlinear error-state model can be used to predict the error-states and then use the error-states to correct the actual states of the vehicle.
  • a linearized error-state model may be used.
  • a nonlinear total-state model can be used to directly estimate the states of the vehicle, including the 3D position, velocity and attitude angles.
  • the solution from another state estimation technique (another filter) that integrates INS and GNSS (or another source of absolute navigational information) may be used to feed the system model for the state estimation technique at hand.
  • 3D navigation solution is provided by calculating 3D position, velocity and attitude of a moving platform.
  • the relative navigational information includes motion sensor data obtained from MEMS-based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, such as sensor assembly 106 of device 100 in FIG. 1.
  • host processor 102 may implement integration module 114 to integrate the information using a nonlinear state estimation technique, such as for example, Mixture PF having the system model defined herein below.
  • the state of device 100, whether tethered or non-tethered to the moving platform, is $x_k = \left[\varphi_k, \lambda_k, h_k, v_k^E, v_k^N, v_k^U, p_k, r_k, A_k\right]^T$, where $\varphi_k$ is the latitude of the vehicle, $\lambda_k$ is the longitude, $h_k$ is the altitude, $v_k^E$ is the velocity along the East direction, $v_k^N$ is the velocity along the North direction, $v_k^U$ is the velocity along the Up vertical direction, $p_k$ is the pitch angle, $r_k$ is the roll angle, and $A_k$ is the azimuth angle.
  • the motion model is used externally in what is called inertial mechanization, which is a nonlinear model as mentioned earlier; the output of this model is the navigation states of the device, such as position, velocity, and attitude.
  • the state estimation or filtering technique estimates the errors in the navigation states obtained by the mechanization, so the estimated state vector by this state estimation or filtering technique is for the error states, and the system model is an error-state system model which transition the previous error-state to the current error-state.
  • the mechanization output is corrected for these estimated errors to provide the corrected navigation states, such as corrected position, velocity and attitude.
  • the estimated error-state is about a nominal value which is the mechanization output. The mechanization can operate either unaided in an open-loop mode, or it can receive feedback from the corrected states, in which case it is called closed-loop mode.
  • the error-state system model commonly used is a linearized model (to be used with KF-based solutions), but the work in this example uses a nonlinear error-state model to avoid the linearization and approximation.
  • In general, the state transition may be written as $x_k = f\left(x_{k-1}, u_{k-1}, w_{k-1}\right)$, where $u_{k-1}$ is the control input, namely the inertial sensors readings that correspond to transforming the state from time epoch $k-1$ to time epoch $k$; this will be the convention used in this explanation for the sensor readings for nomenclature purposes. The nonlinear error-state system model (also called the state transition model) is given by $\delta x_k = f\left(\delta x_{k-1}, u_{k-1}, w_{k-1}\right)$, where $w_{k-1}$ is the process noise, which is independent of the past and present states and accounts for the uncertainty in the platform motion and the control inputs.
  • the inertial frame is the Earth-centered inertial frame (ECI) centered at the center of mass of the Earth and whose Z-axis is the axis of rotation of the Earth.
  • the Earth-centered Earth-fixed (ECEF) frame has the same origin and z-axis as the ECI frame but it rotates with the Earth (hence the name Earth-fixed).
  • Mechanization is a process of converting the output of inertial sensors into position, velocity and attitude information. Mechanization is a recursive process which processes the data based on previous output (or some initial values) and the new measurement from the inertial sensors.
  • Initialization procedures may be tailored to the specific application. First, the initialization of position and velocity will be discussed. In some applications, position may be initialized using a platform’s last known position before it started to move, this may be used in applications where the platform does not move when the navigation system is not running. For the systems where inertial sensors are integrated with absolute navigational information, such as for example GNSS, initial position may be provided by the absolute navigation system.
  • the starting point may be known a priori (pre-surveyed location) which can be used as an initial input.
  • Velocity initialization may be made with zero input, if the platform is stationary. If moving, the velocity may be provided from an external navigation source such as for example, GNSS or odometer.
  • For attitude initialization when the device is stationary, accelerometers measure the components of the reaction to gravity because of the pitch and roll angles (tilt from the horizontal plane).
  • the accelerometers measurement is given by $\mathbf{f} = R_l^b \left[0 \;\; 0 \;\; g\right]^T$, where $g$ is the gravity acceleration. If the accelerometer readings along the X, Y, and Z directions are utilized, the pitch and the roll angles can be calculated as follows:
$$p = \tan^{-1}\left(\frac{f_y}{\sqrt{f_x^2 + f_z^2}}\right), \qquad r = -\tan^{-1}\left(\frac{f_x}{f_z}\right)$$
  • averaging can be used on the accelerometer readings to suppress motion components; the above formulas may then be used with the averaged accelerometer data to initialize pitch and roll.
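  • A sketch of this static (or quasi-static) levelling step is given below; the sign conventions follow the formulas above, the accelerometer samples are assumed to be specific-force readings in the device body frame, and the azimuth helper assumes absolute velocity (e.g., from GNSS) is available.

```python
import math
import numpy as np

def init_pitch_roll(accel_samples):
    """Estimate initial pitch and roll from a window of accelerometer samples
    (N x 3 array of specific force in the body frame, m/s^2). Averaging the
    window suppresses vibration and small motion components."""
    fx, fy, fz = np.mean(np.asarray(accel_samples), axis=0)
    pitch = math.atan2(fy, math.sqrt(fx ** 2 + fz ** 2))
    roll = -math.atan2(fx, fz)
    return pitch, roll

def init_azimuth(v_east, v_north):
    """Initial platform azimuth from absolute velocity, measured from North
    in this convention (A = atan2(vE, vN))."""
    return math.atan2(v_east, v_north)
```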
  • the length of the time frame used for averaging may depend on the application or the mode where the navigation system works (such as for example walking or driving).
  • the motion obtained from these sources may be decoupled from the accelerometer readings, so that the remaining quantities measured by the accelerometers are components of gravity, which enables the calculation of pitch and roll; for example, $p = \tan^{-1}\left(\dfrac{f_y - a_{od}}{\sqrt{f_x^2 + f_z^2}}\right)$, where $a_{od}$ is the platform acceleration derived from the speed and navigational information. [00202] For the azimuth angle, one possible way of initializing it is to obtain it from the absolute navigational information.
  • the azimuth angle can be calculated as follows: $A = \tan^{-1}\left(\dfrac{v^E}{v^N}\right)$. This initial azimuth is the azimuth of the moving platform, and it may be used together with the initial misalignment (such as estimating pitch misalignment with absolute velocity updates as described below) to get the initial device heading. [00203] If velocity is not available from the absolute navigation receiver, then position differences over time may be used to approximate velocity and consequently calculate azimuth. In some applications, azimuth may be initialized using a platform's last known azimuth before it started to move; this may be used in applications where the platform does not move when the navigation system is not running.
  • Attitude Equations [00204]
  • One suitable technique for calculating the attitude angles is to use quaternions through the following equations.
  • the relation between the vector of quaternion parameters and the rotation matrix from the body frame to the local-level frame is as follows:
$$q_1 = \frac{0.25\left[R_b^l(3,2) - R_b^l(2,3)\right]}{q_4}, \quad q_2 = \frac{0.25\left[R_b^l(1,3) - R_b^l(3,1)\right]}{q_4}, \quad q_3 = \frac{0.25\left[R_b^l(2,1) - R_b^l(1,2)\right]}{q_4}, \quad q_4 = 0.5\sqrt{1 + R_b^l(1,1) + R_b^l(2,2) + R_b^l(3,3)}$$
  • the gyroscope readings contain the rotation of the body with respect to the inertial frame, which includes the Earth rotation rate as well as the rotation of the local-level frame (transport rate). The latter two are sensed by the gyroscope and form a part of the readings, so they have to be removed in order to get the actual turn of the device.
  • the skew-symmetric matrix of the body-frame angular increments $\theta_x, \theta_y, \theta_z$ (after removing the Earth rotation and transport rate contributions) may be calculated as
$$S = \begin{bmatrix} 0 & -\theta_z & \theta_y \\ \theta_z & 0 & -\theta_x \\ -\theta_y & \theta_x & 0 \end{bmatrix}$$
and the rotation matrix is updated as $R_{b,k}^{l,Mech} = R_{b,k-1}^{l,Mech}\exp\left(S\right)$, where the exponential of a matrix may be implemented numerically or calculated in closed form as
$$\exp\left(S\right) = I + \frac{\sin\theta^{Mech}}{\theta^{Mech}}S + \frac{1-\cos\theta^{Mech}}{\left(\theta^{Mech}\right)^2}S^2, \qquad \theta^{Mech} = \sqrt{\theta_x^2 + \theta_y^2 + \theta_z^2}$$
as mentioned above.
  • Position and Velocity Equations [00207] Next, position and velocity may be calculated according to the following discussion.
  • one suitable calculation for the latitude may be as follows: $\varphi_k^{Mech} = \varphi_{k-1}^{Mech} + \dfrac{v_{k-1}^{N,Mech}}{R_M + h_{k-1}^{Mech}}\Delta t$. Similarly, the longitude may be calculated as $\lambda_k^{Mech} = \lambda_{k-1}^{Mech} + \dfrac{v_{k-1}^{E,Mech}}{\left(R_N + h_{k-1}^{Mech}\right)\cos\varphi_{k-1}^{Mech}}\Delta t$, where $R_M$ and $R_N$ are the meridian and normal radii of curvature of the Earth's ellipsoid.
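  • A direct transcription of these position-rate equations is sketched below, assuming WGS84 ellipsoid constants for the radii of curvature and angles in radians.

```python
import math

A_WGS84 = 6378137.0          # semi-major axis (m)
E2_WGS84 = 6.69437999014e-3  # first eccentricity squared

def update_position(lat, lon, h, v_e, v_n, v_u, dt):
    """Propagate geodetic latitude, longitude and altitude one step from ENU velocity."""
    sin_lat = math.sin(lat)
    # Meridian (R_M) and normal (R_N) radii of curvature of the ellipsoid.
    r_m = A_WGS84 * (1 - E2_WGS84) / (1 - E2_WGS84 * sin_lat ** 2) ** 1.5
    r_n = A_WGS84 / math.sqrt(1 - E2_WGS84 * sin_lat ** 2)

    lat_new = lat + (v_n / (r_m + h)) * dt
    lon_new = lon + (v_e / ((r_n + h) * math.cos(lat))) * dt
    h_new = h + v_u * dt
    return lat_new, lon_new, h_new
```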
  • equations for attitude, velocity and position may be implemented differently, such as for example, using a better numerical integration technique for position. Furthermore, coning and sculling may be used to provide more precise mechanization output.
  • the system model is the state transition model and since this is an error state system model, this system model transitions from the error state of the previous iteration k ⁇ 1 to the error state of the current iteration k .
  • the error state vector has to be described first. The error state consists of the errors in the navigation states, the errors in the rotation matrix $R_b^l$ that transforms from the device body frame to the local-level frame, and the errors in the sensors readings (i.e. the errors in the control input).
  • the errors in the navigation states are $\delta x_k = \left[\delta\varphi_k, \delta\lambda_k, \delta h_k, \delta v_k^E, \delta v_k^N, \delta v_k^U, \delta p_k, \delta r_k, \delta A_k\right]^T$, which are the errors in latitude, longitude, altitude, velocity along East, North, and Up directions, pitch, roll, and azimuth, respectively. The errors in $R_b^l$ are the errors in the nine elements of this $3\times3$ matrix; the $3\times3$ matrix of the errors will be called $\delta R_b^l$.
  • the errors associated with the different control inputs are the stochastic errors in the accelerometer readings and the stochastic errors in the gyroscope readings.
  • Modeling Sensors’ Errors A system model for the sensors’ errors may be used.
  • the traditional model for these sensors’ errors in the literature is the first order Gauss Markov model, which can be used here, but other models can be used as well.
  • a higher order Auto-Regressive (AR) model to model the drift in each one of the inertial sensors may be used and is demonstrated here.
  • the state vector has to be augmented with a number of elements equal to the order of the AR model (which is 120). Consequently, the covariance matrix and other matrices used by the KF will increase drastically in size (an increase of 120 in rows and an increase of 120 in columns for each inertial sensor), which makes this difficult to realize.
  • the stochastic gyroscope drift is modeled by any model such as, for example, Gauss Markov (GM) or AR; in the system model, the state vector has to be augmented accordingly.
  • the normal way of doing this augmentation will lead to, for example in the case of AR with order 120, the addition of 120 states to the state vector. Since this will introduce a lot of computational overhead and will require an increase in the number of used particles, another approach is used in this work.
  • the flexibility of the models used by PF was exploited together with an approximation that experimentally proved to work well.
  • the state vector in PF is augmented by only one state for the gyroscope drift. So at the k-th iteration, all the values of the gyroscope drift state in the particle population of iteration k-1 will be propagated as usual, but for the other previous drift values from k-120 to k-2, only the mean of the estimated drift will be used and propagated.
  • the errors in the rotation matrix that transforms from the device body frame to the local-level frame may be modeled according to the following discussion.
  • As mentioned, $R_b^l$ is the rotation matrix that transforms from the device body frame to the local-level frame.
  • the following steps get the error states of $R_b^l$ from all the error states of the previous iteration; therefore, this part of the system model gets the full error in $R_b^l$ and not an approximation or a linearization.
  • the discrete version of the derivative of the corrected velocity can be calculated from the corrected specific force and the angular-rate terms, where the combined rotation-rate matrix satisfies $\Omega_{il,k}^{l} = \Omega_{ie,k}^{l} + \Omega_{el,k}^{l}$ (the rotation of the local-level frame with respect to the inertial frame, composed of the Earth rotation rate and the transport rate).
  • the gravity $g$ can be calculated as follows:
$$g_k^{Mech} = a_1\left(1 + a_2\sin^2\varphi_k^{Mech} + a_3\sin^4\varphi_k^{Mech}\right) + \left(a_4 + a_5\sin^2\varphi_k^{Mech}\right)h_k^{Mech} + a_6\left(h_k^{Mech}\right)^2$$
  • the error in the velocity can then be calculated from the above quantities. Position Errors [00217]
  • a set of common reference frames is used in this example for demonstration purposes, other definitions of reference frames may be used.
  • the body frame of the vehicle has the X-axis along the transversal direction, Y-axis along the forward longitudinal direction, and Z-axis along the vertical direction of the vehicle completing a right-handed system.
  • the local-level frame is the ENU frame that has axes along East, North, and vertical (Up) directions.
  • the inertial frame is the Earth-centered inertial frame (ECI) centered at the center of mass of the Earth and whose Z-axis is the axis of rotation of the Earth.
  • the value of a grid cell in the reference and local map can be denoted by $m_{x,y}$ and $m_{loc,x,y}$, respectively.
  • the linear correlation can be used to indicate the likelihood of the current local map matching the reference map, given the current state of the vehicle. Let us denote the mean of the relevant section of the reference map by $\bar m$ and the mean of the local map by $\bar m_{loc}$.
  • the correlation coefficient between the local and reference map is
$$\rho_{m,m_{loc}} = \frac{\sum_{x,y}\left(m_{x,y}-\bar m\right)\left(m_{loc,x,y}-\bar m_{loc}\right)}{\sqrt{\sum_{x,y}\left(m_{x,y}-\bar m\right)^2 \sum_{x,y}\left(m_{loc,x,y}-\bar m_{loc}\right)^2}}$$
[00249] Only a positive correlation or no correlation at all is significant, so all negative correlations can be assumed equal to 0, allowing the likelihood of a perception sample to be represented as $p(z_t \mid x_t, m) = \max\left\{\rho_{m,m_{loc}}, 0\right\}$.
  • $p(z_t \mid x_t, m)$ can be directly used to weight the importance of a particle with known state in a map.
  • Perception Map-Matching measurement model
  • the PF is initialized by generating $N$ particles using a random distribution (could be within a certain confined distance from the initial state).
  • the proposed system model is used to propagate the state of the $N$ particles based on the inputs from the inertial sensors.
  • the basic SIR PF filter has certain limitations because the samples are only predicted from the system model and then the most recent observation is used to adjust the importance weights of this prediction.
  • Mixture PF allows the addition of further samples predicted from the most recent observation in addition to the samples predicted from the system model. The importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs.
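  • The following is a compact SIR-style particle filter loop illustrating the propagate/weight/resample cycle described above; propagate and measurement_likelihood are placeholders for the system model and the perception measurement model, rng is a NumPy random Generator, and the Mixture PF step of adding particles drawn from the measurement model is only indicated by a comment.

```python
import numpy as np

def particle_filter_step(particles, weights, u, z, propagate, measurement_likelihood, rng):
    """One predict/update/resample iteration of a sampling-importance-resampling PF.

    particles: (N, d) array of state hypotheses; weights: (N,) importance weights.
    u: control input (e.g., inertial sensor readings); z: perception measurement.
    """
    n = len(particles)

    # 1) Prediction: propagate each particle through the (nonlinear) system model.
    particles = np.array([propagate(p, u, rng) for p in particles])

    # 2) Update: re-weight particles by the likelihood of the latest measurement.
    weights = weights * np.array([measurement_likelihood(z, p) for p in particles])
    total = weights.sum()
    weights = weights / total if total > 0 else np.full(n, 1.0 / n)

    # (A Mixture PF would additionally add particles sampled from the measurement
    #  model here, weighted by their consistency with the previous particle set.)

    # 3) Systematic resampling when the effective sample size becomes small.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

    return particles, weights
```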
  • a method may be employed that generates particles drawn from the measurement model.
  • a measure of how well the perception measurement aligns with a specific state may be obtained by computing the probability of the most recent measurement, denoted by $p(z_t \mid x_t, m)$.
  • several matches can be found by matching the local map to the reference map at different states and saving the states where the match between the local map and the reference map results in a high correlation factor. If such states are found, it is possible to generate particles drawn from the measurement model rather than the system model.
  • the size of the search space (an infinite number of states) within the reference map should be considered in order to effectively apply constraints that limit the search space and consequently reduce computational complexity.
  • the importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs.
  • the closed form perception model is a non-probabilistic modelling approach that assumes objects detected by the perception sensor can be identified uniquely as objects in the map. Given an estimate of the range to an object and knowledge of which object is detected by the perception sensor in the map, a correspondence may be determined that relates the measurements to the states of the platform in the closed-form model.
  • perception sensor raw data is used and is integrated with the inertial sensors.
  • the perception sensor raw data used in the present navigation module in this example are ranges.
  • the ranges and range-rates can be used as the measurement updates to update the position and velocity states of the vehicle.
  • the measurement model that relates these measurements to the position and velocity states is a nonlinear model.
  • the KF integration solutions linearize this model.
  • PF with its ability to deal with nonlinear models may provide improved performance for tightly-coupled integration because it can use the exact nonlinear measurement model, in addition to the fact that the system model may be a nonlinear model.
  • a suitable nonlinear perception range model for $M$ detected objects relates each measured range to the true geometric range:
$$\rho_m = \sqrt{\left(x - x_m\right)^2 + \left(y - y_m\right)^2 + \left(z - z_m\right)^2} + \tilde{\varepsilon}_m, \qquad m = 1, \ldots, M$$
[00257] Since the position state $x$ in the above equation is in ECEF rectangular coordinates, it may be translated to Geodetic coordinates for the state vector used in the Mixture PF. The relationship between the Geodetic and Cartesian coordinates is given by:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \left(R_N + h\right)\cos\varphi\cos\lambda \\ \left(R_N + h\right)\cos\varphi\sin\lambda \\ \left(R_N\left(1 - e^2\right) + h\right)\sin\varphi \end{bmatrix}$$
where $R_N$ is the normal radius of curvature of the Earth's ellipsoid and $e$ is the eccentricity of the Meridian ellipse.
  • the range model in Geodetic coordinates may be represented by:
$$\rho_m = \sqrt{\left[\left(R_N + h\right)\cos\varphi\cos\lambda - x_m\right]^2 + \left[\left(R_N + h\right)\cos\varphi\sin\lambda - y_m\right]^2 + \left[\left(R_N\left(1 - e^2\right) + h\right)\sin\varphi - z_m\right]^2} + \tilde{\varepsilon}_m$$
and the corresponding range-rate for the $m$-th detected object is
$$\dot{\rho}_m = \mathbf{1}_x^m\left(v_x - v_x^m\right) + \mathbf{1}_y^m\left(v_y - v_y^m\right) + \mathbf{1}_z^m\left(v_z - v_z^m\right)$$
where $\mathbf{1}^m = \left[\mathbf{1}_x^m, \mathbf{1}_y^m, \mathbf{1}_z^m\right]$ is the line-of-sight unit vector from the $m$-th detected object to the perception sensor.
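  • The geodetic-to-ECEF relationship used in these range equations can be sketched as follows, assuming WGS84 parameters; range_to_object returns the predicted range to a mapped object with a known ECEF position.

```python
import math

A = 6378137.0               # WGS84 semi-major axis (m)
E2 = 6.69437999014e-3       # WGS84 first eccentricity squared

def geodetic_to_ecef(lat, lon, h):
    """Convert geodetic latitude/longitude (radians) and height (m) to ECEF x, y, z."""
    r_n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # normal radius of curvature
    x = (r_n + h) * math.cos(lat) * math.cos(lon)
    y = (r_n + h) * math.cos(lat) * math.sin(lon)
    z = (r_n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def range_to_object(lat, lon, h, obj_xyz):
    """Predicted range from the platform to a mapped object with known ECEF position."""
    x, y, z = geodetic_to_ecef(lat, lon, h)
    return math.sqrt((x - obj_xyz[0]) ** 2 + (y - obj_xyz[1]) ** 2 + (z - obj_xyz[2]) ** 2)
```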
  • the transformation of the velocity uses the rotation matrix from the local-level frame to ECEF ($R_l^e$) and is as follows:
$$\begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} = R_l^e \begin{bmatrix} v^E \\ v^N \\ v^U \end{bmatrix} = \begin{bmatrix} -\sin\lambda & -\sin\varphi\cos\lambda & \cos\varphi\cos\lambda \\ \cos\lambda & -\sin\varphi\sin\lambda & \cos\varphi\sin\lambda \\ 0 & \cos\varphi & \sin\varphi \end{bmatrix}\begin{bmatrix} v^E \\ v^N \\ v^U \end{bmatrix}$$
The line-of-sight unit vector from the $m$-th detected object to the perception sensor is expressed as
$$\mathbf{1}^m = \frac{\mathbf{d}^m}{\left\|\mathbf{d}^m\right\|_2}, \qquad \mathbf{d}^m = \begin{bmatrix} \left(R_N + h\right)\cos\varphi\cos\lambda - x_m \\ \left(R_N + h\right)\cos\varphi\sin\lambda - y_m \\ \left(R_N\left(1 - e^2\right) + h\right)\sin\varphi - z_m \end{bmatrix}$$
for each of the $M$ perception detected objects.
  • the measurement model is a nonlinear model that relates the difference between the mechanization estimate of the ranges and range-rates and the perception sensor raw measurements (range measurements and range-rates) at a time epoch $k$, $z_k$, to the states at time $k$, $x_k$, and the measurement noise $\varepsilon_k$.
  • the part of the measurement model for the range-rates relates the difference between the mechanization-predicted range-rates and the measured range-rates to the velocity error states. [00263] Furthermore, the mechanization version of the line-of-sight unit vector from the $m$-th detected object to the perception sensor receiver is expressed as follows:
$$\mathbf{1}^{m,Mech} = \frac{\mathbf{r}^{Mech} - \mathbf{r}^m}{\left\|\mathbf{r}^{Mech} - \mathbf{r}^m\right\|_2}$$
where the receiver position from mechanization, $\mathbf{r}^{Mech}$, is as defined above.
  • the corrected (or estimated) version of the line-of-sight unit vector from the m-th detected object to the perception sensor receiver is expressed as 1_k^{m,Corr} = [x_k^{Corr} − x_m, y_k^{Corr} − y_m, z_k^{Corr} − z_m]^T / ‖[x_k^{Corr} − x_m, y_k^{Corr} − y_m, z_k^{Corr} − z_m]‖₂.
  • 3.3.4.2 Example 2 Measurement Model for (1) Total-State System Model, (2) System Model With Another Integration Filter
  • the measurement model of the current nonlinear state estimation technique is a nonlinear model that relates the perception sensor raw measurements (range measurements and range-rates) at a time epoch k to the states at time k and the measurement noise.
  • the position of the vehicle may be estimated by matching the current sample from the perception sensor with a surveyed database of samples, where each sample is associated with a state.
  • the sample that results in the highest match indicator (e.g., correlation factor) is selected, and the state associated with that sample provides the position estimate.
  • Another approach is to use unique features in the map that can be detected by the perception sensor, and once detected, a position can be inferred. For example, if the perception sensor detects a very specific distribution of road signs across its field of view, the equivalent geometric distribution of signs can be searched for in the map, and position can thereby be inferred based on the perception sensor map match, the previous estimate of the platform position, and other constraints.
  • Motion constraints like non-holonomic constraints can be applied to limit the search space for a match within the map.
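To make the matching and constraint ideas above concrete, the sketch below compares the current perception sample (represented here as a simple range profile) against a database of surveyed samples, restricts the search to candidate states near the previous position estimate as a stand-in for motion constraints, and selects the state with the highest correlation factor. The database layout, the range-profile representation, and the gating radius are illustrative assumptions.

```python
import numpy as np

def match_position(current_profile, database, previous_pos, gate_radius):
    """Return the database state whose stored profile best correlates with the
    current perception sample, searching only near the previous estimate."""
    best_state, best_score = None, -np.inf
    cur = (current_profile - current_profile.mean()) / (current_profile.std() + 1e-9)
    for state, profile in database:
        if np.linalg.norm(np.asarray(state) - np.asarray(previous_pos)) > gate_radius:
            continue                        # motion constraint: skip far candidates
        ref = (profile - profile.mean()) / (profile.std() + 1e-9)
        score = float(np.mean(cur * ref))   # normalized correlation factor
        if score > best_score:
            best_state, best_score = state, score
    return best_state, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = [((float(x), 0.0), rng.normal(size=64)) for x in range(0, 100, 10)]
    truth_state, truth_profile = db[3]
    noisy = truth_profile + 0.2 * rng.normal(size=64)
    print(match_position(noisy, db, previous_pos=(28.0, 0.0), gate_radius=15.0))
```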
  • These loosely-coupled approaches may be employed with the error-state or the total-state system model.
  • the loosely-coupled integration uses position and velocity updates from the perception sensor/map estimator.
  • the measurements are given as the position and velocity from the perception sensor/map estimator, together with the measurement noise in the perception-derived position and velocity. 3.4.2 Perception Sensor Doppler Shift Update [00269]
  • One of the main observables for some types of perception sensors is the Doppler information associated with each target. This raw data is independent of the perception sensor range estimation.
  • the incoming frequency at the sensor receiver is not exactly the frequency originally transmitted by the perception sensor but is shifted from that original value after reflection by the target. This is called the Doppler shift, and it is due to relative motion between the object/target and the perception sensor receiver.
  • [v_x^m, v_y^m, v_z^m] is the m-th object velocity in the ECEF frame
  • [v_x, v_y, v_z] is the true receiver velocity in the ECEF frame
  • f_t is the perception sensor's transmitted frequency
  • 1^m = [1_x^m, 1_y^m, 1_z^m] is the true line-of-sight vector of the reflection from the m-th object to the receiver.
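A minimal numerical sketch of the Doppler relationship described above follows. It predicts the expected Doppler shift from the relative velocity projected onto the line of sight; the transmitted frequency, the sign convention (positive for a closing target), and the use of a one-way shift rather than a sensor-specific two-way convention are assumptions made only for illustration.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def predicted_doppler(f_transmit, receiver_pos, receiver_vel, obj_pos, obj_vel):
    """Expected Doppler shift (Hz) for one detected object, assuming the shift
    is proportional to the closing speed along the line of sight."""
    los = receiver_pos - obj_pos
    los = los / np.linalg.norm(los)                  # unit vector: object -> receiver
    closing_speed = -(receiver_vel - obj_vel) @ los  # > 0 when the range is shrinking
    return f_transmit * closing_speed / C

if __name__ == "__main__":
    f_tx = 77e9                          # hypothetical automotive radar carrier
    rx_p = np.array([0.0, 0.0, 0.0])
    rx_v = np.array([15.0, 0.0, 0.0])    # receiver driving along +x
    tgt_p = np.array([50.0, 0.0, 0.0])   # static target ahead
    print(predicted_doppler(f_tx, rx_p, rx_v, tgt_p, np.zeros(3)))
```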
  • misalignment may refer to either a mounting misalignment when the device is strapped to the platform or a varying misalignment when the device is non-strapped.
  • an optional misalignment procedure may be performed to calculate the relative orientation between the frame of the sensor assembly (i.e. device frame) and the frame of the moving platform.
  • the device heading, pitch, and roll can be different from the heading, pitch, and roll of the platform (the attitude angles of the platform). To obtain an accurate navigation solution for the platform and/or device (processed on the device), the navigation algorithm should have information about the misalignment as well as the absolute attitude of the platform. This misalignment detection and estimation is intended to enhance the navigation solution.
  • the platform attitude angles must be known. Since the device attitude angles are known, the misalignment angles between the device and platform frame are required to obtain the platform attitude angles.
  • Example 1 Heading Misalignment Using Absolute Velocity Updates
  • absolute velocity updates are used to estimate heading misalignment. In order to calculate the portable device heading from gyroscopes, an initial heading of the device has to be known.
  • an absolute velocity source, such as GNSS, may be used to obtain this initial heading.
  • If a magnetometer is available and has adequate readings, it will be used to get the initial device heading. If an absolute velocity source is available and a magnetometer is either not available or does not have adequate readings, the velocity source will be used to get the initial heading of the moving platform, and a routine is run to get the initial heading misalignment of the portable device with respect to the moving platform (which is described below); the initial device heading can then be obtained. If an absolute velocity source is available and a magnetometer is available with adequate readings, a blended version of the initial device heading calculated from the above two options can be formed.
  • This example details a suitable routine to get the initial heading misalignment of the portable device with respect to the moving platform if an absolute velocity source is available (such as GNSS).
  • This routine needs: (i) a very first heading of the platform (person, vehicle, or other) that can be obtained from the source of absolute velocity, provided that the device is not stationary; and (ii) the source of absolute velocity to be available for a short duration, such as, for example, about 5 seconds.
  • a routine is run to get the initial heading misalignment of the device with respect to the moving platform (this routine is described below), and the initial device heading is then obtained. Where a magnetometer is available and has adequate readings, a better blended version of the initial device heading calculated from the above-mentioned two options can be formed.
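The following sketch illustrates one way such a routine could begin: the platform heading is derived from the absolute (e.g., GNSS) velocity while the platform is moving, and the initial heading misalignment is the wrapped difference between the device heading and that platform heading, averaged over a short window of samples. The function names, the stationarity threshold, and the angle conventions are illustrative assumptions rather than the specific routine of this disclosure.

```python
import math

def platform_heading_from_velocity(v_east, v_north):
    """Platform heading (deg, clockwise from North) from absolute velocity."""
    return math.degrees(math.atan2(v_east, v_north)) % 360.0

def wrap_angle(deg):
    """Wrap an angle difference into [-180, 180) degrees."""
    return (deg + 180.0) % 360.0 - 180.0

def initial_heading_misalignment(device_headings, velocities):
    """Average device-minus-platform heading over a short window of samples
    (e.g., ~5 s of GNSS velocity while the platform is not stationary)."""
    diffs = []
    for dev_h, (ve, vn) in zip(device_headings, velocities):
        if math.hypot(ve, vn) < 1.0:     # skip near-stationary samples
            continue
        plat_h = platform_heading_from_velocity(ve, vn)
        diffs.append(wrap_angle(dev_h - plat_h))
    return sum(diffs) / len(diffs) if diffs else None

if __name__ == "__main__":
    device = [100.0, 101.0, 99.5, 100.5]
    vel = [(5.0, 8.7), (5.1, 8.6), (4.9, 8.8), (5.0, 8.7)]  # heading ~30 deg
    print(initial_heading_misalignment(device, vel))        # roughly +70 deg
```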
  • Example 3 Heading Misalignment Using Acceleration and Deceleration
  • the misalignment between a device and a platform may be determined from acceleration and/or deceleration of the platform, utilizing motion sensor data in the presence or in the absence of absolute navigational information updates. Details regarding suitable techniques may be found in commonly-owned U.S. Patent No. 9,797,727, issued October 24, 2017, which is hereby incorporated by reference in its entirety.
  • Example 4 Pitch Misalignment Using Absolute Velocity Updates
  • absolute velocity updates may be used to estimate pitch misalignment.
  • the device pitch angle can be different than the pitch angle of the platform because of mounting misalignment or varying misalignment when the device is non-strapped.
  • the pitch misalignment angle is calculated.
  • pitch misalignment angle is the difference between the device pitch angle and the pitch angle of the platform.
  • a state estimation technique is used.
  • a system model that can be used is a Gauss-Markov process, while measurements are obtained from GNSS velocity and accelerometer measurements and applied as a measurement update through the measurement model.
  • System pitch angle is calculated using accelerometer readings (the forward, lateral, and vertical accelerometer readings) and a calculated forward acceleration of the platform.
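A simplified sketch of this computation follows: the platform pitch is derived from the absolute velocity, the device pitch is derived from the accelerometer readings after removing the platform's forward acceleration, and the pitch misalignment is their difference. The accelerometer sign conventions and the use of a direct difference in place of the Gauss-Markov state estimation described above are illustrative simplifications.

```python
import math

def platform_pitch_from_velocity(v_east, v_north, v_up):
    """Platform pitch (rad) from absolute velocity: climb angle of the velocity vector."""
    return math.atan2(v_up, math.hypot(v_east, v_north))

def device_pitch_from_accels(f_forward, f_lateral, f_vertical, forward_accel):
    """Device pitch (rad) from accelerometer readings, compensating the
    platform's forward acceleration (assumed known, e.g., from GNSS velocity)."""
    return math.atan2(f_forward - forward_accel, math.hypot(f_lateral, f_vertical))

def pitch_misalignment(device_pitch, platform_pitch):
    """Pitch misalignment is the difference between device and platform pitch."""
    return device_pitch - platform_pitch

if __name__ == "__main__":
    plat = platform_pitch_from_velocity(10.0, 0.0, 0.5)   # gentle climb
    dev = device_pitch_from_accels(1.8, 0.05, 9.7, 0.1)   # hypothetical readings
    print(math.degrees(pitch_misalignment(dev, plat)))
```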
  • Still further aspects of this disclosure relate to using other observables, including information from a GNSS positioning system and an odometer, as measurement updates in the state estimation technique when integrating the perception sensor data and the motion sensor data. These optional observations may be used to estimate a more accurate state.
  • INS/GNSS integration: Three main types of INS/GNSS integration have been proposed to attain maximum advantage depending upon the type of use and choice of simplicity versus robustness, leading to three main integration architectures: loosely coupled, tightly coupled, and ultra-tightly (or deeply) coupled. Loosely coupled integration uses an estimation technique to integrate inertial sensors data with the position and velocity output of a GNSS receiver.
  • the distinguishing feature of this configuration is a separate filter for the GNSS and is an example of cascaded integration because of the two filters (GNSS filter and integration filter) used in sequence.
  • Tightly coupled integration uses an estimation technique to integrate inertial sensors readings with raw GNSS data (i.e. pseudoranges that can be generated from code or carrier phase or a combination of both, and pseudorange rates that can be calculated from Doppler shifts) to get the vehicle position, velocity, and orientation.
  • In the loosely coupled integration scheme, at least four satellites are needed to provide acceptable GNSS position and velocity input to the integration technique.
  • the advantage of the tightly coupled approach is that less than four satellites can be used as this integration can provide a GNSS update even if fewer than four satellites are visible, which is typical of a real life trajectory in urban environments as well as thick forest canopies and steep hills.
  • Another advantage of tightly coupled integration is that satellites with poor GNSS measurements can be detected and rejected from being used in the integrated solution.
  • Ultra-tight (deep) integration has two major differences with regard to the other architectures. Firstly, there is a basic difference in the architecture of the GNSS receiver compared to those used in loose and tight integration.
  • the information from INS is used as an integral part of the GNSS receiver, thus, INS and GNSS are no longer independent navigators, and the GNSS receiver itself accepts feedback. It should be understood that the present navigation solution may be utilized in any of the foregoing types of integration.
  • the state estimation or filtering techniques used for inertial sensors/GNSS integration may work in a total-state approach or in an error state approach, each of which has characteristics described above. It would be known to a person skilled in the art that not all the state estimation or filtering techniques can work in both approaches.
  • error-state system model and total-state system model examples are described below; a first example integrates absolute navigational information with an error-state system model.
  • a three-dimensional (3D) navigation solution is provided by calculating 3D position, velocity and attitude of a moving platform.
  • the relative navigational information includes motion sensor data obtained from MEMS-based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, such as sensor assembly 106 of device 100 in FIG.1.
  • a source of absolute navigational information 116 is also used and host processor 102 may implement integration module 114 to integrate the information using a nonlinear state estimation technique, such as for example, Mixture PF.
  • the reference-based absolute navigational information 116, such as from a GNSS receiver, and the motion sensor data, such as from sensor assembly 106, are integrated using Mixture PF in either a loosely coupled, tightly coupled, or hybrid loosely/tightly coupled architecture, having a system and measurement model, wherein the system model is either a nonlinear error-state system model or a nonlinear total-state model, without the linearization or approximation that are used with the traditional KF-based solutions and their linearized error-state system models.
  • the filter may optionally be programmed to comprise advanced modeling of inertial sensors stochastic drift.
  • the filter may optionally be further programmed to use derived updates for such drift from GNSS, where appropriate.
  • the filter may optionally be programmed to automatically detect and assess the quality of GNSS information, and further provide a means of discarding or discounting degraded information.
  • the filter may optionally be programmed to automatically select between a loosely coupled and a tightly coupled integration scheme. Moreover, where tightly coupled architecture is selected, the GNSS information from each available satellite may be assessed independently and either discarded (where degraded) or utilized as a measurement update.
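One simple way such selection and per-satellite screening logic could be organized is sketched below: loosely coupled updates are used when enough healthy satellites support a GNSS position/velocity fix, and tightly coupled updates are otherwise formed from whichever individual satellites pass quality checks. The quality metrics (carrier-to-noise ratio and elevation thresholds) and the threshold values are illustrative assumptions, not requirements of this disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SatelliteMeasurement:
    prn: int
    pseudorange: float      # meters
    cn0: float              # carrier-to-noise density, dB-Hz
    elevation_deg: float    # elevation angle above the horizon

def usable(sat: SatelliteMeasurement, cn0_min=30.0, elev_min=10.0) -> bool:
    """Per-satellite quality gate: discard degraded measurements."""
    return sat.cn0 >= cn0_min and sat.elevation_deg >= elev_min

def select_update(sats: List[SatelliteMeasurement]):
    """Choose between loosely and tightly coupled GNSS updates."""
    good = [s for s in sats if usable(s)]
    if len(good) >= 4:
        # Enough satellites for a standalone GNSS fix: a loosely coupled
        # position/velocity update may be used.
        return "loosely_coupled", good
    if good:
        # Fewer than four satellites: a tightly coupled update using the raw
        # measurements of the remaining healthy satellites.
        return "tightly_coupled", good
    return "no_gnss_update", []

if __name__ == "__main__":
    sats = [SatelliteMeasurement(1, 2.1e7, 45.0, 60.0),
            SatelliteMeasurement(7, 2.3e7, 27.0, 8.0),    # degraded: rejected
            SatelliteMeasurement(13, 2.2e7, 38.0, 35.0)]
    print(select_update(sats))
```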
  • the navigation solution state vector is x_k = [φ_k, λ_k, h_k, v_k^E, v_k^N, v_k^U, p_k, r_k, A_k]^T, where:
  • φ_k is the latitude of the vehicle
  • λ_k is the longitude
  • h_k is the altitude
  • v_k^E is the velocity along the East direction
  • v_k^N is the velocity along the North direction
  • v_k^U is the velocity along the Up vertical direction
  • p_k is the pitch angle
  • r_k is the roll angle
  • A_k is the azimuth angle.
  • the navigation module is utilized to determine a three-dimensional (3D) navigation solution by calculating 3D position, velocity and attitude of a moving platform.
  • the module comprises absolute navigational information from a GNSS receiver, relative navigational information from MEMS- based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, and a processor programmed to integrate the information using a nonlinear state estimation technique, such as for example, Mixture PF having the system and measurement models defined herein below.
  • the present navigation module targets a 3D navigation solution employing MEMS-based inertial sensors/GNSS integration using Mixture PF.
  • the measurement noise is independent of the past and current states and of the process noise, and accounts for uncertainty in GNSS readings.
  • 3.6.1.1 Navigation Solution: the state vector is x_k = [φ_k, λ_k, h_k, v_k^E, v_k^N, v_k^U, p_k, r_k, A_k]^T, where φ_k is the latitude of the vehicle, λ_k is the longitude, h_k is the altitude, v_k^E is the velocity along the East direction, v_k^N is the velocity along the North direction, v_k^U is the velocity along the Up vertical direction, p_k is the pitch angle, r_k is the roll angle, and A_k is the azimuth angle.
  • the motion model is used externally in what is called inertial mechanization, which is a nonlinear model as mentioned earlier; the output of this model is the navigation states of the module, such as position, velocity, and attitude.
  • the state estimation or filtering technique estimates the errors in the navigation states obtained by the mechanization, so the estimated state vector of this state estimation or filtering technique is for the error states, and the system model is an error-state system model which transitions the previous error-state to the current error-state.
  • the mechanization output is corrected for these estimated errors to provide the corrected navigation states, such as corrected position, velocity and attitude.
  • the estimated error-state is about a nominal value, which is the mechanization output. The mechanization can operate either unaided in an open-loop mode, or it can receive feedback from the corrected states, in which case it is called closed-loop mode.
  • the error-state system model commonly used is a linearized model (to be used with KF-based solutions), but the work in this example uses a nonlinear error-state model to avoid the linearization and approximation.
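The relationship between the mechanization and the error-state filter can be summarized in a short sketch: the mechanization propagates the nominal navigation states, the filter estimates the errors in those states, and the corrected solution is the mechanization output minus the estimated errors, optionally fed back in closed-loop mode. The state layout and the trivial propagation below are placeholders for illustration only.

```python
import numpy as np

def mechanization_step(nav_state, imu_sample, dt):
    """Placeholder nominal-state propagation (stands in for full inertial
    mechanization of position, velocity and attitude)."""
    pos, vel = nav_state
    return pos + vel * dt, vel + imu_sample * dt

def correct(nav_state, error_estimate):
    """Corrected navigation states: mechanization output minus estimated errors."""
    pos, vel = nav_state
    pos_err, vel_err = error_estimate
    return pos - pos_err, vel - vel_err

if __name__ == "__main__":
    state = (np.zeros(3), np.array([10.0, 0.0, 0.0]))
    closed_loop = True
    for k in range(3):
        state = mechanization_step(state, imu_sample=np.array([0.1, 0.0, 0.0]), dt=1.0)
        err = (np.array([0.5, 0.0, 0.0]), np.array([0.05, 0.0, 0.0]))  # from the filter
        corrected = correct(state, err)
        if closed_loop:
            state = corrected        # feed the corrected states back
        print(k, corrected)
```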
  • the nonlinear system model may be written in the general form x_k = f(x_{k−1}, u_k), where f is a nonlinear function of the previous state and the control input.
  • u_k is the control input, which is the inertial sensor readings that correspond to transforming the state from time epoch k−1 to time epoch k; this is the convention used in this explanation for the sensor readings, for nomenclature purposes.
  • Mixture PF is the filtering technique used in this example; it differs from the SIR PF (Sampling/Importance Resampling PF) in how new samples are generated and weighted.
  • In the SIR PF, the samples are predicted from the system model, and then the most recent observation is used to adjust the importance weights of this prediction.
  • the Mixture PF adds to the samples predicted from the system model some samples predicted from the most recent observation.
  • the importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs.
  • some samples predicted according to the most recent observation are added to those samples predicted according to the system model.
  • the most recent observation is used to adjust the importance weights of the samples predicted according to the system model.
  • the importance weights of the additional samples predicted according to the most recent observation are adjusted according to the probability that they were generated from the samples of the last iteration and the system model with latest control inputs.
  • When the GNSS signal is not available, only samples based on the system model are used; when GNSS is available, both types of samples are used, which gives better performance overall and thus also leads to better performance during GNSS outages. Also, adding the samples from the GNSS observation leads to faster recovery to the true position after GNSS outages.
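A schematic sketch of the sampling strategy described above is given below for a one-dimensional state: most particles are propagated through the system model and weighted by the likelihood of the latest observation, while an additional set of particles is drawn around the latest observation and weighted by how probable they are under the system model given the previous particles. The densities, proportions, and noise levels are illustrative assumptions and not the specific models of this disclosure.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def mixture_pf_step(particles, control, observation, n_from_obs=50,
                    sys_std=0.5, obs_std=1.0):
    """One Mixture PF update for a scalar state x_k = x_{k-1} + control + noise."""
    # 1) Samples predicted from the system model, weighted by the observation.
    pred = particles + control + rng.normal(0.0, sys_std, size=particles.size)
    w_pred = gaussian(observation, pred, obs_std)

    # 2) Additional samples predicted from the most recent observation, weighted
    #    by how likely they are to have come from the previous particles
    #    propagated through the system model with the latest control input.
    obs_samples = observation + rng.normal(0.0, obs_std, size=n_from_obs)
    w_obs = np.array([gaussian(s, particles + control, sys_std).mean()
                      for s in obs_samples])

    allp = np.concatenate([pred, obs_samples])
    w = np.concatenate([w_pred, w_obs])
    w /= w.sum()
    # Resample back to the original particle count.
    idx = rng.choice(allp.size, size=particles.size, p=w)
    return allp[idx]

if __name__ == "__main__":
    particles = rng.normal(0.0, 2.0, size=500)
    for step in range(5):
        particles = mixture_pf_step(particles, control=1.0, observation=1.0 * (step + 1))
    print(particles.mean(), particles.std())
```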
  • Measurement Model With Loosely-Coupled Integration [00298] When loosely-coupled integration is used, position and velocity updates are obtained from the GNSS receiver.
  • Tightly-coupled integration takes advantage of the fact that, given the present satellite-rich GPS constellation as well as other GNSS constellations, it is unlikely that all the satellites will be lost in any canyon. Therefore, the tightly coupled scheme of integration uses information from the few available satellites. This is a major advantage over loosely coupled integration with INS, which fails to acquire any aid from GNSS and considers the situation of fewer than four satellites as an outage.
  • GNSS raw data is used and is integrated with the inertial sensors.
  • the GNSS raw data used in the present navigation module in this example are pseudoranges and Doppler shifts. From the measured Doppler for each visible satellite, the corresponding pseudorange rate can be calculated.
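A minimal sketch of that conversion follows; it assumes the common convention in which a positive Doppler shift corresponds to a closing (decreasing) range, so the pseudorange rate is the negative of the Doppler shift scaled by the carrier wavelength. The use of the GPS L1 carrier frequency is for illustration.

```python
C = 299792458.0      # speed of light, m/s
F_L1 = 1575.42e6     # GPS L1 carrier frequency, Hz

def pseudorange_rate_from_doppler(doppler_hz, carrier_freq=F_L1):
    """Pseudorange rate (m/s) from a measured Doppler shift (Hz)."""
    wavelength = C / carrier_freq
    return -wavelength * doppler_hz   # positive Doppler => range decreasing

if __name__ == "__main__":
    print(pseudorange_rate_from_doppler(1000.0))   # range closing at about 190 m/s
```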
  • the pseudoranges and pseudorange rates can be used as the measurement updates to update the position and velocity states of the vehicle.
  • the measurement model that relates these measurements to the position and velocity states is a nonlinear model.
  • the KF integration solutions linearize this model.
  • a pseudorange to a certain satellite is obtained by measuring the time it takes for the GPS signal to propagate from this satellite to the receiver and multiplying it by the speed of light.
  • the pseudorange measurement for the m-th satellite is ρ^m = c(t_r − t_t), where ρ^m is the pseudorange observation from the m-th satellite to the receiver (in meters), t_t is the transmit time, t_r is the receive time, and c is the speed of light (in meters/sec).
  • the satellite and receiver clocks are not synchronized and each of them has an offset from the GPS system time. Of the several errors in the pseudorange measurements, the most significant is the offset of the inexpensive clock used inside the receiver from the GPS system time.
  • the pseudorange measurement for the m-th satellite is given as follows: ρ^m = r^m + c·δt_r − c·δt_s + c·I^m + c·T^m + ε^m, where r^m is the true range between the receiver antenna at time t_r and the satellite antenna at time t_t (in meters), δt_r is the receiver clock offset (in seconds), δt_s is the satellite clock offset (in seconds), I^m is the ionospheric delay (in seconds), T^m is the tropospheric delay (in seconds), and ε^m is the error in range due to a combination of receiver noise and other errors such as multipath effects and orbit prediction errors (in meters).
  • the incoming frequency at the GPS receiver is not exactly the L1 or L2 frequency but is shifted from the original value sent by the satellite. This is called the Doppler shift and it is due to relative motion between the satellite and the receiver.
  • This time difference may be approximately in the range of 70-90 milliseconds, during which the Earth (and thus the ECEF frame) rotates, and this can cause a range error of about 10-20 meters.
  • the satellite position at transmission time has to be represented at the ECEF frame at the reception time not the transmission time.
  • the satellite position correction is done before the integration filter and then passed to the filter, thus the measurement model uses the corrected position reported in the ECEF frame at reception time.
  • the details of using Ephemeris data to calculate the satellites’ positions and velocities are known, and can subsequently be followed by the correction mentioned above.
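The correction mentioned above can be sketched as a rotation of the satellite ECEF position (computed at transmit time) about the Earth's spin axis by the angle the Earth rotates during the signal transit time. The Earth rotation rate constant and the transit-time approximation from the pseudorange are standard, but the function name and example numbers below are illustrative.

```python
import numpy as np

OMEGA_E = 7.2921151467e-5   # Earth rotation rate, rad/s
C = 299792458.0             # speed of light, m/s

def rotate_satellite_position(sat_pos_ecef, pseudorange_m):
    """Express the satellite position (computed at transmit time) in the ECEF
    frame at reception time by rotating about the z-axis through omega_e * dt."""
    dt = pseudorange_m / C                  # approximate signal transit time
    theta = OMEGA_E * dt
    rot = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                    [-np.sin(theta), np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return rot @ np.asarray(sat_pos_ecef)

if __name__ == "__main__":
    sat = np.array([15600e3, 7540e3, 20140e3])
    corrected = rotate_satellite_position(sat, pseudorange_m=22.0e6)
    print(np.linalg.norm(corrected - sat))   # position shift from neglecting Earth rotation
```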
  • the equation may be expressed as follows: ρ_c^m = sqrt[(x − x^m)² + (y − y^m)² + (z − z^m)²] + b_r + ε_c^m, where (x, y, z) is the receiver position, (x^m, y^m, z^m) is the m-th satellite position, and b_r = c·δt_r is the error in range (in meters) due to receiver clock bias.
  • This equation is nonlinear.
  • traditional techniques relying on KF linearize these equations about the pseudorange estimate obtained from the inertial sensors mechanization.
  • PF is suggested in this example to accommodate nonlinear models, thus there is no need for linearizing this equation.
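To illustrate how the nonlinear pseudorange equation can be used directly as a particle filter measurement update without linearization, the sketch below predicts the pseudorange for each particle (a candidate receiver position and clock bias) and weights the particles by the likelihood of the measured pseudoranges. The particle structure, the noise standard deviation, and all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def predict_pseudorange(receiver_pos, clock_bias_m, sat_pos):
    """Nonlinear pseudorange model: geometric range plus receiver clock bias (m)."""
    return np.linalg.norm(receiver_pos - sat_pos) + clock_bias_m

def weight_particles(positions, biases, sat_positions, measured, sigma=5.0):
    """Importance weights from the pseudorange likelihood, without linearization."""
    w = np.ones(positions.shape[0])
    for sat, rho in zip(sat_positions, measured):
        pred = np.linalg.norm(positions - sat, axis=1) + biases
        w *= np.exp(-0.5 * ((rho - pred) / sigma) ** 2)
    return w / w.sum()

if __name__ == "__main__":
    true_pos = np.array([1113194.0, -4845000.0, 3980000.0])
    true_bias = 150.0
    sats = [np.array([15600e3, 7540e3, 20140e3]),
            np.array([18760e3, 2750e3, 18610e3]),
            np.array([17610e3, 14630e3, 13480e3])]
    meas = [predict_pseudorange(true_pos, true_bias, s) + rng.normal(0, 5) for s in sats]
    particles = true_pos + rng.normal(0, 20, size=(200, 3))   # candidate positions
    biases = true_bias + rng.normal(0, 10, size=200)          # candidate clock biases
    w = weight_particles(particles, biases, sats, meas)
    print((w[:, None] * particles).sum(axis=0))               # weighted position estimate
```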
  • the state vector is [φ, λ, h, v^E, v^N, v^U, p, r, A]^T, where φ is the latitude of the vehicle, λ is the longitude, h is the altitude, v^E is the velocity along the East direction, v^N is the velocity along the North direction, v^U is the velocity along the Up vertical direction, p is the pitch angle, r is the roll angle, and A is the azimuth angle.
  • the techniques of this disclosure can also be used with a navigation solution that is further integrated with maps (such as street maps, indoor maps or models, or any other environment map or model in cases of applications that have such maps or models available) in addition to the different core use of map information discussed above, and a map matching or model matching routine.
  • Map matching or model matching can further enhance the navigation solution during the absolute navigational information (such as GNSS) degradation or interruption.
  • a sensor or a group of sensors that acquire information about the environment can be used such as, for example, Laser range finders, cameras and vision systems, or sonar systems.
  • the techniques of this disclosure can also be used with a navigation solution that uses various wireless communication systems that can also be used for positioning and navigation either as an additional aid (which will be more beneficial when GNSS is unavailable) or as a substitute for the GNSS information (e.g. for applications where GNSS is not applicable).
  • Examples of wireless communication systems used for positioning are those provided by cellular phone towers and signals, radio signals, digital television signals, WiFi, or WiMAX.
  • the wireless communication system used for positioning may use different techniques for modeling the errors in the ranging, angles, or signal strength from wireless signals, and may use different multipath mitigation techniques. All of the above-mentioned ideas, among others, are also applicable in a similar manner to other wireless positioning techniques based on wireless communication systems. [00349] It is further contemplated that the techniques of this disclosure can also be used with a navigation solution that utilizes aiding information from other moving devices. This aiding information can be used as an additional aid (which will be more beneficial when GNSS is unavailable) or as a substitute for the GNSS information (e.g., for applications where GNSS-based positioning is not applicable).
  • aiding information from other devices may rely on wireless communication systems between different devices.
  • the underlying idea is that the devices that have a better positioning or navigation solution (for example, having GNSS with good availability, accuracy, or other aspects indicative of GNSS quality) can help the devices with degraded or unavailable GNSS to get an improved positioning or navigation solution. This help relies on the known position of the aiding device(s) and on the wireless communication system for positioning the device(s) with degraded or unavailable GNSS.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)

Abstract

Techniques are disclosed to build a map for an area around a route traversed by a moving platform. An integrated navigation solution based at least in part on obtained motion sensor data for a device within the moving platform is used to build an online map during a first instance of time for the area using perception sensor data, such as, for example, radar measurements and/or optical samples. The integrated navigation solution may then be revised in a second instance of time with a nonlinear state estimation technique that uses a nonlinear measurement model for perception sensor data, wherein integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly-coupled. Generating the revised integrated navigation solution in this tightly coupled technique includes using the motion sensor data with the nonlinear state estimation technique and integrating the perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information.

Description

REFERENCE NO: TPI-079PCT US PATENT APPLICATION METHOD AND SYSTEM FOR MAP BUILDING USING PERCEPTION AND MOTION SENSORS CROSS-REFERENCE TO RELATED APPLICATIONS [001] This application claims priority from and benefit of U.S. Provisional Patent Application Serial No.63/570,752, filed March 27, 2024, and U.S. Patent Application Serial No.19/091,403, filed March 26, 2025, both of which are entitled “METHOD AND SYSTEM FOR POSITIONING USING PERCEPTION AND MOTION SENSORS,” are assigned to the assignee hereof, and are incorporated by reference in their entirety. FIELD OF THE PRESENT DISCLOSURE [002] This disclosure generally relates to providing a navigation solution for a moving platform and more specifically to such techniques that employ perception and motion sensor information. BACKGROUND [003] The field of autonomous navigation has undergone significant evolution over the past few decades, driven by advancements in sensor technology, computational power, and algorithmic sophistication. At the forefront of potential applications are autonomous road vehicles. The benefits are multifold, with significant improvements to safety being among the most important. Unlike human drivers, automated systems are not affected by fatigue or distractions, thereby having a much faster reaction time to conditions on the road which can dramatically reduce accidents and save lives. In addition, passengers would be able to enjoy more free time during their commute, during which they can work, socialize, or make use of in-vehicle entertainment systems. Vehicle ownership is expected to decrease as it is replaced with ride sharing, which will decrease traffic and congestion in urban areas. [004] In recent years, the emergence of autonomous vehicles and advanced robotics has further accelerated the demand for sophisticated navigation systems. These applications require not just pinpoint location accuracy but also the ability to understand and interact with the surrounding environment in real time. This necessity has led to the REFERENCE NO: TPI-079PCT US PATENT APPLICATION integration of diverse sensor technologies, each serving a unique function and compensating for the limitations of others. Traditional navigation systems, primarily reliant on basic Global Navigation Satellite System (GNSS, including specific implementations including without limitation the Global Positioning System - GPS, the Global Navigation Satellite System - GLONASS, Galileo and/or Beidou) technology and simple sensor arrays, have gradually given way to more complex, integrated systems. However, the modern systems are designed to provide higher accuracy, reliability, and adaptability in various environments, ranging from urban landscapes to unstructured terrains. The modern autonomous driving systems require a complete knowledge about the surrounding environment. This helps the autonomous system to understand the different aspects and details for the road. It should be capable of acquiring and processing information in the real-time domain. This includes close and nearby vehicles, traffic, road speed limits, low-speed zones, road conditions, road crossing areas, ... etc. To achieve that, the system requires advanced sensors and technology. A wide variety of applications have emerged that stand to benefit from autonomy, including trucks for transportation, ride sharing, and passenger vehicles. Transportation and delivery can benefit from convoying, lowering driver-based hours- of-service restrictions, and greater efficiency. 
In the case of passenger vehicles, autonomous driving can open more time for passengers to be productive during commutes, socialize, or use in-vehicle entertainment systems. Critically, self-driving technology could dramatically reduce the occurrence of automotive accidents, thereby saving lives. A fundamental problem in enabling fully autonomous platforms is the requirement for a ubiquitous, accurate, precise, and reliable navigation system. [005] There are many available sensors to aid in the positioning and navigation problem, each with their own inherent strengths and weaknesses. So, different systems and sensors are required to overcome the limitation of the standalone-based systems in such cases to achieve reliable navigation in all environments and conditions. For example, traditional systems used for navigation estimation include Inertial Measurement Units (IMU) and GNSS implementations as discussed above. IMU-based systems, such as inertial navigation systems (INS), provide relative pose estimation accurate in short times but due to the process of mechanization, standalone IMU-based systems accumulate errors in the states exponentially with time. On the other hand, the position estimated by the GNSS receiver is absolute and does not drift over time. REFERENCE NO: TPI-079PCT US PATENT APPLICATION However, GNSS signals are sometimes completely blocked or affected by severe multipath. Due to the complementary error characteristics of motion sensors and GNSS systems, a traditional approach to accurately estimate the pose of the vehicle is by using sensor fusion algorithms to integrate IMU and GNSS signals. The performance of the GNSS system can also be enhanced by using Differential GNSS stations that can broadcast ionospheric and tropospheric errors to adjacent GNSS receivers. [006] Additional sensors and systems may help overcome limitations noted above of the standalone-based systems in such cases to achieve reliable navigation in all environments and conditions. Notably, perception sensors can provide rich information when paired with maps, including high definition (HD) maps. The fusion of such sensors with INS can allow for position updates from object detection and map matching. Some of the most common sensors are radars, optical cameras, and light detection and ranging (lidar), but infrared (IR) cameras, ultrasonic detectors and others may also be employed. As one illustration, a perception sensor commonly found in cars today is radar. Key benefits to radar include being robust to adverse weather conditions, insensitive to lighting variations, and providing long and accurate range measurements. They may also be packaged behind optically opaque vehicle paneling, thereby offering industrial designers a degree of flexibility that is not possible with other perception sensors. Although automotive imaging radars have a lower resolution than lidar, recent advances are narrowing the gap. The current generation of state-of- the-art automotive imaging radars provide high-rate information on multiple dynamic targets in an extremely cluttered scene in a 4D domain, consisting of range, doppler, azimuth, and elevation measurements. Despite these advantages, radar still presents a challenge for HD map matching localization techniques because of the sparseness of the data and a lower angular resolution than lidar. 
Furthermore, the integration of a multi- radar configuration effectively expands the radar field of view, providing a wider or even up to 360-degree horizontal coverage while maintaining a high scan rate. The increased coverage and more numerous detections can aid the HD map matching process achieve a better result. Such a system is shown to be effective in enabling an accurate and reliable navigation system for both vehicle and robotic platforms in GNSS degraded or denied environments. Beyond its use for localization, a multi-radar configuration can be an effective tool for imaging a scene, particularly if recent sensors are used with higher angular and range resolutions. REFERENCE NO: TPI-079PCT US PATENT APPLICATION [007] Another perception sensor is vision which mainly consists of an array of cameras or other suitable optical sensor and image processing algorithms, characterized by operating substantially within the visible wavelength spectrum. It enables the system to perceive and interpret visual data, facilitating complex tasks such as object recognition, terrain analysis, and even decision-making based on visual inputs. The integration of vision systems, particularly the use of cameras, in navigation systems marks a significant technological evolution in the realm of autonomous and assisted navigation. These systems employ cameras along with advanced image processing algorithms to accurately perceive and interpret the environment, a development that is fundamental in enhancing the capabilities of various automated systems. The data from vision sensors is crucial for numerous navigational tasks, including detecting obstacles, maintaining lane discipline, and recognizing traffic signs. Cameras in these vehicles are designed to capture a wide field of view and often work in tandem with other sensors like lidar, a perception sensor, and radar to create a comprehensive understanding of the surrounding environment. The visual information gathered enhances not only the safety of autonomous vehicles but also contributes to a smoother and more efficient driving experience. Despite the transformative potential of vision systems in navigation, several challenges persist including dealing with variable lighting conditions, which can affect the accuracy of camera-based systems, and the complexity of processing and interpreting dynamic, unstructured environments. [008] Imaging radars stand out for their robust performance in adverse weather conditions. Unlike cameras, they are not hindered by fog, rain, or snow, making them reliable in a wide range of environmental settings. Their ability to detect objects at long ranges is another significant advantage, particularly beneficial in early warning systems and long-distance navigation tasks. Furthermore, the penetrative capability of radar waves allows them to detect objects that are not visually apparent, such as obstacles hidden by foliage or thin walls. However, imaging radars do have drawbacks. They generally offer lower spatial resolution compared to cameras, making it challenging to identify small or detailed features. [009] On the other hand, cameras provide high-resolution imagery that is more intuitive to interpret, making them ideal for applications requiring detailed visual information. They excel in tasks such as facial recognition, reading signs, and detailed REFERENCE NO: TPI-079PCT US PATENT APPLICATION environmental mapping. 
Cameras are also generally smaller and consume less power than radar systems, making them suitable for use in smaller devices and platforms. However, cameras have their limitations as their effectiveness is reduced in poor weather conditions and low-light scenarios, which can be a major drawback for outdoor navigation systems. Cameras also have a limited range compared to radars, restricting their use in long-range detection scenarios. [0010] Imaging radars and cameras, both integral in modern navigation and sensing systems, have distinct characteristics that define their advantages and limitations. Understanding these differences is key in determining how they can complement each other in various applications. [0011] In applications that demand comprehensive environmental awareness and navigation, the combination of imaging radars and cameras can be highly effective. While radars provide reliable long-range detection and perform well in adverse weather conditions, cameras offer high-resolution imagery for detailed environmental analysis. By integrating these technologies, systems can leverage the strengths of both: radars can be used for initial detection and rough estimation of an object's location and movement, while cameras can provide detailed visual information for closer inspection and identification. This complementary use allows for more robust and versatile navigation and sensing solutions, applicable in a variety of fields including autonomous vehicles, aerial surveillance, and maritime navigation. [0012] HD maps, including different formats like occupancy grid maps and point clouds, are used as one of the main sources to enable the solution from different perception sensors such as lidar, camera, or radar. Conventionally, the navigation system can use 2D/3D perception-based maps generated through crowdsourcing techniques for mapped areas from data collected over time using an integrated system supplied with perception sensors. This map can then be used in subsequent runs as a global reference map for localization purposes. [0013] However, despite these advancements, current navigation systems often face challenges in scenarios where GNSS signals are weak or obstructed in complex urban environments with numerous dynamic obstacles, and in conditions requiring high-level decision-making based on limited data. These challenges underscore the need for more REFERENCE NO: TPI-079PCT US PATENT APPLICATION integrated, intelligent, and versatile navigation systems. Further, there is a need for a technique to give positioning information using information acquired during a current navigation session, such as through use of a map generated in run to reduce the need for other maps that need to be built offline or with measurements from different platforms. According to the techniques of this disclosure, measurements from perception sensors may be used with information from motion sensors to help build a map online during a given navigation session to aid positioning as described in the following materials. SUMMARY [0014] This disclosure includes a method for providing an integrated navigation solution in real-time for a device within a moving platform. The method may involve obtaining motion sensor data from a sensor assembly of the device and obtaining perception sensor data from at least one perception sensor for the platform. An integrated navigation solution for the platform may be generated based at least in part on the obtained motion sensor data. 
An online map for an area encompassing the platform in a first instance of time may be built using perception sensor data based at least in part on the integrated navigation solution during the first instance of time. The integrated navigation solution may then be revised in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique. The nonlinear state estimation technique may use a prediction phase involving a system model to propagate predictions about a state of the platform and an update phase involving at least one measurement model relating measurements to the state is used to update the state of the platform, wherein the nonlinear state estimation technique comprises using a nonlinear measurement model for perception sensor data, such that integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly-coupled. Revising the integrated navigation solution may involve using the obtained motion sensor data in the nonlinear state estimation technique and integrating perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information. The revised integrated navigation solution may then be provided when available and the integrated navigation solution may be provided when the revised integrated navigation solution is not available. REFERENCE NO: TPI-079PCT US PATENT APPLICATION [0015] This disclosure also includes a system for providing an integrated navigation solution in real-time for a device within a moving platform. The system may include a device having a sensor assembly configured to output motion sensor data, at least one perception sensor providing perception sensor data and at least one processor, coupled to receive the motion sensor data and the perception sensor data. The at least one processor may be operative to generate an integrated navigation solution for the platform, build an online map for an area encompassing the platform in a first instance of time using perception sensor data based at least in part on the integrated navigation solution during the first instance of time, revise the integrated navigation solution in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique, wherein a prediction phase involving a system model is used to propagate predictions about a state of the platform and an update phase involving at least one measurement model relating measurements to the state is used to update the state of the platform, wherein the nonlinear state estimation technique comprises using a nonlinear measurement model for perception sensor data, wherein integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly-coupled, and wherein the revising comprises: i) using the received motion sensor data in the nonlinear state estimation technique; and ii) integrating perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information; and provide the revised integrated navigation solution when available and the integrated navigation solution when the revised integrated solution is not available. 
BRIEF DESCRIPTION OF THE DRAWINGS [0016] FIG.1 is schematic diagram of a device for providing a navigation solution by integrating perception sensor data with motion sensor data according to an embodiment. [0017] FIG.2 is schematic diagram of another device architecture for providing a navigation solution by integrating perception sensor data with motion sensor data according to an embodiment. [0018] FIG.3 is a flowchart showing a routine for providing an integrated REFERENCE NO: TPI-079PCT US PATENT APPLICATION navigation solution with perception sensor data and motion sensor data according to an embodiment. [0019] FIG.4 is a flowchart showing further details of the routine of FIG.3 for providing an integrated navigation solution with perception sensor data and motion sensor data according to an embodiment. [0020] FIG.5 is a schematic overview of a system architecture for using perception sensor data and motion sensor data according to an embodiment. [0021] FIG.6 is a schematic representation of an exemplary system architecture for providing an integrated navigation solution with perception sensor data and motion sensor data using an online map according to an embodiment. [0022] FIG.7 is a flowchart showing a routine for assessing absolute navigational information when building an online map according to an embodiment. [0023] FIG.8 is a flowchart showing a routine for retrieving and storing an online map according to an embodiment. [0024] FIG.9 is a schematic representation of an exemplary system architecture for providing an integrated navigation solution with perception sensor data and motion sensor data using map retrieval according to an embodiment. [0025] FIG.10 is a schematic representation of an exemplary system architecture for determining depth from optical samples according to an embodiment. [0026] FIGs.11 and 12 are graphic representations of scene reconstruction according to an embodiment. [0027] FIG.13 is a schematic representation of a convolutional neural network suitable for estimating depth with optical samples according to an embodiment. [0028] FIG.14 is a schematic representation of a recurrent neural network suitable for estimating depth with optical samples according to an embodiment. [0029] FIG.15 is a schematic representation of use of a perception range-based measurement model according to an embodiment. REFERENCE NO: TPI-079PCT US PATENT APPLICATION [0030] FIG.16 is a schematic representation of an adaptive measurement model according to an embodiment. [0031] FIG.17 is a schematic representation of use of an perception nearest object likelihood measurement model according to an embodiment. [0032] FIG.18 is a schematic representation of use of a perception map matching measurement model according to an embodiment. [0033] FIG.19 is a schematic representation of a closed-form measurement model according to an embodiment. [0034] FIG.20 is a schematic representation of a closed-form measurement model employing two types of perception sensor data according to an embodiment. [0035] FIG.21 is a schematic diagram showing an exemplary system model that receives input from an additional state estimation technique that integrates the motion sensor data according to an embodiment. [0036] FIG.22 is a schematic representation of relative pose estimation using perception sensor data according to an embodiment. 
DETAILED DESCRIPTION [0037] At the outset, it is to be understood that this disclosure is not limited to particularly exemplified materials, architectures, routines, methods or structures as such may vary. Thus, although a number of such options, similar or equivalent to those described herein, can be used in the practice or embodiments of this disclosure, the preferred materials and methods are described herein. [0038] It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments of this disclosure only and is not intended to be limiting. [0039] The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary embodiments of the present disclosure and is not intended to represent the only exemplary embodiments in which REFERENCE NO: TPI-079PCT US PATENT APPLICATION the present disclosure can be practiced. The term “exemplary” used throughout this description means “serving as an example, instance, or illustration,” and should not necessarily be construed as preferred or advantageous over other exemplary embodiments. The detailed description includes specific details for the purpose of providing a thorough understanding of the exemplary embodiments of the specification. It will be apparent to those skilled in the art that the exemplary embodiments of the specification may be practiced without these specific details. In some instances, well known structures and devices are shown in block diagram form in order to avoid obscuring the novelty of the exemplary embodiments presented herein. [0040] For purposes of convenience and clarity only, directional terms, such as top, bottom, left, right, up, down, over, above, below, beneath, rear, back, and front, may be used with respect to the accompanying drawings or chip embodiments. These and similar directional terms should not be construed to limit the scope of the disclosure in any manner. [0041] In this specification and in the claims, it will be understood that when an element is referred to as being “connected to” or “coupled to” another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected to” or “directly coupled to” another element, there are no intervening elements present. [0042] Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self- consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. [0043] It should be borne in mind, however, that all of these and similar terms are to REFERENCE NO: TPI-079PCT US PATENT APPLICATION be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. 
Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present application, discussions utilizing the terms such as “accessing,” “receiving,” “sending,” “using,” “selecting,” “determining,” “normalizing,” “multiplying,” “averaging,” “monitoring,” “comparing,” “applying,” “updating,” “measuring,” “deriving” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. [0044] Embodiments described herein may be discussed in the general context of processor-executable instructions residing on some form of non-transitory processor- readable medium, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments. [0045] In the figures, a single block may be described as performing a function or functions; however, in actual practice, the function or functions performed by that block may be performed in a single component or across multiple components, and/or may be performed using hardware, using software, or using a combination of hardware and software. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Also, the exemplary wireless communications devices may include components other than those shown, including well-known components such as a processor, memory and the like. REFERENCE NO: TPI-079PCT US PATENT APPLICATION [0046] The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof, unless specifically described as being implemented in a specific manner. Any features described as modules or components may also be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a non-transitory processor-readable storage medium comprising instructions that, when executed, performs one or more of the methods described above. The non-transitory processor-readable data storage medium may form part of a computer program product, which may include packaging materials. 
[0047] The non-transitory processor-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, other known storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a processor-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer or other processor. For example, a carrier wave may be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. [0048] The various illustrative logical blocks, modules, circuits and instructions described in connection with the embodiments disclosed herein may be executed by one or more processors, such as one or more motion processing units (MPUs), digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), application specific instruction set processors (ASIPs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. The term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured as described herein. REFERENCE NO: TPI-079PCT US PATENT APPLICATION Also, the techniques could be fully implemented in one or more circuits or logic elements. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of an MPU and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with an MPU core, or any other such configuration. [0049] Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one having ordinary skill in the art to which the disclosure pertains. [0050] Finally, as used in this specification and the appended claims, the singular forms “a, “an” and “the” include plural referents unless the content clearly dictates otherwise. [0051] As noted above, positioning techniques such as those of this disclosure may be used in conjunction with map information, such as to improve both the reliability and positioning accuracy of the navigation system. Examples of algorithms using map information to aid a navigation system include geometric, topological, probabilistic and other advanced techniques. The techniques of this disclosure are directed to providing a navigation solution for a device within a moving platform by employing perception sensor data with motion sensor data. For example, perception sensor data can be directly integrated with motion sensor data when generating a navigation solution for a moving platform. 
Suitable illustrations of these techniques may be found in commonly- owned patents, U. S. Patent 11,422,253, which involves the use of radar measurements, and U.S. Patent No. 11,875,519, which involves the use of optical samples, both of which are incorporated by reference in their entirety. However, these and other techniques that employ data from perceptions sensors typically rely on the availability of pre-built maps for the area encompassing the platform. As will be appreciated, such pre-built map information may not exist for certain areas and, even when available, obtaining the pre-built map information requires time and communication bandwidth. Accordingly, the techniques of this disclosure are directed building an online map during a current run of a navigation session using at least one type of perception sensor and then utilizing the online map to directly integrate perception sensor data of either REFERENCE NO: TPI-079PCT US PATENT APPLICATION the same or different type with motion sensor data to generate an integrated navigation solution. For example, in some embodiments, the online map is built with one type of perception sensor data and the same type of perception sensor data is directly integrated with the motion sensor data with the online map. Alternatively, the online map is built with one type of perception sensor data and another type of perception sensor data is directly integrated with the motion sensor data with the online map. Still further, multiple types of perception sensor data may be used for either or both stages of building the online map and then utilizing the online map when directly integrating the motion sensor data. Without limitation, one possible embodiment is building the online map with radar measurements and then directly integrating optical samples with the motion sensor data. Conversely, another possible embodiment is building the online map with optical samples and then directly integrating radar measurements with the motion sensor data. Again, other types of perception sensors such as lidar, IR cameras, or ultrasonic sensors may be used in addition or in the alternative for either stage. As will be appreciated, it may be desirable to build the online map using one type of perception sensor based on a characteristic such as range and then integrate another type of perception sensor data with the motion sensor data based on a different characteristic, such as resolution. [0052] Typically, the platform is a wheel-based vehicle or other similar vessel intended for use on land, but may also be marine or airborne. As such, the platform may also be referred to as the vehicle. However, the platform may also be a pedestrian or a user undergoing on foot motion. As will be appreciated, motion sensor data includes information from accelerometers, gyroscopes, or an IMU. Inertial sensors are self-contained sensors that use gyroscopes to measure the rate of rotation/angle, and accelerometers to measure the specific force (from which acceleration is obtained). Inertial sensors data may be used in an INS, which is a non-reference based relative positioning system. Using initial estimates of position, velocity and orientation angles of the moving platform as a starting point, the INS readings can subsequently be integrated over time and used to determine the current position, velocity and orientation angles of the platform. 
Typically, measurements are integrated once for gyroscopes to yield orientation angles and twice for accelerometers to yield position of the platform incorporating the orientation angles. Thus, the measurements of gyroscopes will undergo a triple integration operation during the process of yielding position. Inertial sensors alone, however, are unsuitable for accurate positioning because the required integration operations of the data result in positioning solutions that drift with time, thereby leading to an unbounded accumulation of errors. Integrating absolute navigational information, such as from GNSS, with the motion sensor data can help mitigate such errors. [0053] The device is contained within the platform (which, as noted, may be a vehicle or vessel of any type and, in some applications, may be a person) and may have one or more sources of navigational or positional information. In some embodiments, the device is strapped or tethered in a fixed orientation with respect to the platform. The device is "strapped," "strapped down," or "tethered" to the platform when it is physically connected to the platform in a fixed manner that does not change with time during navigation. In the case of strapped devices, the relative position and orientation between the device and the platform do not change with time during navigation. Notably, in strapped configurations, it is assumed that the mounting of the device to the platform is in a known orientation. Nevertheless, in some circumstances, there may be a deviation in the intended mounting orientation, leading to a misalignment between the device and the platform. In one aspect, the techniques of this disclosure may be employed to characterize and correct for such a mounting misalignment. [0054] In other embodiments, the device is "non-strapped" or "non-tethered" when the device has some mobility relative to the platform (or within the platform), meaning that the relative position or relative orientation between the device and platform may change with time during navigation. Under these conditions, the relative orientation of the device with respect to the platform may vary, which may also be termed misalignment. As with the mounting misalignment discussed above, this varying misalignment between the frame of the device and the frame of the platform may be determined using the techniques of this disclosure and correspondingly compensated for. The device may be "non-strapped" in two scenarios: where the mobility of the device within the platform is "unconstrained", or where the mobility of the device within the platform is "constrained." One example of "unconstrained" mobility may be a person moving on foot and having a portable device such as a smartphone in their hand for texting or viewing purposes (the hand may also move), at their ear, in hand and dangling/swinging, in a belt clip, in a pocket, among others, where such use cases can change with time and even each use case can have a changing orientation with respect to the user.
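Purely as an illustration of the inertial integration and drift behavior described in paragraph [0052] above, the following simplified sketch performs planar dead reckoning in which a yaw-rate gyroscope is integrated once for heading and a forward accelerometer twice for position. The 100 Hz rate and the constant bias values are assumptions chosen only to make the unbounded error growth visible; this sketch is not the mechanization of this disclosure.

```python
import math

def dead_reckon_2d(gyro_z, accel_fwd, dt, pos=(0.0, 0.0), heading=0.0, vel=0.0):
    """Minimal planar dead reckoning: integrate a yaw-rate gyro once for
    heading and a forward accelerometer twice (rate -> speed -> position)."""
    x, y = pos
    for w, a in zip(gyro_z, accel_fwd):
        heading += w * dt                    # first integration: rate to heading
        vel += a * dt                        # first integration: acceleration to speed
        x += vel * math.cos(heading) * dt    # second integration: speed to position
        y += vel * math.sin(heading) * dt
    return (x, y), heading, vel

# Stationary platform, but with a 0.01 rad/s gyro bias and a 0.02 m/s^2
# accelerometer bias, sampled at 100 Hz for 10 seconds.
position, heading, speed = dead_reckon_2d([0.01] * 1000, [0.02] * 1000, dt=0.01)
print(position, heading, speed)   # the errors keep growing the longer the run lasts
```

Even with the platform stationary, the small constant biases produce a heading error that grows linearly and a position error that grows roughly quadratically with time, which is why the motion sensor data is integrated with absolute or perception-based updates as described herein.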
Another example where the mobility of the device within the platform is “unconstrained” is a person in a vessel or vehicle, where the person has a portable device such as a smartphone in the their hand for texting or viewing purposes (hand may also move), at their ear, in a belt clip, in a pocket, among others, where such use cases can change with time and even each use case can have a changing orientation with respect to the user. An example of “constrained” mobility may be when the user enters a vehicle and puts the portable device (such as smartphone) in a rotation-capable holder or cradle. In this example, the user may rotate the holder or cradle at any time during navigation and thus may change the orientation of the device with respect to the platform or vehicle. Thus, when non-strapped, the mobility of the device may be constrained or unconstrained within the platform and may be moved or tilted to any orientation within the platform and the techniques of this disclosure may still be applied under all of these conditions. As such, some embodiments described below include a portable, hand-held device that can be moved in space by a user and its motion, location and/or orientation in space therefore sensed. The techniques of this disclosure can work with any type of portable device as desired, including a smartphone or the other exemplary devices noted below. It will be appreciated that such devices are often carried or associated with a user and thus may benefit from providing navigation solutions using a variety of inputs. For example, such a handheld device may be a mobile phone (e.g., cellular phone, a phone running on a local network, or any other telephone handset), tablet, personal digital assistant (PDA), video game player, video game controller, navigation device, wearable device (e.g., glasses, watch, belt clip), fitness tracker, virtual or augmented reality equipment, mobile internet device (MID), personal navigation device (PND), digital still camera, digital video camera, binoculars, telephoto lens, portable music, video or media player, remote control, or other handheld device, or a combination of one or more of these devices. However, the techniques of this disclosure may also be applied to other types of devices that are not handheld, including devices integrated with autonomous or piloted vehicles whether land-based, aerial, or underwater vehicles, or equipment that may be used with such vehicles. As an illustration only and without limitation, the platform may be a drone, also known as an unmanned aerial vehicle (UAV). [0055] To help illustrate aspects of this disclosure, features of a suitable device 100 REFERENCE NO: TPI-079PCT US PATENT APPLICATION are depicted in FIG.1 with high level schematic blocks. As will be appreciated, device 100 may be implemented as a device or apparatus, such a strapped, non-strapped, tethered, or non-tethered device as described above, which when non-strapped, the mobility of the device may be constrained or unconstrained within the platform and may be moved or tilted to any orientation within the platform. As shown, device 100 includes a processor 102, which may be one or more microprocessors, central processing units (CPUs), or other processors to run software programs, which may be stored in memory 104, associated with the functions of device 100. 
Multiple layers of software can be provided in memory 104, which may be any combination of computer readable medium such as electronic memory or other storage medium such as hard disk, optical disk, etc., for use with the processor 102. For example, an operating system layer can be provided for device 100 to control and manage system resources in real time, enable functions of application software and other layers, and interface application programs with other software and functions of device 100. Similarly, different software application programs such as menu navigation software, games, camera function control, navigation software, communications software, such as telephony or wireless local area network (WLAN) software, or any of a wide variety of other software and functional interfaces can be provided. In some embodiments, multiple different applications can be provided on a single device 100, and in some of those embodiments, multiple applications can run simultaneously. [0056] Device 100 includes at least one sensor assembly 106 for providing motion sensor data representing motion of device 100 in space, including inertial sensors such as an accelerometer and a gyroscope, other motion sensors including a magnetometer, a pressure sensor or others may be used in addition. Motion sensors represent a self- contained source of navigational information. Depending on the configuration, sensor assembly 106 measures one or more axes of rotation and/or one or more axes of acceleration of the device. In one embodiment, sensor assembly 106 may include inertial rotational motion sensors or inertial linear motion sensors. For example, the rotational motion sensors may be gyroscopes to measure angular velocity along one or more orthogonal axes and the linear motion sensors may be accelerometers to measure linear acceleration along one or more orthogonal axes. In one aspect, three gyroscopes and three accelerometers may be employed, such that a sensor fusion operation performed by processor 102, or other processing resources of device 100, combines data REFERENCE NO: TPI-079PCT US PATENT APPLICATION from sensor assembly 106 to provide a six axis determination of motion or six degrees of freedom (6DOF). Still further, sensor assembly 106 may include a magnetometer measuring along three orthogonal axes and output data to be fused with the gyroscope and accelerometer inertial sensor data to provide a nine axis determination of motion. Likewise, sensor assembly 106 may also include a pressure sensor to provide an altitude determination that may be fused with the other sensor data to provide a ten axis determination of motion.. As desired, sensor assembly 106 may be implemented using Micro Electro Mechanical System (MEMS), allowing integration into a single small package. [0057] Device 100 also implements at least one perception sensor providing perception sensor data 112, including one or more of an optical camera, a thermal camera, an infra-red imaging sensor, a light detection and ranging (LiDAR or lidar) system, a radar system, an ultrasonic sensor or other suitable sensor that records images or samples to help classify objects detected in the surrounding environment. As will be discussed in detail below, the techniques of this disclosure involve integrating perception sensor data 112 with the motion sensor data provided by sensor assembly 106 (or other sensors, such as external sensor 108) to provide the navigation solution. 
Device 100 obtains perception sensor data 112 from any perception sensor such as those indicated above, which may be integrated with device 100, may be associated or connected with device 100, may be part of the platform or may be implemented in any other desired manner. [0058] Still further, device 100 may also employ external sensor 108. As used herein, “external” means a sensor that is not integrated with sensor assembly 106 and may be remote or local to device 100. Also alternatively or in addition, sensor assembly 106 and/or external sensor 108 may be configured to measure one or more other aspects about the environment surrounding device 100. This is optional and not required in all embodiments. For example, a pressure sensor and/or a magnetometer may be used to refine motion determinations. Although described in the context of one or more sensors being MEMS based, the techniques of this disclosure may be applied to any sensor design or implementation. [0059] In the embodiment shown, processor 102, memory 104, sensor assembly 106, and other components of device 100 may be coupled through bus 110, which may REFERENCE NO: TPI-079PCT US PATENT APPLICATION be any suitable bus or interface, such as a peripheral component interconnect express (PCIe) bus, a universal serial bus (USB), a universal asynchronous receiver/transmitter (UART) serial bus, a suitable advanced microcontroller bus architecture (AMBA) interface, an Inter-Integrated Circuit (I2C) bus, a serial digital input output (SDIO) bus, a serial peripheral interface (SPI) or other equivalent. Depending on the architecture, different bus configurations may be employed as desired. For example, additional buses may be used to couple the various components of device 100, such as by using a dedicated bus between processor 102 and memory 104. [0060] Algorithms, routines or other instructions for processing sensor data, may be employed by integration module 114 to perform this any of the operations associated with the techniques of this disclosure. In one aspect, an integrated navigation solution based on the motion sensor data and absolute navigational information may be output by integration module 114. As used herein, a navigation solution comprises at least position and may also include attitude (or orientation) and/or velocity. Determining the navigation solution may involve sensor fusion or similar operations performed by the processor 102, which may be using the memory 104, or any combination of other processing resources. [0061] Correspondingly, device 100 also has a source of absolute navigational information 116, such as a Global Navigation Satellite System (GNSS) receiver, including without limitation the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), Galileo and/or Beidou, as well as WiFiTM positioning, cellular tower positioning, BluetoothTM positioning beacons or other similar methods when deriving a navigation solution. Integration module 114 may also be configured to use information from a wireless communication protocol to provide a navigation solution determination using signal trilateration. Any suitable protocol, including cellular-based and wireless local area network (WLAN) technologies such as Universal Terrestrial Radio Access (UTRA), Code Division Multiple Access (CDMA) networks, Global System for Mobile Communications (GSM), the Institute of Electrical and Electronics Engineers (IEEE) 802.16 (WiMAX), Long Term Evolution (LTE), IEEE 802.11 (WiFiTM) and others may be employed. 
The source of absolute navigational information represents a “reference-based” system that depend upon external sources of information, as opposed to self-contained navigational information REFERENCE NO: TPI-079PCT US PATENT APPLICATION that is provided by self-contained and/or “non-reference based” systems within a device/platform, such as sensor assembly 106 as noted above. [0062] In some embodiments, device 100 may include communications module 118 for any suitable purpose, including for transmitting map building derived as the platform traverses an area. Communications module 118 may employ a Wireless Local Area Network (WLAN) conforming to Institute for Electrical and Electronic Engineers (IEEE) 802.11 protocols, featuring multiple transmit and receive chains to provide increased bandwidth and achieve greater throughput. For example, the 802.11ad (WiGIGTM) standard includes the capability for devices to communicate in the 60 GHz frequency band over four, 2.16 GHz-wide channels, delivering data rates of up to 7 Gbps. Other standards may also involve the use of multiple channels operating in other frequency bands, such as the 5 GHz band, or other systems including cellular-based and WLAN technologies such as Universal Terrestrial Radio Access (UTRA), Code Division Multiple Access (CDMA) networks, Global System for Mobile Communications (GSM), IEEE 802.16 (WiMAX), Long Term Evolution (LTE), other transmission control protocol, internet protocol (TCP/IP) packet-based communications, or the like may be used. In some embodiments, multiple communication systems may be employed to leverage different capabilities. Typically, communications involving higher bandwidths may be associated with greater power consumption, such that other channels may utilize a lower power communication protocol such as BLUETOOTH®, ZigBee®, ANT or the like. Further, a wired connection may also be employed. Generally, communication may be direct or indirect, such as through one or multiple interconnected networks. As will be appreciated, a variety of systems, components, and network configurations, topologies and infrastructures, such as client/server, peer-to- peer, or hybrid architectures, may be employed to support distributed computing environments. For example, computing systems can be connected together by wired or wireless systems, by local networks or widely distributed networks. Currently, many networks are coupled to the Internet, which provides an infrastructure for widely distributed computing and encompasses many different networks, though any network infrastructure can be used for exemplary communications made incident to the techniques as described in various embodiments. [0063] As will be appreciated, processor 102 and/or other processing resources of REFERENCE NO: TPI-079PCT US PATENT APPLICATION device 100 may be one or more microprocessors, central processing units (CPUs), or other processors which run software programs for device 100 or for other applications related to the functionality of device 100. For example, different software application programs such as menu navigation software, games, camera function control, navigation software, and phone or a wide variety of other software and functional interfaces can be provided. In some embodiments, multiple different applications can be provided on a single device 100, and in some of those embodiments, multiple applications can run simultaneously on the device 100. 
Multiple layers of software can be provided on a computer readable medium such as electronic memory or other storage medium such as hard disk, optical disk, flash drive, etc., for use with processor 102. For example, an operating system layer can be provided for device 100 to control and manage system resources in real time, enable functions of application software and other layers, and interface application programs with other software and functions of device 100. In some embodiments, one or more motion algorithm layers may provide motion algorithms for lower-level processing of raw sensor data provided from internal or external sensors. Further, a sensor device driver layer may provide a software interface to the hardware sensors of device 100. Some or all of these layers can be provided in memory 104 for access by processor 102 or in any other suitable architecture. Embodiments of this disclosure may feature any desired division of processing between processor 102 and other processing resources, as appropriate for the applications and/or hardware being employed. Aspects implemented in software may include, but are not limited to, application software, firmware, resident software, microcode, etc., and may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system, such as processor 102, a dedicated processor or any other processing resources of device 100. [0064] As another illustration of aspects of this disclosure, features of a different device architecture are depicted in FIG. 2 with high level schematic blocks in the context of device 200. Here, device 200 includes a host processor 202 and memory 204 similar to the above embodiment. Device 200 includes at least one sensor assembly for providing motion sensor data, as shown here in the form of integrated motion processing unit (MPU®) 206 or any other sensor processing unit (SPU), featuring sensor processor 208, memory 210 and internal sensor 212. Memory 210 may store algorithms, routines or other instructions for processing data output by internal sensor 212 and/or other sensors as described below using logic or controllers of sensor processor 208, as well as storing raw data and/or motion data output by internal sensor 212 or other sensors. Memory 210 may also be used for any of the functions associated with memory 204. Internal sensor 212 may be one or more sensors for measuring motion of device 200 in space as described above, including inertial sensors such as an accelerometer and a gyroscope; other motion sensors, including a magnetometer, a pressure sensor or others, may be used in addition. Exemplary details regarding suitable configurations of host processor 202 and MPU 206 may be found in commonly-owned U.S. Patent Nos. 8,250,921, issued August 28, 2012, and 8,952,832, issued February 10, 2015, which are hereby incorporated by reference in their entirety. Suitable implementations for MPU 206 in device 200 are available from InvenSense, Inc. of San Jose, Calif. [0065] Optionally, another sensor assembly may be provided in the form of external sensor 214, which may represent sensors such as inertial motion sensors (i.e., an accelerometer and/or a gyroscope), other motion sensors or other types of sensors as described above. In this context, "external" means a sensor that is not integrated with MPU 206 and may be remote or local to device 200.
Also, alternatively or in addition, MPU 206 may receive data from an auxiliary sensor 216 configured to measure one or more aspects about the environment surrounding device 200. This is optional and not required in all embodiments. For example, a pressure sensor and/or a magnetometer may be used to refine motion determinations made using internal sensor 212. In the embodiment shown, host processor 202, memory 204, MPU 206 and other components of device 200 may be coupled through bus 218, while sensor processor 208, memory 210, internal sensor 212 and/or auxiliary sensor 216 may be coupled through bus 220, either of which may be any suitable bus or interface as described above. [0066] Device 200 also implements at least one perception sensor, which may include one or more of the sensors discussed above and provides perception sensor data 222. As discussed above, the techniques of this disclosure involve integrating perception sensor data 222 with the motion sensor data provided by internal sensor 212 (or other sensors) using an online map that is built in-run to provide the navigation solution for device 200. Again, as noted above, the perception sensor data 222 may be obtained from any suitable sensor associated with device 200, including from sensor(s) that may be part of the platform or may be implemented in any other desired manner. Also similar to the above embodiment, algorithms, routines or other instructions for processing sensor data, including integrating perception sensor data 222, may be employed by integration module 224 to perform any of the operations associated with the techniques of this disclosure. Determining the navigation solution may involve sensor fusion or similar operations performed by sensor processor 208. In other embodiments, some or all of the processing and calculation may be performed by the host processor 202, which may use the host memory 204, or any combination of other processing resources. [0067] As such, device 200 may have a source of absolute navigational information 226 and may include communications module 228 for any suitable purpose. Source of absolute navigational information 226 and/or communications module 228 may have any of the characteristics discussed above with regard to source of absolute navigational information 116 and communications module 118. [0068] As with processor 102, host processor 202 and/or sensor processor 208 may be one or more microprocessors, central processing units (CPUs), or other processors which run software programs for device 200 or for other applications related to the functionality of device 200. Embodiments of this disclosure may feature any desired division of processing between host processor 202, MPU 206 and other processing resources, as appropriate for the applications and/or hardware being employed. [0069] A state estimation technique, such as a filter, includes a prediction phase and an update phase (which may also be termed a measurement update phase) and may be used when generating the integrated and revised integrated navigation solutions, which as noted include at least position and may also include attitude (or orientation) and/or velocity. A state estimation technique also uses a system model and measurement model(s) based on what measurements are used. The system model is used in the prediction phase, and the measurement model(s) is/are used in the update phase.
As such, the state estimation techniques of this disclosure use a measurement model for the perception sensor data so that the obtained perception sensor data directly update the state estimation technique. Further, according to this disclosure, the state estimation technique is nonlinear. The nonlinear models do not suffer from approximation or linearization and can enhance the navigation solution of the device when using very low-cost, low-end inertial sensors. The perception sensor measurement model(s) is/are nonlinear. The system models can be linear or nonlinear. The system model may be a linear or nonlinear error-state system model. The system model may be a total-state system model; in most cases, total-state system models are nonlinear. In the total-state approach, the state estimation or filtering technique estimates the state of the device itself (such as position, velocity, and attitude of the device), and the system model or state transition model used is the motion model itself, which in the case of inertial navigation is a nonlinear model. This model is a total-state model since the estimated state is the state of the navigation device itself. In the error-state approach, the motion model is used externally in what is called inertial mechanization, which is a nonlinear model as mentioned earlier, and the output of this model is the navigation states of the module, such as position, velocity, and attitude. The state estimation or filtering technique estimates the errors in the navigation states obtained by the mechanization, so the state vector estimated by this state estimation or filtering technique is for the error states, and the system model is an error-state system model which transitions the previous error-state to the current error-state. The mechanization output is corrected for these estimated errors to provide the corrected navigation states, such as corrected position, velocity and attitude. The estimated error-state is about a nominal value, which is the mechanization output; the mechanization can operate either unaided in an open-loop mode, or it can receive feedback from the corrected states, in what is called closed-loop mode. Conventional linear state estimation techniques, such as a Kalman filter (KF) or an extended Kalman filter (EKF), require linearized approximations. By avoiding the need for approximation by linearization, the nonlinear techniques of this disclosure can provide a more accurate navigation solution for the device. [0070] Different architectures may be employed for the integration of data using state estimation techniques. Loosely-coupled integration uses an estimation technique to integrate motion sensor (inertial sensor) data and other sources of information in the position domain. Each source of information must therefore have its own separate technique to estimate position from that source alone, and all positions from the separate techniques are then combined in the loosely-coupled integration. Tightly-coupled integration uses an estimation technique to integrate motion sensor (inertial sensor) readings with raw measurements from another source of information using a single master filter. The raw measurements from this other source of information are integrated directly in this same single filter using an appropriate measurement model for these raw measurements, without any need to estimate position from them first.
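Purely as an illustrative sketch of such a tightly-coupled, nonlinear arrangement, the following shows a total-state particle filter skeleton in which particles are propagated with a nonlinear motion model in the prediction phase and then weighted directly with raw perception ranges against an online map in the update phase. The planar state, the Gaussian range likelihood, and the assumed expected_range() map query are simplifications for illustration and are not the specific models of this disclosure.

```python
import math, random

def predict(particles, speed, yaw_rate, dt, noise=(0.2, 0.01)):
    """Prediction phase: propagate each (x, y, heading, weight) particle
    through a nonlinear planar motion model with perturbed inputs."""
    propagated = []
    for x, y, heading, w in particles:
        v = speed + random.gauss(0.0, noise[0])
        heading = heading + (yaw_rate + random.gauss(0.0, noise[1])) * dt
        propagated.append((x + v * math.cos(heading) * dt,
                           y + v * math.sin(heading) * dt, heading, w))
    return propagated

def update(particles, ranges, bearings, online_map, sigma=1.0):
    """Update phase (tightly coupled): weight each particle by how well the
    raw measured ranges agree with ranges predicted from the online map,
    without ever computing a standalone position fix from the perception data."""
    weighted = []
    for x, y, heading, w in particles:
        likelihood = 1.0
        for r_meas, bearing in zip(ranges, bearings):
            r_pred = online_map.expected_range(x, y, heading + bearing)  # assumed map query
            likelihood *= math.exp(-0.5 * ((r_meas - r_pred) / sigma) ** 2)
        weighted.append((x, y, heading, w * likelihood))
    total = sum(w for _, _, _, w in weighted) or 1.0
    return [(x, y, heading, w / total) for x, y, heading, w in weighted]
```

A resampling step (for example, sampling/importance resampling) would typically follow the update so that particles with negligible weight are replaced.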
[0071] As will be described, the techniques of this disclosure build an online map for the environment surrounding the platform and device using perception sensor data 222. In one aspect, sensors such as those integrated with the device within the platform continuously log real-time measurements to derive positions that may be correlated with perception sensor data 222 as part of the map building operation. As used herein, a map is a representation of all the elements/objects in the real world that might affect the measurements logged by the sensors. A map can be either static or dynamic depending on whether only static objects, or both static and dynamic objects, are represented in the map. The online map may be built during a first instance of time so that it may then be used to revise the integrated navigation solution at a second instance of time as described herein. For the purposes of this disclosure, an instance of time may be an instant, an epoch, or other relatively short period of time involving the sampling of perception sensor and motion sensor data and generating a navigation solution for that instance. [0072] One type of perception sensor suitable for use with the techniques of this disclosure provides radar measurements. Notably, imaging radar can play a pivotal role in the proposed navigation system. In areas where GNSS signals are weak or obstructed, such as urban canyons or under dense foliage, radar data can supplement positioning information. By mapping the surrounding environment, the radar helps in maintaining positional accuracy even when GNSS data is compromised. The integration of radar allows for a more comprehensive understanding of the vehicle's spatial orientation and movement. Utilizing the radar's environmental mapping capabilities, the system can identify fixed landmarks or features, assisting in triangulating the vehicle's position. [0073] Another suitable type of perception sensor involves vision, such as in the form of optical samples from a camera or similar apparatus. Vision sensors provide the system with visual data to perceive the scene components, facilitating tasks such as object recognition, terrain analysis, and even decision-making based on visual inputs. Data could come from a mono camera, stereo cameras, or a thermal camera, allowing the navigation system to interact with its environment in a way that mimics human vision. Mono cameras capture single images, providing visual data akin to human eyesight that is particularly useful for tasks like lane detection, traffic sign recognition, and object classification. Mono cameras are simpler, cost-effective, and require less computational power for data processing. Stereo cameras, on the other hand, use two or more lenses to capture images from slightly different angles, allowing for depth perception through disparity maps, which is ideal for 3D mapping, obstacle detection in depth, and enhanced environmental understanding. Stereo cameras provide more detailed spatial information, crucial for precise navigation and complex decision-making processes. The vision system interprets visual cues from the surroundings, essential for understanding the context of the environment. By combining visual data with other types of information, such as radar, GNSS, IMU, and odometer inputs, the system can achieve a multi-dimensional view of its surroundings, crucial for accurate positioning and navigation in complex urban environments.
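The depth perception through disparity mentioned above follows the classic pinhole stereo relation, illustrated below with purely hypothetical focal length, baseline, and disparity values.

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: depth Z = f * B / d for disparity d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: a 700 px focal length, 12 cm baseline and 20 px disparity
# place the matched point about 4.2 m in front of the camera pair.
print(stereo_depth(disparity_px=20.0, focal_px=700.0, baseline_m=0.12))
```

Because depth is inversely proportional to disparity, distant objects are measured less precisely than nearby ones.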
[0074] As described herein, perception components such as imaging radar and vision can form an advanced navigation system capable of operating with high efficiency and accuracy in a wide range of applications. As desired, other types of perception sensors can be employed alternatively or in addition. Correspondingly, the techniques of this disclosure represent a notable advancement in the field of navigation technology, addressing many of the limitations of existing systems while opening new possibilities for autonomous navigation. [0075] Map information, including high definition (HD) maps, is integral to the functionality of modern navigation systems, particularly in the realm of autonomous vehicles and advanced driver-assistance systems (ADAS). For example, HD maps provide comprehensive data about road geometry and topology, including specifics like lane markers, road signs, traffic signals, and even the condition of the road surface. This granular level of detail may be essential for autonomous vehicles, enabling them to navigate complex environments safely and efficiently. Some other maps are 2D or 3D maps such as point clouds or occupancy maps. Autonomous vehicles rely heavily on various sensors like cameras, radar, and lidar to recognize their surroundings. Maps can be generated based on imaging radar data, vision data from mono/stereo/thermal cameras, or data from lidar or other types of perception sensors. According to the discussion below, a map can be pre-built from another type of pre-existing map, or it can be pre-built from different navigation sessions, potentially involving data gathered from multiple devices, or a map may be built online during a current navigation session. Detections may be projected and then accumulated over time, as in an imaging radar example, or maps can be created from vision by collecting different scenes and stitching them together to create a 3D scene of the environment. A map can be built for the traversed area in real time (online) during a given navigation session, or offline after the data collection sessions end. [0076] The navigation system conventionally requires two types of maps to run and provide a solution: a local map and a reference map, such as a global map or a semi-global map. The local map is the map created by the navigation filter from the readings of the perception sensor and the navigation system during the real-time session. Examples of real-time navigation filters include the Kalman filter (KF) and particle filters (PF). In the case of a PF, a local map can be created for the overall navigation solution from the PF or for the various particles in the PF. The local map could be generated from data collected over a short period of time, from one epoch (one instant) up to a few seconds. On the other hand, a reference map (a global map or a semi-global map) is a map that covers a relatively larger area than the local map around the current user location. In some cases, the global map is obtained from a pre-existing map for the area or can be pre-built, and the pre-existing or pre-built map can cover a group of building blocks, a certain area of interest, or a city. This map is often divided into tiles or sections for easy loading to and unloading from the navigation system. The global map is built for a certain area based on data collected over time, possibly from different sessions. For example, the global map can be pre-built using survey methods or crowdsourcing methods.
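Before continuing with the pre-built global map, the "project and accumulate detections" style of map building mentioned in paragraph [0075] above can be pictured with a small illustrative sketch. The two-dimensional grid, cell size, evidence increment, and pose format below are assumptions chosen for the example and are not the map representation of this disclosure.

```python
import math
from collections import defaultdict

class OnlineOccupancyGrid:
    """Toy 2D occupancy map: detections are projected into the world frame
    with the current pose estimate and accumulated as per-cell evidence."""

    def __init__(self, cell_size=0.5, hit_increment=0.9):
        self.cell_size = cell_size
        self.hit_increment = hit_increment
        self.evidence = defaultdict(float)

    def add_detections(self, pose, detections):
        """pose = (x, y, heading); detections = [(range_m, bearing_rad), ...]."""
        x0, y0, heading = pose
        for rng, bearing in detections:
            wx = x0 + rng * math.cos(heading + bearing)   # project to world frame
            wy = y0 + rng * math.sin(heading + bearing)
            cell = (int(wx // self.cell_size), int(wy // self.cell_size))
            self.evidence[cell] += self.hit_increment

    def occupied_cells(self, threshold=1.5):
        return [cell for cell, value in self.evidence.items() if value >= threshold]

grid = OnlineOccupancyGrid()
grid.add_detections((0.0, 0.0, 0.0), [(10.0, 0.0), (10.1, 0.0)])  # object seen from the first pose
grid.add_detections((1.0, 0.0, 0.0), [(9.2, 0.0)])                # same object after moving 1 m forward
print(grid.occupied_cells())   # the repeatedly observed cell crosses the threshold
```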
In these cases, for the pre-built global map, the data is collected from different routes to cover the traversed area at different times. The collected data is accumulated and filtered to build the global map. The global map tiles can be stored on a local device or in the cloud. The area map can be retrieved for use during the navigation session to provide an aid for the system against which the local map can be matched. [0077] The conventional requirement of a pre-existing map (also called a global map herein) for the traversed area in order to provide an absolute navigation solution may be a drawback, as the pre-built map must be available to run the navigation system. However, the techniques of this disclosure may be employed without a pre-built map by first building a map online during the concurrent navigation session to allow for perception-based matching that matches to a model (such as a 2D/3D map), and then integrating it with sensor-based navigation using a tight nonlinear state estimation technique. To sum up, this invention provides a novel technique for perception-based positioning that matches to a model and tightly integrates with sensor-based navigation without the need for a pre-existing map of the environment, because it can build an online map in-run during the session and then use it as a reference map (which may be called a semi-global map or online map). [0078] Aspects of these techniques avoid the need for pre-existing maps or pre-built maps by using at least one perception sensor to build a map of the surrounding area. In the coming discussion, this map will be referred to as an online or semi-global map. In some embodiments discussed in detail below, the online map may be built when certain favorable conditions are satisfied and certain criteria are met. Some examples are when the integrated navigation solution provided by the system is favorable and meets certain criteria, or when certain conditions are met for the motion sensor readings or the perception sensor readings. In another example, the online map may be built when absolute navigational information (such as, for example, GNSS) quality is good and meets certain criteria. Next, the online map (or semi-global map) may then be used and matched to the subsequent measurements from the same or different perception sensor(s) and be directly integrated in a tightly-coupled fashion with the state estimation technique of the integrated navigation solution to improve the navigation and positioning solution as needed. As will be appreciated, this technique provides an absolute positioning method utilizing the online map and the perception sensor measurements. This technique can be contrasted with relative navigation information aids such as perception odometry and perception-inertial odometry. Examples of perception odometry are visual odometry, radar odometry, or lidar odometry; examples of perception-inertial odometry are visual-inertial odometry, radar-inertial odometry, or lidar-inertial odometry. Absolute positioning methods are more accurate and more robust as compared to relative navigation methods (using relative navigation information aids), as the latter can accumulate errors over time and lack an absolute sense. The only benefit of relative navigation methods is that they do not require any pre-built map of the environment.
The technique of this disclosure also does not require any pre-built map but is still an absolute positioning method using perception sensors, and it is an improvement when compared to relative navigation methods such as REFERENCE NO: TPI-079PCT US PATENT APPLICATION perception odometry and perception-inertial odometry. Relative navigation methods cannot provide the same accuracy, longevity, reliability, and robustness as compared to an absolute positioning technique. [0079] In other words, to solve the problem of the need for the pre-built map, an online map building technique is used in parallel while the navigation solution is running in real-time. The online map is a map to be created during the navigation session for a period that may be a little bit longer as compared to the local map time period. The online map may cover a relatively smaller area during the navigation session as compared to the previously discussed pre-existing or pre-built global map. [0080] As will be discussed in detail later below, the online map may be built based on one or more favorable navigation conditions being satisfied. For example, the favorable conditions may be determined based on the motion sensor data, the perception sensor data, and/or the integrated navigation solution. Other available information may as be used as desired. Further, another possible source for determining a favorable condition to build the online map may be based at least in part on the quality of absolute navigational information, such as when a given location has a good GNSS signal (meeting certain criteria or conditions). When the absolute navigational information (such as GNSS) is favorable and meeting certain conditions, the integrated navigation solution will have a more superior quality (even better than the absolute navigational information alone). Consequently, those can be very favorable conditions to use the integrated navigation solution to build the online (or semi-global) map together with the perception sensor(s) measurements. With the online map, the system can run without the need for the pre-existing map and without the need to download such a global map from the cloud (for example). This can save time and cost for the navigation system. As mentioned above, the techniques of this disclosure may also provide benefits when compared to relative positioning techniques such as visual odometry, visual inertial odometry, radar odometry, and radar inertial odometry. Those techniques just mentioned are all relative techniques, they cannot provide the same accuracy, longevity, reliability, and robustness as compared to an absolute positioning technique as discussed earlier. [0081] Integrating the perception sensor data with the motion sensor data according to the techniques of this disclosure involves modelling the perception sensor data. One REFERENCE NO: TPI-079PCT US PATENT APPLICATION approach is to model measurements based on the perception sensor characteristics and environmental factors (i.e., exact model). This approach is not always effective because the derived model could be a function of unknown states or reaching an accurate model could consume a lot of resources. Accordingly, another approach is to use a probabilistic approach to build the measurement model using nonlinear techniques. 
As will be described in further detail below, suitable measurement models include a range- based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the map information, a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the perception sensor data, an estimated state of the platform and a nearest object identification from the map information, a map matching model based at least in part on a probability distribution derived by correlating a reference map, such as the online map that is built during the concurrent navigation session, to a local map generated using the perception sensor data and an estimated state of the platform, and a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the map information derived from the perception sensor data. [0082] To help illustrate the techniques of this disclosure, FIG.3 depicts an exemplary routine for building an online map using perception sensor data and then integrating perception sensor data and motion sensor data using the online map to provide an integrated navigation solution for a device within a moving platform. Although described in the context of device 100 as depicted in FIG.1, other architectures, including the one shown in FIG. 2, may be used as desired with the appropriate modifications. Initially, motion sensor data may be obtained for device 100, such as from sensor assembly 106. In one aspect, the sensor data may be inertial sensor data from one or more accelerometers, gyroscopes or other suitable motion and/or orientation detection sensors. Next, perception sensor data is obtained for the platform, which as noted above, may be from any one or more types of perception sensors. An integrated navigation solution is generated based at least in part on the obtained motion sensor data so that the perception sensor data may be used to build an online map. Once the online map has been built, the integrated navigation solution may be revised by directly integrating perception sensor data with motion sensor data. As described in the following material, a nonlinear state estimation technique may be employed that is REFERENCE NO: TPI-079PCT US PATENT APPLICATION configured to use a nonlinear measurement model such as those noted above. Consequently, the revised integrated navigation solution may then be provided when the revised integrated navigation solution is available. Otherwise, the integrated navigation solution may be provided when the revised integrated solution is not available. [0083] A further example of the techniques of this disclosure is schematically depicted in FIG. 4. Again, an online map is built using perception sensor data and then perception sensor data and motion sensor data are integrated using the online map to provide an integrated navigation solution for a device within a moving platform. Similar to the embodiments discussed above, motion sensor data may be obtained that represents motion of the platform. Perception sensor data is also obtained for the platform. An integrated navigation solution is generated based at least in part on the obtained motion sensor data so that the perception sensor data may be used to build an online map. Once the online map has been built, the integrated navigation solution may be revised by directly integrating perception sensor data with motion sensor data. 
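Purely as a schematic illustration of the per-epoch flow described in connection with FIG. 3 and FIG. 4, the loop below gates map building on favorable conditions and falls back to the integrated solution when no revised solution is available. Every callable and method name here is a hypothetical placeholder standing in for the corresponding operation, not an actual interface of this disclosure.

```python
def navigation_loop(read_motion, read_perception, generate_solution,
                    conditions_favorable, online_map, revise_with_perception):
    """Hypothetical per-epoch loop; all callables and online_map methods are placeholders."""
    while True:
        motion = read_motion()                     # obtain motion sensor data
        perception = read_perception()             # obtain perception sensor data
        solution = generate_solution(motion)       # integrated navigation solution

        # First instance of time: extend the online map only when favorable
        # conditions are met (e.g., good absolute-information or solution quality).
        if conditions_favorable(solution, motion, perception):
            online_map.add(perception, solution)

        # Second instance of time: once the map covers the area, revise the
        # solution by tightly integrating perception data against the map.
        revised = (revise_with_perception(perception, online_map)
                   if online_map.has_coverage(solution) else None)

        # Provide the revised solution when available, otherwise the integrated solution.
        yield revised if revised is not None else solution
```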
As shown, revising the integrated navigation solution may involve using the obtained motion sensor data in the nonlinear state estimation technique and integrating perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information. Consequently, the revised integrated navigation solution may then be provided when the revised integrated navigation solution is available. Otherwise, the integrated navigation solution may be provided when the revised integrated solution is not available. [0084] Therefore, the techniques of this disclosure include a method for providing an integrated navigation solution in real-time for a device within a moving platform. The method may involve obtaining motion sensor data from a sensor assembly of the device and obtaining perception sensor data from at least one perception sensor for the platform. An integrated navigation solution for the platform may be generated based at least in part on the obtained motion sensor data so that an online map for an area encompassing the platform in a first instance of time using perception sensor data may be built based at least in part on the integrated navigation solution during the first instance of time. The integrated navigation solution may then be revised in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique. The nonlinear state estimation technique may use a nonlinear REFERENCE NO: TPI-079PCT US PATENT APPLICATION measurement model for perception sensor data, such that integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly- coupled. Revising the integrated navigation solution may involve using the obtained motion sensor data in the nonlinear state estimation technique and integrating perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information. The revised integrated navigation solution may then be provided when available and the integrated navigation solution may be provided when the revised integrated navigation solution is not available. [0085] In one aspect, the at least one perception sensor is at least one of radar, an optical camera, lidar, a thermal camera, an IR camera or an ultrasonic sensor. The at least one perception sensor may be at least one radar that outputs radar measurements for the platform. The at least one perception sensor may be at least one optical sensor that outputs optical samples for the platform. The at least one perception sensor may be at least one radar that outputs radar measurements for the platform and at least one optical sensor that outputs optical samples for the platform. [0086] In one aspect, depth information for objects detected within the optical samples may be determined using at least one of: i) estimating depth for an object and deriving range, bearing and elevation; and ii) obtaining depth readings for an object from the at least one optical sensor and deriving range, bearing and elevation. A scene reconstruction operation may be determined for a local area surrounding the platform based at least in part on the determined depth information for objects detected within the optical samples. 
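A minimal sketch of how depth information for a detected object might be converted into range, bearing and elevation, assuming a simple pinhole camera model; the intrinsic parameters, pixel coordinates, and axis conventions below are illustrative assumptions only.

```python
import math

def range_bearing_elevation(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into the camera frame and
    express the point as range, bearing (azimuth) and elevation angles."""
    x = (u - cx) / fx * depth_m                    # right of the optical axis
    y = (v - cy) / fy * depth_m                    # below the optical axis
    z = depth_m                                    # along the optical axis
    rng = math.sqrt(x * x + y * y + z * z)
    bearing = math.atan2(x, z)                     # positive to the right
    elevation = math.atan2(-y, math.hypot(x, z))   # positive above the horizon
    return rng, bearing, elevation

# Illustrative intrinsics (fx = fy = 700 px, principal point at the center of a
# 1280 x 720 image) and an object detected at pixel (800, 300) with 12 m depth.
print(range_bearing_elevation(800, 300, 12.0, fx=700.0, fy=700.0, cx=640.0, cy=360.0))
```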
[0087] In one aspect, the at least one perception sensor may include at least two types of perception sensors and wherein building the online map comprises using one type of perception sensor and wherein integrating the motion sensor data comprises integrating data from another type of perception sensor directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the data from the another type of perception sensor in the nonlinear state estimation technique. [0088] In one aspect, building an online map for an area encompassing the platform REFERENCE NO: TPI-079PCT US PATENT APPLICATION may be based at least in part on satisfaction of a favorable condition. [0089] In one aspect, the measurement model comprises at least one of: i) a range- based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the online map; ii) a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the optical samples, an estimated state of the platform and a nearest object identification from the online map; iii) a map matching model based at least in part on a probability distribution derived by correlating the online map to a local map generated using the optical samples and an estimated state of the platform; and iv) a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the online map. [0090] In one aspect, the nonlinear measurement model further comprises models for perception sensor errors comprising any one or any combination of environmental errors, sensor-based errors and dynamic errors. [0091] In one aspect, the nonlinear state estimation technique comprises at least one of: i) an error-state system model; ii) a total-state system model, wherein the integrated navigation solution is output directly by the total-state model; and iii) a system model receiving input from an additional state estimation technique that integrates the motion sensor data. The nonlinear state estimation technique may be an error-state system model such that providing the integrated navigation solution may involve correcting an inertial mechanization output with the updated nonlinear state estimation technique. The system model may be a system model receiving input from an additional state estimation technique, which integrates any one or any combination of: i) inertial sensor data; ii) odometer or means for obtaining platform speed data; iii) pressure sensor data; iv) magnetometer data; and v) absolute navigational information. The system model of the nonlinear state estimation technique may further include a motion sensor error model. [0092] In one aspect, the nonlinear state estimation technique may be at least one of: i) a Particle Filter (PF); ii) a PF, wherein the PF comprises a Sampling/Importance Resampling (SIR) PF; and iii) a PF, wherein the PF comprises a Mixture PF. [0093] In one aspect, a source of absolute navigational information may be used REFERENCE NO: TPI-079PCT US PATENT APPLICATION when generating the integrated navigation solution. Building the online map may be performed based at least in part on quality of the absolute navigational information. [0094] In one aspect, the method may also involve storing and retrieving the online map based on a current position of the platform. 
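As one illustration of the probabilistic measurement models enumerated above, a nearest object likelihood style score for a hypothesized platform state might be sketched as follows, where the Gaussian form, the standard deviation, and the point-object map representation are assumptions chosen for the example rather than the models of this disclosure.

```python
import math

def nearest_object_likelihood(state, detections, map_objects, sigma=0.8):
    """Score a hypothesized state (x, y, heading): project each detection
    (range, bearing) into the world frame and evaluate a Gaussian on the
    distance to the nearest object stored in the online map."""
    x, y, heading = state
    likelihood = 1.0
    for rng, bearing in detections:
        px = x + rng * math.cos(heading + bearing)
        py = y + rng * math.sin(heading + bearing)
        d_nearest = min(math.hypot(px - ox, py - oy) for ox, oy in map_objects)
        likelihood *= math.exp(-0.5 * (d_nearest / sigma) ** 2)
    return likelihood

map_objects = [(10.0, 0.0), (12.0, 3.0)]        # objects already in the online map
detections = [(10.0, 0.0)]                      # one detection straight ahead
print(nearest_object_likelihood((0.0, 0.0, 0.0), detections, map_objects))  # ~1.0
print(nearest_object_likelihood((0.0, 2.0, 0.0), detections, map_objects))  # much smaller
```

States whose projected detections land near mapped objects receive higher likelihood, which is how such a model can directly update the state estimation technique without first computing a standalone perception-based position.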
[0095] In one aspect, a misalignment between a frame of the sensor assembly and a frame of the platform may be determined, wherein the misalignment is at least one of: i) a mounting misalignment; and ii) a varying misalignment. [0096] In one aspect, a misalignment between a frame of the sensor assembly and a frame of the platform may be determined, wherein the misalignment is determined using any one or any combination of: i) a source of absolute velocity; ii) a radius of rotation calculated from the motion sensor data; and iii) leveled horizontal components of acceleration readings along forward and lateral axes from the motion sensor data. [0097] Further, this disclosure is also directed to a system for providing an integrated navigation solution in real-time for a device within a moving platform. The system may include a device having a sensor assembly configured to output motion sensor data, at least one perception sensor providing perception sensor data and at least one processor, coupled to receive the motion sensor data and the perception sensor data. The at least one processor may be operative to generate an integrated navigation solution for the platform based at least in part on the motion sensor data, build an online map for an area encompassing the platform in a first instance of time using perception sensor data based at least in part on the integrated navigation solution during the first instance of time, revise the integrated navigation solution in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique, wherein a prediction phase involving a system model is used to propagate predictions about a state of the platform and an update phase involving at least one measurement model relating measurements to the state is used to update the state of the platform, wherein the nonlinear state estimation technique comprises using a nonlinear measurement model for perception sensor data, wherein integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly- coupled, and wherein the revising comprises: i) using the received motion sensor data in the nonlinear state estimation technique; and ii) integrating perception sensor data REFERENCE NO: TPI-079PCT US PATENT APPLICATION directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information; and provide the revised integrated navigation solution when available and the integrated navigation solution when the revised integrated solution is not available. [0098] In one aspect, the at least one perception sensor is at least one of radar, an optical camera, lidar, a thermal camera, an IR camera or an ultrasonic sensor. The at least one perception sensor may be at least one radar that outputs radar measurements for the platform. The at least one perception sensor may be at least one optical sensor that outputs optical samples for the platform. The at least one perception sensor may be at least one radar that outputs radar measurements for the platform and at least one optical sensor that outputs optical samples for the platform. 
Further, the at least one perception sensor may include at least two types of perception sensors and wherein building the online map comprises using one type of perception sensor and wherein integrating the motion sensor data comprises integrating data from another type of perception sensor directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the data from the another type of perception sensor in the nonlinear state estimation technique. [0099] In one aspect, the measurement model comprises at least one of: i) a range- based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the online map; ii) a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the optical samples, an estimated state of the platform and a nearest object identification from the online map; iii) a map matching model based at least in part on a probability distribution derived by correlating the online map to a local map generated using the optical samples and an estimated state of the platform; and iv) a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the online map. [00100] In one aspect, the nonlinear state estimation technique comprises at least one of: i) an error-state system model; ii) a total-state system model, wherein the integrated navigation solution is output directly by the total-state model; and iii) a system model receiving input from an additional state estimation technique that integrates the motion sensor data. The nonlinear state estimation technique may be an REFERENCE NO: TPI-079PCT US PATENT APPLICATION error-state system model such that providing the integrated navigation solution may involve correcting an inertial mechanization output with the updated nonlinear state estimation technique. [00101] In one aspect, the sensor assembly includes an accelerometer and a gyroscope. The sensor assembly may be implemented as a Micro Electro Mechanical System (MEMS). [00102] In one aspect, the system includes a source of absolute navigational information. [00103] In one aspect, the system includes any one or any combination of: A) an odometer or means for obtaining platform speed; B) a pressure sensor; C) a magnetometer. EXAMPLES [00104] It is contemplated that the present methods and systems may be used for any application involving integrating perception sensor data with motion sensor data to provide a navigation solution. Without any limitation to the foregoing, the present disclosure is further described by way of the following examples. 1 System Overview [00105] The embodiments of the navigation system and techniques of this disclosure comprise integration of four key technologies: Inertial Measurement Unit (IMU), Global Navigation Satellite System with Real-Time Kinematic (GNSS-RTK), odometer, and perception sensors. Each component has been carefully selected and optimized to work in concert with the others, resulting in a navigation solution that is greater than the sum of its parts. One exemplary system architecture is schematically depicted in FIG. 5. Particularly, the Inertial Measurement Unit (IMU) system can provide highly accurate acceleration and rotation data in three dimensions. Its advanced algorithms allow for precise motion detection and orientation, crucial in environments where GNSS data might be unreliable. 
The Global Navigation Satellite System Real- Time Kinematic (GNSS-RTK) offers unparalleled positional accuracy. By combining signals from global satellites with real-time corrections from ground-based reference stations, this system achieves a level of precision in location tracking that is unmatched REFERENCE NO: TPI-079PCT US PATENT APPLICATION by standard GPS systems. The integration of an advanced odometer or any other means for obtaining platform speed allow the system to measure distance traveled with high accuracy. This data is particularly useful for incremental positioning and speed measurement, supplementing the information provided by the GNSS-RTK and IMU. [00106] Imaging radar may play a pivotal role in the navigation techniques of this disclosure. In areas where GNSS signals are weak or obstructed, such as urban canyons or under dense foliage, radar data can supplement positioning information. By mapping the surrounding environment, the radar helps in maintaining positional accuracy even when GNSS data is compromised. The integration of radar allows for a more comprehensive understanding of the platoform’s spatial orientation and movement. Utilizing the radar's environmental mapping capabilities, the system can identify fixed landmarks or features, assisting in triangulating position. [00107] Vision sensors provide the system with visual data to perceive the scene components and facilitating tasks such as object recognition, terrain analysis, and even decision-making based on visual inputs. Data could come from either mono camera, stereo cameras, or thermal camera making the navigation system to interact with its environment in a way that mimics human vision. Mono cameras capture single images, providing visual data like human eyesight which are particularly useful for tasks like lane detection, traffic sign recognition, and object classification. Mono cameras are simpler, cost-effective, and require less computational power for data processing. On the other hand, stereo cameras are using two or more lenses, capture images from slightly different angles, allowing for depth perception through disparity maps which is ideal for 3D mapping, obstacle detection in depth, and enhanced environmental understanding. Stereo cameras provide more detailed spatial information, crucial for precise navigation and complex decision-making processes. The vision system interprets visual cues from the surroundings, essential for understanding the context of the environment. Combining visual data with radar, GNSS-RTK, IMU, and odometer inputs, the system can achieve a multi-dimensional view of its surroundings, crucial for accurate positioning and navigation in complex urban environments. [00108] As such, perception components, imaging radar and vision, can form an advanced navigation system capable of operating with high efficiency and accuracy in a wide range of applications. The system embodiments of this disclosure represent a REFERENCE NO: TPI-079PCT US PATENT APPLICATION notable advancement in the field of navigation technology, addressing many of the limitations of existing systems while opening new possibilities for autonomous navigation. 2 System Operation 2.1 GNSS Integration [00109] Global Navigation Satellite System (GNSS) technology represents a significant advancement in high-precision satellite positioning. 
The essence of GNSS lies in its ability to correct for various sources of error that typically affect GNSS signals, such as atmospheric disturbances, satellite clock errors, and orbital inaccuracies. This is achieved through complex modeling and the use of differential techniques, where the reference station data is used to improve the accuracy of satellite position measurements. GNSS can suffer from outage due to high buildings such as downtown areas or underground areas such as underground parkades, under bridges, and in tunnels. The signal can be lost completely to make a full outage, or it can be lost partially to create partial outage. The results from the GNSS system will not be available during the full outages and will be degraded during the partial outages. [00110] Global Navigation Satellite System Real-Time Kinematic (GNSS-RTK) is usually used to enhance the precision of position data derived from satellite-based positioning systems. This technique can improve the accuracy of GNSS systems to centimeter-level, which is significantly more precise than the meter-level accuracy achievable with standard GNSS. The RTK system works by using a fixed base station that knows its exact location to correct positioning errors caused by atmospheric interferences, timing errors, and satellite orbit errors. The base station broadcasts real- time corrections to a mobile receiver, which uses these corrections to compute its precise position. [00111] Precise Point Positioning (PPP) is another technique which is being used in geodesy and navigation that allows GPS and GNSS receivers to determine their location on Earth with a high degree of accuracy, typically within a few centimeters or decimeters. Unlike the Real-Time Kinematic (RTK) positioning method, which requires a fixed base station nearby to provide corrections, PPP can achieve its high accuracy without the need for a local reference station. This is accomplished by using precise REFERENCE NO: TPI-079PCT US PATENT APPLICATION satellite orbit and clock data. PPP relies on correction data that is either received in real- time via a communication link or applied post-mission in a process known as post- processing. These corrections account for various error sources affecting satellite navigation signals, including satellite clock and orbit errors, atmospheric delays (ionospheric and tropospheric), and other systemic biases. The key advantages of PPP include its global availability, as it does not depend on the proximity to a base station, and its ability to provide high-accuracy positioning anywhere in the world. [00112] The positioning of a moving platform, such as wheelbased platforms/vehicles or individuals, is commonly achieved using known reference-based systems, such as GNSS. The GNSS comprises a group of satellites that transmit encoded signals to receivers on the ground that, by means of trilateration techniques, can calculate their position using the travel time of the satellites’ signals and information about the satellites’ current location. Such positioning techniques are also commonly utilized to position the moving platform. [00113] To achieve more accurate, consistent and uninterrupted positioning information, GNSS information may be augmented with additional positioning information obtained from complementary positioning systems. 
Such systems may be self-contained and/or “non-reference based” systems within the device or the platform, and thus need not depend upon external sources of information that can become interrupted or blocked. [00114] Inertial motion sensors are “non-reference based” systems which provide measurements to a vehicle navigation system. Notably, motion sensor data includes information from accelerometers, gyroscopes, or other implementations of an Inertial Measurement Unit (IMU). Inertial sensors are self-contained sensors that use gyroscopes to measure the rate of rotation/angle, and accelerometers to measure the specific force (from which acceleration is obtained). Inertial sensors data may be used in an inertial navigation system (INS), which is a non-reference based relative positioning system. The primary challenge in IMU data processing is to accurately determine the position, velocity, and orientation of an object, often in 3D space, from the raw sensor outputs. [00115] One such “non-reference based” or relative positioning system is the REFERENCE NO: TPI-079PCT US PATENT APPLICATION inertial navigation system (INS). Using initial estimates of position, velocity and orientation angles of the device or platform as a starting point, a mechanization algorithm is used to get the orientation angle from 3D gyroscope and the linear acceleration, velocity, and position from 3D accelerometer. Typically, measurements are integrated once for gyroscopes to yield orientation angles and twice for accelerometers to yield position of the device or platform incorporating the orientation angles. Thus, the measurements of gyroscopes will undergo a triple integration operation during the process of yielding position. Inertial sensors alone, however, are unsuitable for accurate positioning because the required integration operations of data results in positioning solutions that drift with time, thereby leading to an unbounded accumulation of errors. [00116] Where available, another known complementary “nonreference based” system is a system for measuring speed/velocity information such as, for example, odometric information from a odometer within the platform. Odometric data can be extracted using sensors that measure the rotation of the wheel axes and/or steer axes of the platform (in case of wheeled platforms). Wheel rotation information can then be translated into linear displacement, thereby providing wheel and platform speeds, resulting in an inexpensive means of obtaining speed with relatively high sampling rates. Where initial position and orientation estimates are available, the odometric data are integrated thereto in the form of incremental motion information over time. [00117] Given that the positioning techniques described above (whether INS/GNSS or INS/GNSS/Speed Information) may suffer loss of information or errors in data, common practice involves integrating the information/data obtained from the GNSS with that of the complementary system(s). For instance, to achieve a better positioning solution, INS and GNSS data may be integrated because they have complementary characteristics. INS readings are accurate in the short-term, but their errors increase without bounds in the long-term due to inherent sensor errors. GNSS readings are not as accurate as INS in the short-term, but GNSS accuracy does not decrease with time, thereby providing long-term accuracy. Also, GNSS may suffer from outages due to signal blockage, multipath effects, interference or jamming, while INS is immune to these effects. 
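To make the use of odometric speed as incremental motion information more concrete, the following is a minimal sketch of planar dead reckoning from platform speed and gyroscope yaw rate; the function name, the planar (2D) assumption and the simple sample-by-sample integration scheme are illustrative assumptions rather than the specific mechanization used in this disclosure.

```python
import math

def propagate_dead_reckoning(x, y, heading, speed, yaw_rate, dt):
    """One dead-reckoning step: integrate yaw rate into heading, then
    translate platform speed into an incremental displacement.

    x, y     : current horizontal position (m)
    heading  : current heading angle (rad)
    speed    : platform speed from the odometer or wheel encoders (m/s)
    yaw_rate : rate of rotation about the vertical axis from the gyroscope (rad/s)
    dt       : sample interval (s)
    """
    heading_new = heading + yaw_rate * dt            # single integration of the gyroscope
    x_new = x + speed * dt * math.cos(heading_new)   # incremental motion along the heading
    y_new = y + speed * dt * math.sin(heading_new)
    return x_new, y_new, heading_new
```

Because each step adds new sensor error on top of the previous estimate, the position error of such a relative solution grows without bound over time, which is why it is integrated with absolute sources such as GNSS and, in this disclosure, map-based perception updates.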
[00118] Speed information from the odometric readings, or from any other REFERENCE NO: TPI-079PCT US PATENT APPLICATION source, may be used to enhance the performance of the integrated INS/GNSS solution by providing velocity updates, however, current INS/GNSS/Odometry systems continue to be plagued with the unbounded growth of errors over time during GNSS outages. [00119] One reason for the continued problems is that commercially available navigation systems using INS/GNSS integration or INS/GNSS/Odometry integration rely on the use of traditional Kalman Filter (KF)-based techniques for sensor fusion and state estimation. The KF is an estimation tool that provides a sequential recursive algorithm for the estimation of the state of a system when the system model is linear. [00120] As is known, the KF estimates the system state at some time point and then obtains observation “updates” in the form of noisy measurements. KF equations may be considered time update or “prediction” equations that are used to project forward in time the current state and error covariance estimates to obtain an a priori estimate for the next step or measurement update or “correction” equations that are used to incorporate a new measurement into the a priori estimate to obtain an improved posteriori estimate. [00121] The INS/GNSS integration problem at hand has nonlinear models. Thus, to utilize the linear KF estimation techniques in this type of problem, the nonlinear INS/GNSS model has to be linearized around a nominal trajectory. This linearization means that the original (nonlinear) problem can be transformed into an approximated problem that may be solved optimally, rather than approximating the solution to the correct problem. The accuracy of the resulting solution can thus be reduced due to the impact of neglected nonlinear and higher order terms. These neglected higher order terms are more influential and cause error growth in the positioning solution, in degraded and GNSS-denied environments, particularly when low-cost MEMS-based IMUs are used. [00122] In addition, the traditional INS typically relies on a full inertial measurement unit (IMU) having three orthogonal accelerometers and three orthogonal gyroscopes. This full IMU setting has several sources of error, which can cause severe effects on the positioning performance. The residual uncompensated sensor errors, even after KF compensation, can cause position error composed of three additive quantities: (i) proportional to the cube of GNSS outage duration and the uncompensated horizontal REFERENCE NO: TPI-079PCT US PATENT APPLICATION gyroscope biases; (ii) proportional to the square of GNSS outage duration and the three accelerometers uncompensated biases, and (iii) proportional to the square of GNSS outage duration, the horizontal speed, and the vertical gyroscope uncompensated bias. [00123] In addition to the previous sensors/systems, barometers play a significant role in height estimation which may be used with the previous mentioned navigation systems to enhance vertical accuracy, especially for off-road and autonomous vehicles. By calibrating the barometer against known altitudes or sea level pressure, the navigation system can estimate the vehicle elevation with a higher degree of accuracy. This is crucial in applications that need to maintain a specific altitude above ground level, or in off-road vehicles navigating through varied terrain, to provide precise height information. 
The use of barometers for height estimation in vehicle navigation provides an additional layer of precision to navigational systems in environments where vertical positioning is as critical as horizontal. 2.2 Radar System [00124] Radars have numerous characteristics based on the signals used by the sensor, the covered area/volume by the radar, the accuracy and resolution of radar range/bearing, and the type of measurements logged by the sensor. Unlike traditional radars, imaging radars utilize sophisticated beamforming techniques to create high- resolution, two or three-dimensional images of the environment. The configuration often includes multiple transmitting and receiving elements, enabling the system to cover a wide field of view and capture detailed spatial information. The data processing and filtering stage in imaging radars involves the conversion of raw radar signals into meaningful spatial data. Techniques like doppler processing are used to determine the velocity of objects, while sophisticated data fusion algorithms can integrate radar data with information from other sensors for a more comprehensive environmental understanding. In terms of modeling for navigation state estimation, imaging radars are pivotal. They provide critical data inputs for algorithms that estimate the vehicle's position, velocity, and trajectory. The integration of imaging radar data into navigation systems requires the development of a model to use its data. [00125] There are different characteristics and the types of radar sensors to be considered in the radar measurement model. First aspect, choosing between pulse-based REFERENCE NO: TPI-079PCT US PATENT APPLICATION and Frequency Modulated Continuous Wave (FMCW) Radars. Pulse-based radars do not require complex computations like FMCW-based radars. Moreover, there is no doppler-range coupling like in the case of some modulation FMCW-based modulation scheme. However, pulse-based radars leak power to adjacent bands, limiting the reuse of frequencies. This is a limitation of pulse-based radars because autonomous vehicles will be operating in the close vicinity, especially in urban areas. Second aspect, select the best radar operating band from the most two prominent operating bands; the 24 GHz and the 77 GHz. The choice of the operating band will affect the choice of the range accuracy and resolution of the radar alongside with the dimensions of the radar antenna. Third aspect, the provided radar measurements with the radar signal. The radar signal processing unit estimates the range and doppler from the received reflections including the azimuth/bearing angle and elevation angle. By grouping range, doppler and power measurements from adjacent cells, a software layer might be able to estimate the centroid of all targets. All this data can be used to derive a more accurate radar measurement model. One more aspect, categorizing radars based on their range and Field of View (FOV). This leads to three types of radars: Long Range Radar (LRR), Medium Range Radar (MRR) and Short-Range Radar (SRR). The Field of View (FOV) of the radar is equivalent to the beamwidth in azimuth and elevation. But some manufacturers are also working on 3D radars that have wider azimuth and elevation beamwidth. Moreover, some automotive radars can scan medium range and long range simultaneously. 2.3 Vision System [00126] Vision systems often comprise stereo cameras or multiple monocular cameras strategically positioned to capture a wide field of view. 
The configuration is designed to mimic human vision, providing depth perception and a comprehensive visual understanding of the surroundings. The data processing and filtering stage in vision systems is complicated. Initially, raw visual data undergoes preprocessing to adjust for variations in lighting, contrast, and to filter out noise. Advanced computer vision algorithms then analyze these images, detecting and classifying objects. Feature extraction techniques are important at this stage, enabling the system to identify key elements in the visual data that are relevant for navigation. Then a feature matching technique is used to identify the similarity between the different scenes which represent REFERENCE NO: TPI-079PCT US PATENT APPLICATION the maps. Scene reconstruction is used to construct a dynamic 3D model of the vehicle's environment. This model is continuously updated as new visual data is captured, providing a real-time, detailed understanding of the surroundings. [00127] When employing an optical sensor outputting optical samples for the platform, different methods may be employed for determining depth information for objects within the optical samples. For example, depth may estimated to provide depth data for objects detected within the optical samples. From the depth estimation, range, bearing and elevation for the objects may be extracted and fed to the nonlinear state estimation technique. In some embodiments, the depth information can be estimated using neural network techniques. Alternatively or in addition, depth readings may be directly available from the at least one optical sensor, such as when using stereo sensors or when a stream of samples are available. Similarly, range, bearing and elevation for objects within the samples may be extracted and fed to the nonlinear state estimation technique. Neural network techniques can also be applied. As yet another alternative or additional source of information, a scene representing a local environment surrounding the platform may be reconstructed based on information from the at least one optical sensor, including the depth information discussed above. Then, the reconstructed local map can be compared to a reference map, such as the online map built according to the techniques of this disclosure. Once more, neural network techniques can be employed during scene reconstruction. More explanation regarding this can be found later below in section 2.5.6. [00128] For navigation state estimation, vision systems provide essential information for determining the vehicle’s position relative to the road and other objects. Modeling for navigation involves integrating visual data with inputs from other sensors, like GNSS, IMUs, and odometry, to accurately estimate the vehicle's current state and predict future states. This integration is essential for making informed navigation decisions, ensuring safety, and enhancing the overall efficiency of the autonomous system. [00129] Map building using vision samples may involve vision-based scene reconstruction, a sophisticated process in computer vision and robotics that is integral to understanding and interacting with the environment. This operation involves capturing visual data from the environment using cameras, which could be monocular, stereo, or a REFERENCE NO: TPI-079PCT US PATENT APPLICATION more complex array of cameras for a wider field of view and depth perception. 
The core of scene reconstruction lies in converting these two-dimensional images into a three- dimensional model of the scene. In the reconstruction process, features from multiple images are extracted, matched, and triangulated to create a 3D representation. This process involves significant computational challenges, particularly in dealing with large datasets, ensuring accurate feature matching, and managing changes in lighting and perspective. The application of vision-based scene reconstruction is being used in autonomous vehicle navigation, where an accurate 3D model of the surrounding environment is built. It helps to capture the surrounding scene and reconstruct as a 3D map with objects and features. [00130] Algorithms like Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM) are commonly used for scene reconstruction. SfM reconstructs the 3D structure of a scene by analyzing the motion of the camera, while SLAM simultaneously maps the environment and tracks the camera's location within it. Generative Adversarial Networks (GANs) can generate realistic 3D models by learning the distribution of real-world objects and environments. GANs are particularly useful for filling in gaps in data, enhancing the detail of reconstructed scenes, or generating textures. GANs can work with SfM or SLAM to improve their accuracy and efficiency, allowing for the dynamic reconstruction of 3D scenes in real-time from moving cameras. [00131] In addition, point cloud processing, powered by Artificial Intelligence (AI) is another critical development in 3D scene reconstruction. Deep learning models, specifically designed to handle the irregular format of point clouds, enable the accurate classification, segmentation, and reconstruction of 3D spaces from sparse and noisy sensor data. This approach provides understanding for the 3D structure of the environment which is essential for the navigation system. 2.4 Map Information [00132] One known type of map is a location-based map, in which the map is represented as a set of objects in set ^, where the ^^^ object denoted by ^^ = ^^^,^^,^^ in set ^, is a 3D location in the map. Here, ^^^,^^,^^ is the Cartesian coordinates represented by the ^^^ element. Each location-object can contain other attributes REFERENCE NO: TPI-079PCT US PATENT APPLICATION describing the object. A distinctive characteristic of location-based maps is that the list of objects in the set ^ are indexed by their location instead of any other attribute. The main advantage of location-based maps is that every location in the map is represented, hence, the map has a full description of empty and non-empty locations in the map. A well-known example of Location-based maps is an Occupancy Grid Map (OGM), where the real world is discretized into squares (in the case of 2D maps) or cubes (in the case of 3D maps). The objects in the OGM map are the locations of the center-point of the squares/cubes, where each location-object might have several attributes. For examples, one attribute could reflect whether the squares/cubes are occupied or empty (alternatively this attribute could reflect whether the squares/cubes are occupied, empty or unmapped), another attribute could contain the expected measurements vector of a specific sensor at the current location-object. [00133] Another known type of map is a feature-based map, in which the map is represented as a set of objects in set ^, where the ^^^ object denoted by ^^ is a specific feature-object in the map. 
In other words, a feature-based map is a set of objects that somehow represent certain features in the environment. These objects usually have several attributes, including the location of the object. A distinctive characteristic of a feature-based map is that only selected locations of the environment are represented in the set. Feature-based maps can be either sparse or dense, depending on the number of feature-objects across the map. Moreover, the feature-objects can be uniformly distributed (or follow any other distribution) across different locations in a map, or they can be congested in specific locations. Finally, the uniqueness of each of the feature-objects is another characteristic of a feature-based map. These characteristics affect how useful the feature-based map can be for localization purposes. Dense, unique and uniformly distributed feature-objects (across locations in a map) are generally favorable characteristics for localization systems.

[00134] Map information as used herein may also be optical signature maps, which include optical signatures regarding objects in the environment. As will be appreciated, the geometry of the objects in an environment will be represented in the optical samples, which can be considered an optical signature for this section of the environment; hence, detected sections of objects can be inferred or identified by comparing the optical signature to the object signature. Matches may allow the assumption that a section detected by the optical sensor is the section of objects that maximizes the correlation between the optical signature and a specific object geometry or object signature in the map.

[00135] Different types of maps can be built, such as 2D maps and 3D maps. For example, some perception sensors may output information in 2D while others may output information in 3D. As such, building the map may involve projecting 3D measurements onto a 2D map. The system can create maps with or without certain features, such as traffic lights and bridges. It can also update the map to filter out unwanted objects, such as parked cars.

2.5 Tightly-coupled Perception Sensor Integration And Online Map

[00136] Notably, embodiments of this disclosure involve providing a navigation solution with nonlinear Perception/INS/GNSS integration. Perception-based matching to a model and tight integration with sensor-based navigation may be provided without the need of using a pre-existing map for the environment. One exemplary illustration is schematically depicted in FIG. 6. As shown in the figure, the perception data is used to create a local map when initially generating a navigation solution for the platform, as well as being used to build an online (semi-global) map. The navigation system model can use different maps to update the navigation states. Perception sensor data is processed using advanced algorithms to construct online, detailed 2D/3D maps of the surrounding environment. These digital maps help enhance the vehicle's perception system, especially in challenging conditions. In addition, the online maps can be used in the form of 3D maps or converted to 2D maps. Online maps can be built using data from an individual perception sensor or a combination of different perception sensors.
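As a concrete illustration of a location-based online map of the kind described above, the sketch below maintains a simple 2D occupancy grid and inserts perception detections using the current navigation state; the class and method names, the planar range/bearing detection format and the single occupancy attribute per cell are illustrative assumptions, not the specific map structure of this disclosure.

```python
import numpy as np

class OccupancyGridMap2D:
    """Minimal location-based map: every cell location is represented and
    carries an occupancy attribute (0 = empty/unmapped, 1 = occupied)."""

    def __init__(self, width_m, height_m, cell_m, origin_xy=(0.0, 0.0)):
        self.cell = cell_m
        self.origin = np.asarray(origin_xy, dtype=float)
        rows, cols = int(height_m / cell_m), int(width_m / cell_m)
        self.grid = np.zeros((rows, cols), dtype=np.int8)

    def world_to_cell(self, x, y):
        """Convert a global-frame position to grid indices."""
        col = int((x - self.origin[0]) / self.cell)
        row = int((y - self.origin[1]) / self.cell)
        return row, col

    def insert_detections(self, pose, ranges, bearings):
        """Mark the cells hit by perception detections.

        pose     : (x, y, heading) from the integrated navigation solution
        ranges   : iterable of measured ranges (m)
        bearings : iterable of bearing angles (rad) relative to the platform frame
        """
        x0, y0, heading = pose
        for r, b in zip(ranges, bearings):
            gx = x0 + r * np.cos(heading + b)   # project detection to the global frame
            gy = y0 + r * np.sin(heading + b)
            row, col = self.world_to_cell(gx, gy)
            if 0 <= row < self.grid.shape[0] and 0 <= col < self.grid.shape[1]:
                self.grid[row, col] = 1
```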
2.5.1 GNSS Quality [00137] In some embodiments, an aspect of the online map building process involves detecting areas that are suitable to build sub-maps that in turn are stitched or merged into the online map as discussed in further detail below. One representative routine is schematically depicted in FIG.7 and includes operations associated with evaluating GNSS quality. As indicated, the quality of absolute position information for the platform is determined, so that the online (semi-global) map is built during periods REFERENCE NO: TPI-079PCT US PATENT APPLICATION with good GNSS, otherwise the routine recycles until quality improves. Sub maps built during periods of good GNSS are then stitched during the building process. Good GNSS detection logic helps identify the areas with good Global Navigation Satellite System GNSS signals. Creating a logic for choosing or detecting areas with good GNSS reception is important for optimizing the performance of navigation systems. This logic involves assessing various environmental and technical factors that influence GNSS signal quality. Key factors include line-of-sight to satellites, which can be obstructed by tall buildings, dense foliage, or mountainous terrain, leading to what's known as urban canyons or foliage attenuation effects. [00138] The logic utilizes the GNSS real-time data such as the GNSS position standard deviation values and the GNSS velocity standard deviation values. In addition, the RTK status can help to identify the areas with qualified GNSS signals. The RTK status can be fixed, float, or others. The good GNSS flag indicates that the current area has high probability for a good performance and accurate navigation status. Once an area has a good GNSS status, the system can start building the semi-global map for this area. [00139] Another criterion that may be considered when evaluating GNSS quality is satellite geometry and constellation status. The relative positioning of satellites can greatly affect GNSS accuracy. A concept known as Dilution of Precision (DOP) measures this effect. A logic system can use real-time data on satellite constellations to predict areas where satellite geometry may lead to higher DOP and poorer accuracy. The number of satellites visible in different systems (GPS, GLONASS, Galileo, etc.) may also be considered as more satellites generally mean better coverage. [00140] Signal multipath effects are another criterion that may be considered. In urban environments, GNSS signals can reflect off surfaces like buildings and roads, leading to multipath errors. Advanced algorithms can be developed to predict areas where these effects are likely to be significant, based on urban layout and the density of reflective surfaces. [00141] Atmospheric conditions are yet another criterion. Since satellite signals degrade as they pass through the atmosphere, real-time atmospheric data (like ionospheric disturbance levels) should be incorporated into the logic. This can be REFERENCE NO: TPI-079PCT US PATENT APPLICATION sourced from weather data and specialized atmospheric monitoring stations. [00142] Machine learning and predictive analysis are other techniques that may be used when evaluating GNSS. Utilizing machine learning algorithms to analyze historical data can significantly enhance the prediction accuracy. These algorithms can identify patterns and correlations between environmental factors and GNSS signal quality, leading to more reliable predictions. 
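A minimal sketch of the kind of good-GNSS detection logic described in this section is given below; the specific thresholds, the parameter names and the choice of inputs (position and velocity standard deviations, RTK status, satellite count and DOP) are illustrative assumptions to be tuned per receiver and application.

```python
def good_gnss_flag(pos_std_m, vel_std_mps, rtk_status, num_sats, hdop,
                   max_pos_std=0.5, max_vel_std=0.2, min_sats=8, max_hdop=2.0):
    """Return True when the current epoch looks suitable for building a sub-map.

    rtk_status is the solution status reported by the receiver, e.g. "fixed" or "float".
    """
    if rtk_status != "fixed":                      # require an RTK fixed solution
        return False
    if pos_std_m > max_pos_std or vel_std_mps > max_vel_std:
        return False                               # reported uncertainty too large
    if num_sats < min_sats or hdop > max_hdop:     # weak constellation geometry
        return False
    return True
```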
[00143] Yet another consideration includes user feedback and crowdsourcing. Incorporating user feedback and crowdsourced data can also improve the logic. Users in different locations can provide real-time feedback on GNSS signal quality, which can be used to update the system continuously. [00144] By considering these additional aspects, the logic for detecting areas with good GNSS reception becomes more dynamic and robust, leading to improved reliability and accuracy to build the semi-global map for the navigation systems from the perception data. 2.5.2 Map Stitching [00145] Map stitching refers to creating a relatively larger map from relatively smaller maps. The map can be generated using perception data which can be from imaging radar, vision, or lidar. The map could be built in the form of 2D or 3D. Different techniques are used to create the larger map from the smaller maps. For example, map stitching can be used to create a semi-global map from a group of smaller maps such as local maps. [00146] 2D map stitching can be done by building the map for a large area from an overlapped or non-overlapped group of sub-maps of small areas. The stitching algorithm accumulates the 2D points or detections, from all sub-maps and combines all of them to make one large map. The 2D stitching algorithm provides a clean map with the proper features to describe the mapped area. Moreover, it can remove any duplication in the detections if found. [00147] On the other hand, the 3D map stitching refers to using multiple 3D semi-global scenes or 3D sub-maps to build a comprehensive view of a large area which is called a global scene or 3D global map. The process begins with the collection of REFERENCE NO: TPI-079PCT US PATENT APPLICATION overlapping semi-global maps which could be built during the real-time navigation session at areas with good GNSS status. The core challenge in 3D map stitching is ensuring alignment and continuity among the individual sub-maps. [00148] Advanced algorithms are employed to detect and match key points or features across the different maps. Once matching points are identified, the maps are geometrically aligned and transformed to ensure that they fit together without visible seams or distortions. This transformation often involves adjusting for differences in scale, orientation, and perspective. [00149] The technology has evolved with advancements in machine learning and artificial intelligence, enabling more sophisticated and automated stitching processes. AI-driven 3D map stitching approaches can leverage the strength of various algorithms and models to address the complexities of working with 3D point clouds, facilitating the creation of larger, detailed, and more accurate 3D map from multiple sub-maps. [00150] An AI-driven technique for 3D map stitching or fusion is considered to deal with 3D point clouds to create a larger, cohesive map from sub-maps. This technique involves the integration of multiple 3D point cloud datasets, which are often captured from different viewpoints or at different times, to construct a comprehensive, unified 3D model of the environment. The AI model can classify, segment, and recognize features within individual point clouds, making it easier to identify matching features across different sub-scenes. By recognizing these correspondences, the AI can accurately align and merge the sub-scenes into a single, coherent 3D environment. 
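The 2D stitching described above, accumulating detections from overlapping sub-maps and removing duplicates, can be sketched as follows; the function name, the use of NumPy and the grid-snapping duplicate-removal rule are illustrative assumptions, and the sub-maps are assumed to already be expressed in a common global frame.

```python
import numpy as np

def stitch_2d_submaps(submaps, dedup_cell_m=0.25):
    """Combine several sub-maps (each an N x 2 array of global x/y detections)
    into one larger 2D map, collapsing near-duplicate detections."""
    points = np.vstack([np.asarray(m, dtype=float) for m in submaps if len(m)])
    # Snap points to a coarse grid so overlapping detections from different
    # sub-maps fall on the same key, then keep one representative per key.
    keys = np.round(points / dedup_cell_m).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]
```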
[00151] To enhance the precision of the stitching process, techniques such as Iterative Closest Point (ICP) algorithm and its variants are often employed in tandem with AI methods. These algorithms iteratively refine the alignment of two-point clouds by minimizing the distance between corresponding points, further improved by machine learning models that predict the best initial alignment based on learned spatial patterns. Once the point clouds are aligned and merged, deep learning can also be applied for post-processing tasks such as denoising, hole filling, and enhancing the resolution of the reconstructed scene. This ensures that the final 3D model is not only accurate in terms of structure and alignment but also exhibits high-quality visual and geometric details. 2.5.3 Projecting 3D Map to 2D Map REFERENCE NO: TPI-079PCT US PATENT APPLICATION [00152] Creating a 2D map from a 3D map through projection is a transformative process that involves translating three-dimensional spatial data into a two-dimensional representation. The projection step is an optional step in the system where it enables the system to be able to use 2D maps. The key challenge in this projection process is to accurately represent the three-dimensional features of the terrain, such as elevation, slopes, and structures, onto a flat surface while minimizing distortion. There are several methods of projection, each with its own approach to handling the complexities of this transformation. Popular methods include orthographic, stereographic, and Mercator projections, each suited to different types of maps and purposes. For instance, orthographic projection is often used for city maps, where maintaining the accurate portrayal of building layouts is essential, while Mercator projections are widely used for navigational maps due to their ability to represent lines of constant course. Projection also allows use of multiple types of sensors that may differ in the dimensions that are perceived, for example radar measurements may be in 2D and scene reconstruction from optical samples or other visual images may be in 3D. [00153] The process begins by selecting the appropriate projection based on the map’s intended use and the area it covers. The 3D data, which may come from sources like perceptions sensors is then processed. This involves mathematical transformations where points from the 3D model are projected onto a 2D plane. This step requires careful handling to preserve important spatial relationships and to ensure scale consistency across the map. One of the critical aspects of this process is dealing with distortions inherent in projecting a curved surface (like the Earth) onto a flat plane. Different projection techniques prioritize preserving different properties, such as area, shape, distance, or direction. For example, some projections maintain area accuracy but distort shapes, particularly near the map edges. The resulting 2D map provides a useful and practical way to visualize and interact with spatial data, making complex 3D information accessible and interpretable for various applications. [00154] The projection from 3D to 2D technique enables the navigation system to overcome the time consuming and costly process of working with 3D maps. This enables the system to convert the 3D scene from the vision system to be used as a 2D map which will accelerate the process for the navigation system. 
In addition, this will enable the use of the vision maps with the 2D radar maps to build the model for the REFERENCE NO: TPI-079PCT US PATENT APPLICATION navigation filter. 2.5.4 Map Storage and Retrieval System [00155] Implementing a retrieval system to dynamically load and unload parts of a large map during runtime is essential for efficient memory management, especially in applications like real-time navigation systems. The key to this system is a technique often referred to as chunked map loading. The large map is divided into smaller, manageable sub-maps, each representing a specific area of the overall map. These sub- maps are stored separately, and the system only loads the sub-maps needed for the current view or operation into memory. This approach significantly reduces memory usage, as only parts of the entire map is loaded at any given time. These concepts may be applied to the reference maps to which the local maps will be compared, either pre- existing global maps as used conventionally or to the online (semi-global) maps built during the current navigation session. The online maps may be built for certain areas under the condition of good GNSS status. A representative routine showing the basic steps for the sub-maps’ retrieval technique is depicted in FIG.8. [00156] To manage the dynamic loading and unloading of the sub-maps, the system utilizes a combination of the current location, view range, and possibly anticipated movement patterns. As the user navigates, the system continuously calculates which group of sub-maps are needed to be used. Sub-maps that fall within the user's current and near-future area of interest are loaded into memory, while sub-maps that are no longer needed are unloaded, freeing up resources. There is a branch in the routine depending on whether an online map has already been built for the current area or not. In case an online map has already been built, the system will manage to load and unload the sub-maps to optimize the memory usage. On the other hand, the system will build a sub-map online for the current area if it is not available. [00157] In addition, an efficient retrieval system would employ caching strategies. Frequently accessed sub-maps might be kept in a cache for quicker access, reducing the need for constant loading and unloading from the main storage. Additionally, predictive algorithms can be used to pre-load sub-maps that the navigation system is likely to need soon, based on their current direction and speed of movement. The retrieval system should operate on a smart, demand-based loading mechanism, REFERENCE NO: TPI-079PCT US PATENT APPLICATION ensuring that only the necessary parts of the map are in memory at any time, thereby optimizing performance and memory usage. This system is decisive for handling large- scale map data efficiently. [00158] An AI-driven map retrieval technique can significantly enhance the loading and unloading of sub-maps based on the user's current location and context. Such an approach leans on predictive loading strategies, where machine learning models are trained to anticipate the user's movement and behavior patterns within a virtual environment, thus pre-loading adjacent sub-maps that are likely to be accessed next. [00159] To optimize the efficiency of loading and unloading operations, the retrieval technique could incorporate some concepts to minimize the computational overhead associated with these operations. 
Spatial indexing technique is one potential advanced concept that can enhance the retrieval technique efficiency by managing and accessing the map parts or sub-maps. The system may employ spatial indexing techniques like Quadtree or R-trees. These indexing methods allow the system to quickly locate and retrieve the relevant sub-maps based on the user's current position and view. Another optional concept is Level of Detail (LOD) management, in which the system can store multiple versions of each sub-map at varying resolutions in the system using different zoom levels. As the user zooms in or out, the system loads the appropriate LOD, ensuring that the map remains clear and informative without overloading the memory with unnecessary detail. Further, predictive loading algorithms may be used to determine which group of sub-maps the user might need next, which can greatly enhance the responsiveness of the system. These algorithms can analyze the user's current direction, speed, and typical usage patterns to preload sub-maps just beyond the current view. Still further, an intelligent cache management strategy may be used for reducing load times and bandwidth usage. Frequently used sub-maps can be stored in a local cache. The system should also implement a cache eviction policy, determining which sub-maps to keep and which to discard based on usage patterns. Yet another concept that can be applied for global, preexisting maps is network optimization, which may include using data compression techniques to reduce the size of the sub-maps transmitted and implementing a robust error-handling mechanism to manage network failures or slow connections. Moreover, adaptive quality adjustments may be applied in scenarios with limited memory or bandwidth, so that the system can REFERENCE NO: TPI-079PCT US PATENT APPLICATION dynamically adjust the quality of the sub-maps. For instance, in a low-memory situation, the system could load lower-resolution sub-maps to conserve resources. By integrating these advanced aspects, the retrieval system for swapping map parts becomes highly efficient and adaptable, capable of providing a smooth and interactive experience across a range of applications and use cases. 2.5.5 Example System [00160] Accordingly, FIG.9 shows a system architecture that incorporates the above map retrieval techniques. As indicated, the navigation system uses any available data from the basic sensors such as IMU, GNSS, Odometer, and Barometer. The data from the perception sensors are used to update the system. Perception data can provide an update during the challenging area when manipulated and processed by the navigation system. The online map built during this process may be used as a reference map together with the local map. The retrieval system helps with improving and optimizing the memory management for the system. The navigation filter update could happen using 2D maps or 3D maps during the process for the navigation states estimation. The whole map that covered the area where the system is moving along is divided into sub-maps to be manageable in terms of size and number of features to be processed. 2.5.6 Depth Estimation From Optical Samples [00161] One exemplary system architecture for implementing the techniques of this disclosure using optical samples is schematically depicted in FIG.10. As shown, input information comes from the at least one optical sensor, such as perception sensor 112 or perception sensor 222 in the embodiments discussed above. 
In this architecture, different methods may be employed for determining depth information for objects within the optical samples. The upper branch shown in FIG. 10 represents estimating depth to provide depth data for objects detected within the optical samples. From the depth estimation, range, bearing and elevation for the objects may be extracted and fed to the nonlinear state estimation technique. In some embodiments, the depth information can be estimated using neural network techniques as discussed below. Alternatively or in addition, the middle branch in FIG.10 can be employed when depth readings are directly available from the at least one optical sensor, such as when using REFERENCE NO: TPI-079PCT US PATENT APPLICATION stereo sensors or when a stream of samples are available. Similarly, range, bearing and elevation for objects within the samples may be extracted and fed to the nonlinear state estimation technique. Neural network techniques can also be applied. The lower branch involves reconstructing a scene representing a local environment surrounding the platform based on information from the at least one optical sensor, including the depth information discussed above. Then, the reconstructed local map can be compared to the global map of the obtained map information and the correlations fed to the nonlinear state estimation technique. Once more, neural network techniques can be employed during scene reconstruction. To help illustrate, two examples of scene reconstruction are shown in FIGs. 11 and 12, with the top view in each depicting the respective optical sensor samples and the bottom view depicting the reconstructed scenes. [00162] Returning to FIG.10, the nonlinear state estimation technique also employs reference map information, such as the online map built as discussed throughout this disclosure, and motion sensor data for the platform. In some embodiments, odometry information and/or absolute navigational information can also be fed to the nonlinear state estimation technique if available. These additional sources of information are optional as indicated by the dashed boxes. Based on the tight integration of this input information, the nonlinear state estimation technique can then provide an integrated navigation solution for the platform or a revised integrated navigation solution, as indicated by the outputs of position, velocity and/or attitude. [00163] As noted above, the techniques of this disclosure can benefit from the use of neural networks when processing the optical samples. In particular, neural networks and deep learning may help mitigate some of the drawbacks of other depth- from-vision techniques. For example, the depth information can be learned from stereo images or a stream of images from a monocular camera, so that neural network and deep learning is used to estimate the depth in real-time using either the same sensor used during training or a different one. As an illustration, a stereo optical sensor may be used during training and a monocular optical sensor may be used in real-time. As used herein, the term “deep neural network” refers to a neural network that has multiple layers between the input and output layers. One suitable deep neural network is a convolutional neural network (CNN) as schematically represented in FIG.13. 
The alternating operations of using convolutions to produce feature maps and reducing the REFERENCE NO: TPI-079PCT US PATENT APPLICATION dimensionality of the feature maps with subsamples leads to a fully connected output that provides the classification of the object. The depth of the neural network is governed by the number of filters used in the convolution operations, such that increasing the number of filters generally increases the number of features that can be extracted from the optical sample. Another suitable deep neural network is a recurrent neural network (RNN) as schematically represented in FIG.14. The left side of the figure shows the input, x, progressing through the hidden state, h, to provide the output, o. U, V, W are the weights of the network. The connections between nodes form a directed graph along a temporal sequence as indicated by the unfolded operations on the right side of the figure. 3 System Integration [00164] From the above discussion, it will be appreciated that embodiments of the state estimation techniques of this disclosure may employ a reference map, such as the online map that is built, and a local map for its operation. The reference map, whether an online semi-global map or a pre-existing global map, may be a group of sub-maps from the large map that covers the whole area. If the online (semi-global) map is available, it can be considered as the reference map. In case the online map is large enough to warrant usage of the storage and retrieval operations discussed above, it can be divided into sub-maps and one of the sub-maps will be used as the reference map for the navigation filter. The selection of the portion of the reference map to be passed to the navigation filter is based on the current location. Reference maps can be built in real-time during the navigation session or offline. On the other hand, the local map is defined as the map generated from the current detections form the perception sensors based on the current navigation states. [00165] Navigation filter measurement model used in the state estimation technique can use different maps from different perception resources, whether 2D or 3D maps. The 2D maps can be obtained from the 3D maps using the projection technique. The navigation filter can work with different combination of map’s sources. The filter can use the map built from same perception sensors or a combination from different perception sensors. In one case, the navigation filter can use 2D/3D online map from radar while using 2D/3D local map from radar or vision sensors. As another possibility, the navigation filter can use 2D/3D online map from vision while using 2D/3D local REFERENCE NO: TPI-079PCT US PATENT APPLICATION map from radar or vision sensors. The local map size depends on the number of detections returned from the perception sensor per scan. [00166] The navigation system in this work has benefits over other techniques that use maps. The system has benefits over perception-based odometry solution (for example, Visual Odometery (VO) or Radar Odometery (RO)) or perception-based inertial odometry solution (for example, Visual Inertial Odometery (VIO)). The system may provide an absolute position for update while the other mentioned methods provide a relative information. [00167] Another characteristic of the system as discussed above is the ability to decide to create or build the surrounding map based on favorable conditions being satisfied. 
For example, if the system is utilized in an area with good environment conditions, such as a good GNSS signal, it may not need to build a map for the environment.

[00168] Moreover, the system can work without a pre-built map. The system can build the online map for the surrounding area during the navigation session. The system can provide the online map alongside the local map to the navigation algorithm to provide the navigation solution.

3.1 Measurement Model Embodiments

[00169] As noted above, a nonlinear measurement model of the perception samples is used to directly update the nonlinear state estimation technique used to provide the integrated navigation solution. As one example, the perception sensor data comprises information for a given sample covering the field of view of the perception sensor. A measurement vector may be denoted by the set $z_t = \{z_t^1, z_t^2, \ldots, z_t^K\}$, where $K$ is the total number of measurements per sample. In a 2D perception sensor system, measurement is usually along the azimuth angle; therefore, $z_t^k$ represents the measured range at the $k$-th bearing angle. In a 3D perception sensor system, the measurement vector can be represented by the 2D list $z_t = \{z_t^{(1,1)}, \ldots, z_t^{(A,E)}\}$, where $A$ and $E$ represent the number of scanning bins in the azimuth and elevation angles, respectively. The Markov property assumes that there is no dependence between the errors in measurements over time. The aim is to model the probability of a measurement denoted by $z_t$, given the knowledge of the map $m$ and the state of the vehicle at time $t$ denoted by $x_t$. It may be assumed that the measurements at different angles from one sample are independent (the error in distance range at one angle is independent of the error in distance range at another angle). The following discussion is in the context of a 2D perception system; however, the 3D perception model can be extended from the 2D perception measurement model easily. The probability of the measurement vector $z_t$, given $m$ and $x_t$, may be represented as $p(z_t \mid x_t, m) = \prod_{k=1}^{K} p(z_t^k \mid x_t, m)$.

[00170] Given the independence assumption between measurements, the probability of a sample is represented as the multiplication of the probabilities of each individual measurement given the knowledge of the map and the state of the vehicle. These assumptions are used to simplify the modelling process. Four different perception measurement models using different modelling techniques are detailed below, but other models may be employed as desired.

3.1.1 Range-based Model Embodiments

[00171] One suitable perception measurement model is a range-based measurement model. Based on the true range between a static perception sensor and a static target, the model describes the error distribution of the depth data from the estimated depth or the depth readings for this range (given that multiple measurements of the same static target are available), as expressed by the probability density function $p(z_t^k \mid x_t, m)$. In other words, given the knowledge of the map (whether a feature-based or location-based map) and an estimate of the state of the platform, what is the probability of the measured range? Here, $k$ refers to the $k$-th range at a certain (azimuth/elevation) angle. Deriving $p(z_t \mid x_t, m)$ from $p(z_t^k \mid x_t, m)$ is a matter of multiplication of the probabilities of all ranges.
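The independence assumption above factorizes the sample likelihood into a product of per-measurement likelihoods, which is conveniently evaluated in the log domain; the sketch below assumes a generic per-beam model passed in as a callable, and the small probability floor is an illustrative numerical safeguard.

```python
import math

def scan_log_likelihood(ranges, bearings, state, online_map, beam_model):
    """log p(z_t | x_t, m) under the per-beam independence assumption.

    beam_model(r, bearing, state, online_map) returns p(z_t^k | x_t, m) for one
    measurement, using any of the per-measurement models described herein.
    """
    log_p = 0.0
    for r, b in zip(ranges, bearings):
        p = beam_model(r, b, state, online_map)
        log_p += math.log(max(p, 1e-12))  # floor keeps one bad beam from dominating
    return log_p
```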
3.1.1.1 Ray Casting

[00172] For range-based modelling, the true range to the target in the map (this may include a feature-based map, a location-based map or both) may be computed based on the estimate of the state of the platform. This is done by using ray casting (or ray tracing) algorithms, denoting the true range by $z_t^{k,true}$. To estimate the range between the vehicle and the target in a map, a ray may be simulated to move in a straight line until it either hits a target in the map or exceeds a certain distance. The ray's direction in 3D is based on the reported state of the platform (which may include position and orientation and may also be termed "pose") and the bearing of this specific measurement. A conversion between the perception sensor coordinate frame and the vehicle coordinate frame may establish the starting point and direction of the ray relative to the state of the vehicle. Using this technique, the true range $z_t^{k,true}$ from the perception sensor to the target may be found. It is important to note that a target must be in the map for this operation to make sense. If the target is not in the map (e.g., another moving vehicle or a pedestrian), it should be detected before the ray casting algorithm is called. For now, only static targets will be considered by the ray casting model.

[00173] A schematic depiction of an architecture using a range-based perception model to estimate the probability of the current state of the platform is shown in FIG. 15. Here, particle state refers to the state of the vehicle at time $t$. The input to the system is the perception range measurements along with their respective bearing and elevation angles, such as depth data from the estimated depth or the depth readings. Given the state of the platform and the reference map, such as the online map built according to the techniques of this disclosure, ray casting may be used to estimate $p(z_t \mid x_t, m)$. Finally, to estimate the probability of the current state, denoted by $bel(x_t)$, the belief in the current state is proportional to the probability of the measurement given the state $x_t$ and the map, times the prior probability of the previous state denoted by $bel(x_{t-1})$.
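A minimal 2D version of the ray casting operation described above, applied to an occupancy-grid style map, could look like the following; the step-wise ray marching, the grid layout and the parameter names are illustrative assumptions (practical implementations typically use faster cell-traversal schemes).

```python
import math

def ray_cast_true_range(grid, cell_m, origin_xy, sensor_xy, bearing,
                        max_range_m, step_m=0.1):
    """Simulate a ray from the sensor position along the measurement bearing
    until it hits an occupied cell or exceeds max_range_m.

    grid : 2D array-like with 1 for occupied cells and 0 otherwise.
    Returns the true range z_t^{k,true} to the first mapped target, or
    max_range_m if no target is hit.
    """
    x, y = sensor_xy
    dx, dy = math.cos(bearing) * step_m, math.sin(bearing) * step_m
    travelled = 0.0
    while travelled < max_range_m:
        x, y = x + dx, y + dy
        travelled += step_m
        col = int((x - origin_xy[0]) / cell_m)
        row = int((y - origin_xy[1]) / cell_m)
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            break                      # ray left the mapped area
        if grid[row][col] == 1:        # hit a static target stored in the map
            return travelled
    return max_range_m
```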
Hence, the probability distribution of the perception measurement model can be modelled, for the range $[z_{min}, z_{max}]$ and zero otherwise, with the numerator being a normally distributed random variable with mean $r_t^{k,true}$ and standard deviation $\sigma_{meas}$:
$$p(z_t^k \mid x_t, m) = \frac{\mathcal{N}\!\left(z_t^k;\, r_t^{k,true},\, \sigma_{meas}^2\right)}{\int_{z_{min}}^{z_{max}} \mathcal{N}\!\left(z;\, r_t^{k,true},\, \sigma_{meas}^2\right) dz}$$
[00176] The denominator normalizes the probability density function $p(z_t^k \mid x_t, m)$ and is computed by integrating the numerator (i.e., the normal distribution) within the perception sensor's coverage range $[z_{min}, z_{max}]$, leaving $\sigma_{meas}$ as the only missing parameter to define. [00177] Different sources of error have different effects on the variance of the measurement model. The three sources of error identified above may be considered when building a probabilistic measurement model. The first source of errors is environmental factors, including weather conditions like rain, fog and snow. The second source of error is inherent in the design of the perception sensor itself. This error can be modelled using $\sigma_{design}$. The final source of error is related to the dynamics of the perception sensor relative to a static or dynamic target. Generally, these errors reflect the effect of position, speed and direction of motion of the target on the range estimation accuracy. For example, some perception sensors may exhibit greater aberrations at the periphery of the field of view. Moreover, the range estimation accuracy might also be affected by the speed of the vehicle. Hence, these errors can also be modelled using the standard deviation of our adaptive model. It is also worth noting that the speed of the target relative to the perception sensor has the same effect on the bearing estimation. [00178] Taking the above factors into consideration, one suitable Adaptive Measurement Model (AMM) is schematically depicted in FIG. 16. As shown, each block on the right-hand side represents one of the three different factors that might affect the variance of the measurement model, namely Environmental factors, Perception Sensor Design factors and (platform) Dynamics factors. For example, a non-linearity of the perception sensor design affecting the range estimation has a variance of $\sigma_{design}^2$. As another example, in rainy environmental conditions, an error would be added to the range measurement with variance denoted by $\sigma_{env}^2$. These variances are sent to the Combiner as indicated, with the assumption that both errors are independent random variables. Hence, the measured range can be represented as $z_t^{k,meas} = r_t^{k,true} + \varepsilon_{design} + \varepsilon_{env}$, where $\varepsilon_{design} \sim \mathcal{N}(\mu_{design}, \sigma_{design}^2)$ and $\varepsilon_{env} \sim \mathcal{N}(\mu_{env}, \sigma_{env}^2)$, so that the combined error is represented as $\varepsilon_t^k \sim \mathcal{N}(\mu_{design} + \mu_{env},\, \sigma_{design}^2 + \sigma_{env}^2)$. To generalize, for $n$ independent error sources the distribution of the perception measurement error is given by $\varepsilon_t^k \sim \mathcal{N}\!\left(\sum_{i=1}^{n} \mu_i,\, \sum_{i=1}^{n} \sigma_i^2\right)$, with the Combiner block estimating the resulting measurement model based on the availability of error sources. 3.1.1.3 Parameter Estimation Embodiments [00179] Notably, building the AMM model may involve identifying the mean and variance of each source of error. Once these parameters are estimated, an approximate PDF for the current measurement may be built. To do so, either a field expert's intuition or designed experiments may be used to collect data, depending on the source of error, and then an attempt is made to find the best normal distribution that fits the collected data. Once the best-fitting distribution is found, the mean and variance of the error source under investigation may be extracted. This mean and variance can then be saved for use when the same road conditions apply.
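For illustration, the following is a minimal, hedged Python sketch (not from the original disclosure) combining the two steps above: ray casting through an occupancy-grid map to obtain the true range for one bearing, and evaluating the truncated-Gaussian beam likelihood with a variance formed, as by the Combiner, from the sum of the variances of independent error sources. The grid conventions, step size, function names and numerical tolerances are assumptions; nonzero error-source means could be added to the Gaussian mean in the same way.

```python
import numpy as np
from scipy.stats import norm

def ray_cast(grid, origin_xy, heading, cell_size, z_max):
    """Step a ray through a 2D occupancy grid until it hits an occupied cell or z_max."""
    step = cell_size * 0.5
    for r in np.arange(0.0, z_max, step):
        x = origin_xy[0] + r * np.cos(heading)
        y = origin_xy[1] + r * np.sin(heading)
        i, j = int(y / cell_size), int(x / cell_size)
        if i < 0 or j < 0 or i >= grid.shape[0] or j >= grid.shape[1]:
            break
        if grid[i, j] > 0.5:          # occupied cell: the ray hit a mapped target
            return r
    return z_max                       # no mapped target within sensor coverage

def adaptive_range_likelihood(z_meas, r_true, var_sources, z_min, z_max):
    """Truncated-Gaussian beam likelihood; the variances of the independent error
    sources (environment, sensor design, dynamics) are summed, as in the Combiner."""
    mu = r_true
    sigma = np.sqrt(sum(var_sources))
    if not (z_min <= z_meas <= z_max):
        return 0.0
    num = norm.pdf(z_meas, loc=mu, scale=sigma)
    den = norm.cdf(z_max, loc=mu, scale=sigma) - norm.cdf(z_min, loc=mu, scale=sigma)
    return num / max(den, 1e-12)
```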
3.1.2 Nearest Object Likelihood Model Embodiments [00180] Another suitable perception measurement model is a nearest object likelihood (NOL) measurement model that features reduced computational complexity as compared to the ray casting for each possible state used in the range-based measurement model. Under this approach, the perception sensor position is first converted from the local frame to the global frame. Assuming the perception sensor is positioned at $(x_{per}^l, y_{per}^l, z_{per}^l)$ relative to the platform's local frame, and the current state of the platform in the global frame is denoted by $(x_t, y_t, z_t)$, then the position of the perception sensor in the global frame can be represented as
$$\begin{bmatrix} x_{per,t} \\ y_{per,t} \\ z_{per,t} \end{bmatrix} = \begin{bmatrix} x_t \\ y_t \\ z_t \end{bmatrix} + R_l^g \begin{bmatrix} x_{per}^l \\ y_{per}^l \\ z_{per}^l \end{bmatrix},$$
where the rotation matrix $R_l^g$ is the rotation from the local frame to the global frame, formed from the platform's attitude angles. [00181] The perception sensor position in the global frame may be denoted $(x_{per,t}, y_{per,t}, z_{per,t})$, which can be correlated to a position in the map information. The next step is to project the position of the perception sensor in the global frame based on the current measurement $z_t^k$. By doing so, the position of the object that was detected by the perception sensor resulting in the current measurement is estimated in the global frame. Assuming that the perception measurement $z_t^k$ is measured at azimuth and elevation angles of $\theta_{az,t}^k$ and $\theta_{el,t}^k$ respectively, the 3D projection of the measurement $z_t^k$ in the map can be represented by
$$\begin{bmatrix} x_{obj,k} \\ y_{obj,k} \\ z_{obj,k} \end{bmatrix} = \begin{bmatrix} x_{per,t} \\ y_{per,t} \\ z_{per,t} \end{bmatrix} + R_l^g \begin{bmatrix} z_t^k \cos(\theta_{el,t}^k)\cos(\theta_{az,t}^k) \\ z_t^k \cos(\theta_{el,t}^k)\sin(\theta_{az,t}^k) \\ z_t^k \sin(\theta_{el,t}^k) \end{bmatrix}.$$
[00182] Then, the nearest object in the map to the projected position denoted by $(x_{obj,k}, y_{obj,k}, z_{obj,k})$ may be searched. Here, it is assumed that the likelihood $p(z_t^k \mid x_t, m)$ is equal to a Gaussian distribution, with zero mean and variance $\sigma_{NOL}^2$, of the error in Euclidean distance between the projected position and the nearest object in the map. Hence, the probability of a set of measurements given the state of the vehicle and a map can be represented as $p(z_t \mid x_t, m) \propto \prod_{k=1}^{K} p(d = d_{min}^k)$, where $d \sim \mathcal{N}(0, \sigma_{NOL}^2)$ and $d_{min}^k$ is the distance between the projected position of measurement $k$ and the nearest object in the map. [00183] One suitable architecture for estimating the probability of a platform state based on a perception NOL measurement model to estimate belief in the current state of the vehicle is schematically depicted in FIG. 17. Here, particle state refers to the pose of the vehicle at time $t$. The input to the system is the perception range measurements along with their respective bearing and elevation angles. Given the state of the vehicle and the reference map, such as the online map built according to the techniques of this disclosure, the perception sensor's pose is projected in 3D space for each measurement. Based on the distance to the nearest objects from each projection, $p(z_t \mid x_t, m)$ is estimated. The probability of the current state denoted by $bel(x_t)$ is correspondingly proportional to the probability of the measurement given the state $x_t$ and the map, times the prior probability of the previous state denoted by $bel(x_{t-1})$. As will be appreciated, the error compensation techniques discussed above for a range-based measurement model, such as those discussed in Section 3.1.1.2, may also be applied to the NOL measurement model.
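The following is a small, illustrative Python sketch (not from the original disclosure) of the NOL likelihood for one sample: each measurement is projected into the global frame and scored by the distance to the nearest static map object. The use of SciPy's cKDTree, the array layouts and the log-domain product are implementation assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def nol_log_likelihood(ranges, az, el, sensor_pos_g, R_l_to_g, map_points, sigma_nol):
    """Nearest-object likelihood of one sample: project each measurement into the
    global frame and score the Euclidean distance to the nearest mapped object."""
    tree = cKDTree(map_points)                 # static map objects, shape (N, 3)
    # Spherical-to-Cartesian projection of each range in the sensor/local frame.
    local = np.stack([ranges * np.cos(el) * np.cos(az),
                      ranges * np.cos(el) * np.sin(az),
                      ranges * np.sin(el)], axis=1)
    proj = sensor_pos_g + local @ R_l_to_g.T   # projected object positions, global frame
    d_min, _ = tree.query(proj)                # distance to nearest map object per beam
    # Zero-mean Gaussian on the projection error; product over beams (independence).
    return np.sum(-0.5 * (d_min / sigma_nol) ** 2
                  - np.log(np.sqrt(2.0 * np.pi) * sigma_nol))
```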
3.1.3 Map Matching-based Model Embodiments [00184] Yet another suitable perception measurement model is a map matching model that features the capability of considering objects that are not detected by the perception sensor, by matching between a local map and a reference map, such as a conventional global map or an online map built according to the techniques of this disclosure, when assessing the likelihood of a sample given knowledge of the platform state and the map. The local map is defined as a map created based on perception sensor samples, such as through scene reconstruction as discussed above. The reference map can either be a feature-based or location-based map as discussed above. As one example, assume a grid-map of the environment encompassing the platform is denoted by $m$ and the measurement $z_t$ is converted into a local grid-map denoted by $m_{loc}$. The local map must be defined in the global coordinate frame using the rotational matrix $R_l^g$. The representation of what is in the grid cells of both the reference and local map may reflect whether an object exists in the cell (Occupancy Grid Map (OGM)). [00185] Assuming a 3D map, the centers of the grid cells in the reference and local map can be denoted by $m_{x,y,z}$ and $m_{loc,x,y,z}$ respectively. The linear correlation can be used to indicate the likelihood of the current local map matching the reference map, given the current state of the platform. By denoting the mean of the relevant section of the reference map by $\bar{m}$, the mean of the local map by $\bar{m}_{loc}$, and the standard deviations of the reference and local map as $\sigma_m$ and $\sigma_{m_{loc}}$ respectively, the correlation coefficient between the local and reference map is then represented by:
$$p(m_{loc} \mid x_t, m) = \rho_{m,m_{loc}} = \frac{\sum_{x,y,z} (m_{x,y,z} - \bar{m})(m_{loc,x,y,z} - \bar{m}_{loc})}{N \,\sigma_m \,\sigma_{m_{loc}}},$$
where $N$ is the number of overlapping grid cells. The correlation may be assumed to be equal to 0 when it is negative, since only the existence of positive correlation or no correlation at all is relevant, allowing the likelihood of the measurement to be represented as $p(z_t \mid x_t, m) \propto \max(\rho_{m,m_{loc}}, 0)$. [00186] A schematic illustration of one possible architecture for a perception map matching measurement model that may be used to estimate belief in the current state of the platform is depicted in FIG. 18. Here, particle state refers to the state of the platform at time $t$. The input to the system is the perception range measurements along with their respective bearing and elevation angles. Given the state of the vehicle and the perception sensor data, a local map (e.g., a 2D or 3D occupancy grid) denoted by $m_{loc}$ may be built. Then, using the same representation of the reference map denoted by $m$, such as the online map built according to the techniques of this disclosure, the correlation between the reference map and the local map may be computed and used as an indicator for $p(z_t \mid x_t, m)$. Finally, to estimate the probability of the current state denoted by $bel(x_t)$, the belief in the current state is proportional to the probability of the measurement given the state $x_t$ and the map, times the prior probability of the previous state denoted by $bel(x_{t-1})$. As will be appreciated, the error compensation techniques discussed above for a range-based measurement model, such as those discussed in Section 3.1.1.2, may also be applied to the map matching measurement model.
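A minimal Python sketch of the map matching correlation follows (an illustration, not the original implementation). It assumes the reference and local occupancy grids have already been expressed in the same global-frame representation for the particle's pose; the masking of overlapping cells and the function names are assumptions.

```python
import numpy as np

def map_matching_likelihood(ref_map, local_map, mask=None):
    """Correlation-based likelihood between a reference occupancy grid and a local
    occupancy grid built from the current perception sample."""
    if mask is None:
        mask = np.ones_like(ref_map, dtype=bool)   # overlapping-cell mask
    m = ref_map[mask].astype(float)
    m_loc = local_map[mask].astype(float)
    dm, dl = m - m.mean(), m_loc - m_loc.mean()
    denom = np.sqrt(np.sum(dm ** 2) * np.sum(dl ** 2))
    if denom < 1e-12:
        return 0.0
    rho = np.sum(dm * dl) / denom
    return max(rho, 0.0)      # clip negative correlation, as in the model above
```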
3.1.4 Closed-Form Model Embodiments [00187] A further example of a perception measurement model that may be used in the techniques of this disclosure is a stochastic, closed-form model that relates the perception measured ranges to the true ranges as a function of the states of the platform, in contrast to the probabilistic approaches discussed above. The closed-form perception measurement models do not include a deterministic relation between the measured range and the true range. To provide a relation between the state of the platform and the measured ranges, a first correspondence/association is determined by assuming that objects detected by the perception sensor are identified uniquely as objects in the map. Knowing which object in the map is detected by the perception sensor, together with an estimate of the range to that object, provides the correspondence. There are several approaches to solving the correspondence problem. For example, if the map is represented as a feature-map containing a set of objects, every object has a unique signature with respect to the perception sensor, and hence the object detected can be inferred by comparing the perception sensor signature to the object signatures; if they match, it may be assumed that the object detected by the perception sensor is the object that maximizes the correlation between the perception signature and a specific map object. If signatures from several objects lead to the same (or nearly the same) correlation, the search area can be limited to a smaller cone centered around the reported azimuth and elevation angle of the perception sensor. Other approaches to solving the correspondence problem include using the perception sensor to aid in classifying the type of object detected (e.g., a speed limit road sign versus another type of traffic sign) and thus limiting the search to objects of the same kind and in the platform's vicinity. As will be appreciated, the error compensation techniques discussed above for a range-based measurement model, such as those discussed in Section 3.1.1.2, may also be applied to the closed-form measurement models. [00188] Assuming that a single perception measurement can be represented by $z_t^k = \{r_t^k, \theta_{az,t}^k, \theta_{el,t}^k\}$, where $r_t^k$ is the measured range to an object (with known correspondence) in the map, positioned in the global frame at $\{x_{obj}^k, y_{obj}^k, z_{obj}^k\}$, and $\theta_{az,t}^k$ and $\theta_{el,t}^k$ are the azimuth and elevation angles of the main lobe relative to the centerline, the global position of the perception sensor can be represented as:
$$\begin{bmatrix} x_{per,t} \\ y_{per,t} \\ z_{per,t} \end{bmatrix} = \begin{bmatrix} x_t \\ y_t \\ z_t \end{bmatrix} + R_l^g \begin{bmatrix} x_{per}^l \\ y_{per}^l \\ z_{per}^l \end{bmatrix}.$$
The relationship between the measurement denoted by $z_t^k$ and the states of the vehicle can be expressed by a set of equations of the form $r_t^k = r_t^{k,true} + \varepsilon_{r,t}^k$, with corresponding expressions for the measured azimuth and elevation angles and their respective errors. Moreover, $r_t^{k,true}$ is the error-free range to the detected object, and $A_t$ and $p_t$ are the azimuth and pitch of the vehicle at time $t$. [00189] One embodiment of a closed-form perception measurement model is schematically depicted in FIG. 19. A first correspondence between the objects detected and classified by the perception sensor and objects in the reference map, such as the online map built according to the techniques of this disclosure, is required. Resolving the perception/map correspondence leads to knowing the position of objects in the global frame (absolute position). Correspondingly, the absolute positioning of the objects and their ranges may be used to build a closed-form measurement model as a function of the states of the platform.
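As an illustration of the closed-form relation between the platform states and a measurement with known correspondence, the following hedged Python sketch (not from the original disclosure) predicts the error-free range, azimuth and elevation to an identified map object from the platform position, attitude rotation matrix and sensor lever arm; the measured quantities would then equal these predictions plus the stochastic error terms above. The frame conventions and names are assumptions.

```python
import numpy as np

def predict_range_bearing(platform_pos_g, R_l_to_g, sensor_offset_l, obj_pos_g):
    """Closed-form prediction of the error-free range, azimuth and elevation to a
    map object with known correspondence, as a function of the platform state."""
    sensor_pos_g = platform_pos_g + R_l_to_g @ sensor_offset_l   # sensor in global frame
    d = obj_pos_g - sensor_pos_g                                  # line of sight, global frame
    rng = np.linalg.norm(d)
    d_local = R_l_to_g.T @ d                                      # line of sight in local frame
    azimuth = np.arctan2(d_local[1], d_local[0])
    elevation = np.arcsin(d_local[2] / max(rng, 1e-9))
    return rng, azimuth, elevation
```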
[00190] Another embodiment of a closed-form measurement model that employs information from radar and another type of perception sensor is schematically depicted in FIG. 20. Suitable types of perception sensors include an optical camera, a thermal camera and an infra-red imaging sensor. Images or other samples from the perception sensors may be used to detect and classify objects. A first correspondence is then determined by associating the ranges from the radar with the classified objects. Next, a second correspondence is determined between the objects detected and classified by the perception sensor (labelled with ranges) and objects in the reference map, such as the online map built according to the techniques of this disclosure. Resolving the camera/map correspondence leads to knowing the position of objects in the global frame (absolute position). As such, the absolute positioning of the objects and their ranges may be used to build a closed-form measurement model as a function of the states of the platform. 3.2. System Model Embodiments [00191] As discussed above, another aspect of the techniques of this disclosure is the use of a state estimation technique to provide the navigation solution that integrates perception sensor data with the motion sensor data. The following materials discuss exemplary nonlinear system models as well as using another integrated navigation solution through another state estimation technique. In one embodiment, a nonlinear error-state model can be used to predict the error-states and then use the error-states to correct the actual states of the vehicle. Alternatively, in some embodiments, a linearized error-state model may be used. In another embodiment, a nonlinear total-state model can be used to directly estimate the states of the vehicle, including the 3D position, velocity and attitude angles. In yet another embodiment, the solution from another state estimation technique (another filter) that integrates INS and GNSS (or another source of absolute navigational information) may be used to feed the system model for the state estimation technique at hand. 3.2.1 Nonlinear Error-State Model Embodiments [00192] In the present example, a three-dimensional (3D) navigation solution is provided by calculating the 3D position, velocity and attitude of a moving platform. The relative navigational information includes motion sensor data obtained from MEMS-based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, such as sensor assembly 106 of device 100 in FIG. 1. Likewise, host processor 102 may implement integration module 114 to integrate the information using a nonlinear state estimation technique, such as for example, Mixture PF having the system model defined herein below.
Navigation Solution [00193] The state of device 100 whether tethered or non-tethered to the moving platform is x ^^ ^^ , ^ k , E N U k k k k k k k ^ T k k h , v , v , v , p , r , A ^ , where ^ k is the latitude of the E vehicle, ^ k is the longitude,h k is the altitude, vk is the velocity along East direction, N is the velocity along North direction, k is the velocity along Up vertical direction, p k is the pitch angle, r k is the roll angle, and A k is the azimuth angle. Since this is an error-state approach, the motion model is used externally in what is called inertial mechanization, which is a nonlinear model as mentioned earlier, the output of this model is the navigation states of the device, such as position, velocity, and attitude. The state estimation or filtering technique estimates the errors in the navigation states obtained by the mechanization, so the estimated state vector by this state estimation or filtering technique is for the error states, and the system model is an error-state system model which transition the previous error-state to the current error-state. The mechanization output is corrected for these estimated errors to provide the corrected navigation states, such as corrected position, velocity and attitude. The estimated error- state is about a nominal value which is the mechanization output, the mechanization can operate either unaided in an open loop mode, or can receive feedback from the corrected states, this case is called closed-loop mode. The error-state system model commonly used is a linearized model (to be used with KF-based solutions), but the work in this example uses a nonlinear error-state model to avoid the linearization and approximation. [00194] The motion model used in the mechanization is given by ^^ = ^^^^^ (^^^^, ^^^^) where ^^^^is the control input which is the inertial sensors readings that correspond to transforming the state from time epoch k ^ 1 to time epoch k , this will be the convention used in this explanation for the sensor readings for nomenclature purposes. REFERENCE NO: TPI-079PCT US PATENT APPLICATION The nonlinear error-state system model (also called state transition model) is given by ^ where ^^ is the process past and present states and accounts for the uncertainty in the platform motion and the control inputs. The measurement model is ^^^ = (^^^, ^^) where ^^ is the measurement noise which is independent of the past and current states and the process noise and accounts for uncertainty in the perception sensor data. [00195] A set of common reference frames is used in this example for demonstration purposes, other definitions of reference frames may be used. The body frame of the platform has the X-axis along the transversal direction, Y-axis along the forward longitudinal direction, and Z-axis along the vertical direction of the vehicle completing a right-handed system. The local-level frame is the ENU frame that has axes along East, North, and vertical (Up) directions. The inertial frame is Earth-centered inertial frame (ECI) centered at the center of mass of the Earth and whose the Z-axis is the axis of rotation of the Earth. The Earth-centered Earth-fixed (ECEF) frame has the same origin and z-axis as the ECI frame but it rotates with the Earth (hence the name Earth-fixed). Mechanization [00196] Mechanization is a process of converting the output of inertial sensors into position, velocity and attitude information. 
Mechanization is a recursive process which processes the data based on previous output (or some initial values) and the new measurement from the inertial sensors. The rotation matrix that transforms from the vehicle body frame to the local-level frame at time k ^ 1 is ^ cos A k ^ 1 cos r k ^ 1 ^ sin A k ^ 1 sin p k ^ 1 sin r k ^ 1 sin k ^ 1 cos p k ^ 1 cos A k ^ 1 sin r k ^ 1 ^ sin A k ^ 1 sin p k ^ 1 cos r k ^ 1 ^ R ^ ^ ^ ^ sin A cos r ^ cos A sin p sin r cos A c ^ b , k ^ 1 ^ k ^ 1 k ^ 1 k ^ 1 k ^ 1 k ^ 1 k ^ 1 os p k ^ 1 ^ sin A k ^ 1 sin r k ^ 1 ^ cos A k ^ 1 sin p k ^ 1 cos r k ^ 1 ^ ^ ^ ^ ^ and the mechanization version is REFERENCE NO: TPI-079PCT US PATENT APPLICATION [00197] To describe the mechanization nonlinear equations, which is here the motion model for the navigation states, the control inputs are first introduced. The measurement provided by the IMU is the control input; ^ = ^^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^^^ ^^^ ^^^ ^^^ ^^^ ^^^ ^^^^ ^ where f x y z x y k ^ 1 , f k ^ 1 , and f k ^ 1 are the readings of the accelerometer triad, and ^k ^ 1 , ^k ^ 1 , nd ^ z a k ^ 1 are the readings of the gyroscope triad. As mentioned earlier these are the readings that correspond to transforming the state from time epoch k ^ 1 to time epoch k , this is the convention used in this explanation for the sensor readings just used for nomenclature purposes. Initialization [00198] Initialization procedures may be tailored to the specific application. First, the initialization of position and velocity will be discussed. In some applications, position may be initialized using a platform’s last known position before it started to move, this may be used in applications where the platform does not move when the navigation system is not running. For the systems where inertial sensors are integrated with absolute navigational information, such as for example GNSS, initial position may be provided by the absolute navigation system. In some cases, the starting point may be known a priori (pre-surveyed location) which can be used as an initial input. Velocity initialization may be made with zero input, if the platform is stationary. If moving, the velocity may be provided from an external navigation source such as for example, GNSS or odometer. [00199] For attitude initialization, when the device is stationary, accelerometers measure the components of reaction to gravity because of the pitch and roll angles (tilt from horizontal plane). The accelerometers measurement is given by: REFERENCE NO: TPI-079PCT US PATENT APPLICATION r ^ ^ ^ ^ ^ where g is the the X, Y, and Z directions are utilized, the pitch and the roll angles can be calculated as follows: ^ ^ ^ ^ ^ ^ [00200] In case the averaging can be used on the accelerometer readings to suppress the motion components, then the above formulas may be used with the averaged accelerometers data to initialize pitch and roll. The length of the time frame used for averaging may depend on the application or the mode where the navigation system works (such as for example walking or driving). 
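As a concrete illustration of the static attitude initialization just described, the following hedged Python sketch (not part of the original disclosure) averages accelerometer readings and computes pitch and roll from the components of the reaction to gravity; the exact sign conventions depend on the body-frame definition and are assumptions here.

```python
import numpy as np

def init_pitch_roll(accel_samples):
    """Initialize pitch and roll from averaged static accelerometer readings.
    Assumes the body frame used in this example (X transversal, Y forward, Z up),
    so a stationary accelerometer reads roughly g*[-cos(p)sin(r), sin(p), cos(p)cos(r)].
    Sign conventions are illustrative and should match the actual mechanization."""
    fx, fy, fz = np.mean(np.asarray(accel_samples), axis=0)  # averaging suppresses noise/motion
    pitch = np.arctan2(fy, np.sqrt(fx ** 2 + fz ** 2))
    roll = np.arctan2(-fx, fz)
    return pitch, roll
```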
[00201] In case the device is not stationary and either GNSS or the optional source of speed or velocity readings (such as for example odometer) is available and in case initial misalignment is resolved (such as estimating pitch misalignment with absolute velocity updates as described below) or there is no misalignment in a certain application, then the motion obtained from these sources may be decoupled from the accelerometer readings, so that the remaining quantity measured by these accelerometers are components of gravity, which enables the calculation of pitch and roll as follows: ^ ^ p ^ tan ^ 1^ f y ^ Acc ^ ^ 2 2 ^ ^ ^ ^ where the speed and navigational information are REFERENCE NO: TPI-079PCT US PATENT APPLICATION [00202] For the azimuth angle, one possible way of initializing it is to obtain it from the absolute navigational information. In the case velocities are available and the device starts to move, the azimuth angle can be calculated as follows: ^ E ^ ^ ^ This initial azimuth is the moving and it may be used together with the initial misalignment (such as estimating pitch misalignment with absolute velocity updates as described below) to get the initial device heading. [00203] If velocity is not available from the absolute navigation receiver, then position differences over time may be used to approximate velocity and consequently calculate azimuth. In some applications, azimuth may be initialized using a platform’s last known azimuth before it started to move, this may be used in applications where the platform does not move when the navigation system is not running. Attitude Equations [00204] One suitable technique for calculating the attitude angles is to use quaternions through the following equations. The relation between the vector of quaternion parameters and the rotation matrix from body frame to local-level frame is as follows: 0.25^^ ℓ,^^^^(3, ℓ,^^^^ ^ ^ ^ ^,^^^ 2) ^^,^^^ (2,3)^/^^^^ ^^^ 0.2 ℓ,^^^^ ℓ,^^^^ ^ ^ 5^ The as well as the The latter two are monitored by the gyroscope and form a part of the readings, so they have to be removed in order to get the actual turn. These angular rates are assumed to be constant in the interval between time steps k-1 and k and they are integrated over time to give the angular increments corresponding to the rotation vector from the local-level frame to the body frame depicted in the body frame, as follows: REFERENCE NO: TPI-079PCT US PATENT APPLICATION t , where ^ e is the Earth rotation rate, R M is the Meridian radius of curvature of the Earth’s reference ellipsoid and is given by a ^ 1 ^ e 2 ^ 3 ,R N is the normal radius ellipsoid and is given by Mech a R N , k ^ 1 ^ 1 , ^ t is the sampling time, a radius) of a Meridian of the Earth’s ellipsoid a = 6,378,137.0 m , e is the eccentricity a 2 ^ 2 e ^ b ^ f (2 ^ f ) 2 ^ 0.08181919 , b is the semiminor axis of a Meridian ellipse = a (1- f ) = 6356752.3142 m , and f is the flatness f = a- b = 0.00335281 a . The quaternion parameters with time as follows: ^ ^ ^ ^ 0 ^ ^^ ^ ^ ^ ^ ^^^ ^ ^ ^^^ ^ ^ ^ ^ 1 ^^ 0 ^^ ^ ^ ^ ^ = ^ = ^^^ ^ ^^^ ^ ^ ^ + ^ The definition Due to computational errors, the above be violated. 
To compensate for this, the following special If the following error exists after the computation of the quaternion parameters REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^1 ^ 1 2 2 ^ ^q k ^ ^ ^ ^ then the vector of quaternion parameters should be updated as follows: ^ ^ ^ [00205] The new rotation frame to local-level frame is computed as follows: ^ R ^ , Mech ^ 1,1 ^ R ^ , Mech ^ 1, 2 ^ R ^ , Mech ^ 1,3 ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ The attitude angles are obtained from this rotation matrix as follows: ^ ^ , M ^ ^ R ech ^ 3, 2 ^ ^ ^ ^ ^ angles calculation employs a skew symmetric matrix of the angle increments corresponding to the rotation vector from the local-level frame to the body frame depicted in the body frame. The skew symmetric matrix may be calculated as follows: REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^ x ^ ^ ^ l T matrix is calculated as follows: , Mech ^ ^ l , Mech ^ b , Mech ^ ^ ^Mech ^ 2 ^ ^ ^2 ^ ^ ^ ^ ^ ^ ^ ^ 2 exponential of a matrix implemented numerically or calculated as follows: ^ sin ^ Mech ^ 2 ^ 1 ^ cos ^ Mech ^ ^ ^ ^ , as mentioned above. Position and Velocity Equations [00207] Next, position and velocity may be calculated according to the following discussion. The velocity is calculated as follows a model, such as for example: REFERENCE NO: TPI-079PCT US PATENT APPLICATION 2 ^ where, the coefficients a 1 through a 6 for Geographic Reference System (GRS) 1980 are defined as: a = 9.78032 2 1 67714 m / s ; a 2 = 0.0052790414; a 3 = 0.0000232718; a =^ 0.0000030876910891/ s 2 ; a 2 4 5= 0.0000000043977311/ s ; a = 0.000000000000 2 6 7211/ ms . For position, one suitable calculation for the latitude may be as follows: d ^ Mech v N , Mech ^ t Similarly, Mech E , Mech ^ Mech ^ ^ Mech ^d ^ ^ ^ ^ Mech ^ v k ^1 ^ t One Mech hMech ^ h Mech ^ dh ^ t ^ h Mech Up , Mech ^ ^ v ^ ^ t [00208] equations for attitude, velocity and position may be implemented differently, such as for example, using a better numerical integration technique for position. Furthermore, coning and sculling may be used to provide more precise mechanization output. System Model [00209] As noted above, the system model is the state transition model and since this is an error state system model, this system model transitions from the error state of the previous iteration k ^ 1 to the error state of the current iteration k . To describe the system model utilized in the present navigation module, which is the nonlinear error- state system model, the error state vector has to be described first. The error state st of the errors in the navigation states, the errors in the rotation matrix R ^ consi b that transforms from the device body frame to the local-level frame, the errors in the sensors readings (i.e. the errors in the control input). The errors in the navigation states are^^^ , ^^ , ^h , ^ v E , ^ v N , ^ v U , ^ p , ^ r , ^ T ^ k k k k k k k k ^ A k ^ , which are the errors in latitude, REFERENCE NO: TPI-079PCT US PATENT APPLICATION longitude, altitude, velocity along East, North, and Up directions, pitch, roll, and y. The errors in R ^ azimuth, respectivel b are the errors in the nine elements of this 3^ 3 matrix, the ^ 3 matrix of the errors will be called ^R ^ 3 b . The errors associated with the different control inputs (the sensors’ errors): ^^^f x y z x y z k k k ^ T k ^ f k ^ f ^^ ^^ ^^ ^ ^f x y z wher k ^f k n ^f k are the stochastic errors in accelerometers readings, and ^^ x k , k , are stochastic errors in gyroscopes readings. 
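To make the error-state arrangement described above concrete, the following is a minimal, hypothetical Python sketch (not from the original disclosure) of one closed-loop iteration: the mechanization propagates the nominal navigation states from the inertial readings, the error-state system model predicts the errors, and the mechanization output is corrected by the estimated errors. The helper callables `mechanize` and `predict_error`, the simple vector subtraction and the state ordering are illustrative assumptions; in practice the attitude errors may be applied through the rotation-matrix error described below.

```python
import numpy as np

def error_state_iteration(x_prev_corrected, imu_sample, mechanize, predict_error, dt):
    """One closed-loop error-state iteration: mechanize, predict errors, correct.
    State order follows this example: [lat, lon, h, vE, vN, vU, pitch, roll, azimuth]."""
    x_mech = np.asarray(mechanize(x_prev_corrected, imu_sample, dt))   # nominal states
    dx = np.asarray(predict_error(x_prev_corrected, imu_sample, dt))   # predicted error states
    x_corrected = x_mech - dx                                          # corrected navigation states
    return x_corrected, dx
```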
Modeling Sensors' Errors [00210] A system model for the sensors' errors may be used. For example, the traditional model for these sensors' errors in the literature is the first-order Gauss-Markov model, which can be used here, but other models can be used as well. For example, a higher order Auto-Regressive (AR) model to model the drift in each one of the inertial sensors may be used and is demonstrated here. The general equation of the AR model of order $p$ is of the form
$$y_k = \sum_{i=1}^{p} \alpha_i \, y_{k-i} + \beta \, \omega_k,$$
where $\omega_k$ is the white noise which is the input to the AR model, $y_k$ is the output of the AR model, and the $\alpha$'s and $\beta$ are the parameters of the model. It should be noted that such a higher order AR model is difficult to use with KF, despite the fact that it is a linear model. This is because for each inertial sensor error to be modeled the state vector has to be augmented with a number of elements equal to the order of the AR model (which is 120 here). Consequently, the covariance matrix and other matrices used by the KF will increase drastically in size (an increase of 120 in rows and an increase of 120 in columns for each inertial sensor), which makes this difficult to realize. [00211] In general, if the stochastic gyroscope drift is modeled by any model, such as for example Gauss-Markov (GM) or AR, in the system model, the state vector has to be augmented accordingly. The normal way of doing this augmentation will lead to, for example in the case of AR with order 120, the addition of 120 states to the state vector. Since this will introduce a lot of computational overhead and will require an increase in the number of used particles, another approach is used in this work. The flexibility of the models used by PF was exploited together with an approximation that experimentally proved to work well. The state vector in PF is augmented by only one state for the gyroscope drift. So at the k-th iteration, all the values of the gyroscope drift state in the particle population of iteration k-1 will be propagated as usual, but for the other previous drift values from k-120 to k-2, only the mean of the estimated drift will be used and propagated. This implementation makes the use of higher order models possible without adding a lot of computational overhead. The experiments with Mixture PF demonstrated that this approximation is valid. [00212] If 120 states were added to the state vector, i.e. if all the previous gyroscope drift states in all the particles of the population of iterations k-120 to k-1 were to be used in the k-th iteration, then the computational overhead would have been very high. Furthermore, when the state vector is large the PF computational load is badly affected because a larger number of particles may be needed. But this is not the case in this implementation because of the approximation discussed above (a sketch illustrating this approximation is given after the next paragraph). Modeling Errors With Rotation Matrix [00213] The errors in the rotation matrix that transforms from the device body frame to the local-level frame may be modeled according to the following discussion. As mentioned earlier, $R_b^l$ is the rotation matrix that transforms from the device body frame to the local-level frame. The following steps get the error states of $R_b^l$ from all the error states of the previous iteration; therefore, this part of the system model gets the full error in $R_b^l$ and not an approximation or a linearization.
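Returning to the AR drift model of paragraphs [00210] to [00212], the following hedged Python sketch (not from the original disclosure) illustrates the described approximation: each particle carries only the most recent drift state, while the older lags use the mean of the estimated drift. The function name, argument layout and random-number handling are assumptions.

```python
import numpy as np

def propagate_gyro_drift(drift_particles, drift_history_mean, alphas, beta, rng):
    """Propagate the stochastic gyroscope drift with an AR(p) model inside a PF.
    drift_particles:    most recent drift value per particle (lag k-1)
    drift_history_mean: mean estimated drift for lags k-2 .. k-p (length p-1)
    alphas, beta:       AR model parameters; rng: numpy random Generator."""
    p = len(alphas)
    # Older lags contribute through the mean estimated drift only (the approximation).
    older_contribution = np.dot(alphas[1:], drift_history_mean[:p - 1])
    noise = beta * rng.standard_normal(drift_particles.shape)
    return alphas[0] * drift_particles + older_contribution + noise
```

After each iteration, the mean of the newly propagated drift particles would be pushed into the front of `drift_history_mean`, so the state vector itself stays augmented by a single drift state per sensor.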
[00214] First the errors in the Meridian and normal radii of curvature of the Earth’s ellipsoid are calculated from the mechanization-derived version of these radii and the corrected version of these radii as follows: REFERENCE NO: TPI-079PCT US PATENT APPLICATION 2 2 frame to the local-level frame depicted in the local-level frame from the mechanization is given as follows: ^ N , Me ^ ^ ^ v ch k ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ depicted in the body frame of the device from the mechanization is given as follows: ^b , Mech ^R b , Mech l , Mech l , Mech T l , Mech il , k l , k ^ il , k ^ ^ R b , k ^ ^ il , k inertial frame to the ECEF frame depicted in REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^ ^ ^ ^ ^ ^ ^ ^ The error in the rotation vector from the Earth-fixed Earth-centered (ECEF) frame to the local-level frame depicted in the local-level frame is calculated as follows: ^ ^ N N ^ ^ N ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ error level frame depicted in the local-level frame is calculated as follows: ^^ l ^ ^^ l ^ ^^ l il , k ^ ie , k ^ el , k ^ that cause the system to transition from the state at time epoch k ^ 1 to time epoch k will be noted as follows: ^ ^ x k ^ 1 ^ ^ b , Mech ^ y ^ k ^ ^ ^ k ^ 1 ^ ^ ^ error in the gyroscope readings is noted as follows: ^ ^^ x ^ ^^ ^ k b y ^ ib , k ^ ^ ^^ k ^ ^ ^ timate of the corrected R ^ es b matrix, from the previous iteration estimate of the error in this matrix can be obtained as follows: REFERENCE NO: TPI-079PCT US PATENT APPLICATION l l l ^ 1 The corrected rotation vector from the inertial frame to the local-level frame depicted in the local-level frame is calculated as follows: ^ l , Correct ^ ^ l , Mech ^ ^^ l , k The corrected rotation vector from the inertial frame to the local-level frame depicted in the body frame is calculated as follows: ^b , Correct ^R b , Correct ^ l , Correct l , Correct T l , Correct il k l k ^ il k ^ ^ R k ^ ^ ^ il k The error in the rotation vector from the inertial frame to the local-level frame depicted in the body frame is calculated as follows: ^^ b ^ ^ b , Mech ^ ^ b , Correct il , k il , k il , k corresponding to the error in the rotation vector from the local-level frame to the body frame depicted in the body frame is calculated as follows: ^^b b , k ^ ^^ b b l ^ ib , k ^ ^^ il , k ^ ^ t to the rotation vector from the local-level frame to the body frame depicted in the body frame from the mechanization is calculated as follows: ^ b , Mech ^ b , Mech lb , k ^ lb , k ^ t increment corresponding to the corrected rotation vector from the local-level frame to the body frame depicted in the body frame is calculated as follows: ^ b , Correct ^ b , Mech b lb , k ^ lb , k ^ ^^ lb , k REFERENCE NO: TPI-079PCT US PATENT APPLICATION The skew symmetric matrix of the corrected angle increment corresponding to the corrected rotation vector from the local-level frame to the body frame depicted in the body frame is calculated as follows: ^ ^ ^ b , Correct ^ ^ ^ b , Correct ^ ^ ^ ^ ^ ^ ^ The error in the R ^ b matrix is calculated as follows: ^R l ^ R l , Mech ^ ^ R l , Mech ^ ^ R l exp S b , Correct b , k b , k b , k ^ 1 b , k ^ 1 ^ ^ lb , k ^ ^ ^ ^ ^ [00215] Further, the errors in attitude may be obtained as discussed below. 
First, ^ the corrected R b matrix is calculated as follows: R l , Correct ^ R l , Mech ^^ R l b , k b , k b , k Thus, the corrected attitude ^ pCorrect ^1 ^ R ^ ,, Correct b , k ^ 3, 2 ^ ^ k ^ tan ^ 2 ^ 2 ^ ^ ^ The attitude REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^ Velocity Errors [00216] In order to model the errors in velocity, the discrete version of the derivative of the velocity from mechanization may be computed as follows: ^ ^,^^^^ ^,^^^^ ^ ^^^^ ^ It is to be may be used in all the discrete versions of derivatives in this patent like the above one, and they can be accommodated by the Particle filter. The discrete version of the derivative of the corrected velocity can be calculated as follows: ^ ^,^^^^^^^^^ ^^^^^^ ^ ^ ^ ^ ^ ^^ ^^^ ^ ,^^^^ ^ ^^ ^ ^^^ ^^ ^,^^^^ ^^ ^ ^ ^ ^ ^ ^^ ^ ^ ^^^^ ^ ^ ^^ ^ tem ere ^ p wh k is the temp ^ l , Mech ^ l l , Mech ie, k ^^ el , k ^ ^ ^ ^ il , k ^ ^^ il , k . of position, then ^ g can be calculated as follows: ^ g Mech 4 Mech 2 Mech M k ^ a 1^ 1 ^ a 2 sin ^ ^ k ^ ^ a 3 sin ^ ^ k ^ ^ ^^ a 4 ^ a 5 sin ^ ^ k ^ ^ h k ^ a 6 ^ h ech k ^ ^ ^ 1 ^ 2 ^ ^ Mech ^ ^^ 4 Mec ^ ^ ^ ^ ^ h ^ ^^ ^ ^ ^ REFERENCE NO: TPI-079PCT US PATENT APPLICATION 2 1 The error in the discrete version of the derivative of the velocity can be calculated as follows: ^^_^^^^^ ^ = ^_^^^^^ ^,^^^^ ^ ^,^^^^^^^^^ ^ ^ _^^^^^^ Thus, the error in the velocity can be calculated as follows: ^^ ^ ^ Position Errors [00217] Finally, the errors in position may be modeled using the following equations: ^h ^ ^ h ^ ^ U k k ^ 1 v k ^ t ^ ^ v N , Mech N , Mech N ^ ^ k v k ^ ^ v k k ^ ^^ k ^ 1 ^^ ^ ^ ^ ^ t ^ t 3.2.2 Nonlinear Total-State Model Embodiments [00218] In the present example, a three-dimensional (3D) navigation solution is provided by calculating 3D position, velocity and attitude of a moving platform. The relative navigational information includes motion sensor data obtained from MEMS- based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, such as sensor assembly 106 of device 100 in FIG.1. Likewise, host processor 102 may implement integration module 114 to integrate the information using a nonlinear state estimation technique, such as for example, Mixture PF having the system model defined herein below. Navigation Solution [00219] The state of device 100 whether tethered or non-tethered to the moving REFERENCE NO: TPI-079PCT US PATENT APPLICATION platform is ^^ = [^^, ^ ^ ^ ^ ^, ^, ^^, ^^ , ^^ , ^^, ^^, ^^]^ , ^ k is the latitude of the vehicle, ^ k is the longitude,h k is the altitude,v E k is the velocity along East direction, v N U k is the velocity along North direction, vk is the velocity along Up vertical direction, p k is the pitch angle, r k is the roll angle, and A k is the azimuth angle. Since this is a total-state approach, the system model is the motion model itself, which is a nonlinear model as mentioned earlier, the output of this model is the navigation states of the device, such as position, velocity, and attitude. The state estimation or filtering technique estimates directly the navigation states themselves, so the estimated state vector by this state estimation or filtering technique is for the total states or the navigation states, and the system model is a total-state system model which transition the previous total-state to the current total-state. 
The traditional and commonly used navigation solutions uses a linearized error-state system model (to be used with KF- based solutions), but the work in this example uses a nonlinear total-state model to avoid the linearization and approximation. [00220] The nonlinear total-state system model (also called state transition model) is given by ^^ = ^(^^^^, ^^^^, ^^^^)where ^^^^is the control input which is the inertial sensors readings that correspond to transforming the state from time epochk ^ 1 to time epoch k , this will be the convention used in this explanation for the sensor readings just used for nomenclature purposes. Furthermore, ^^ is the process noise which is independent of the past and present states and accounts for the uncertainty in the platform motion and the control inputs. The measurement model is ^^ = (^^, ^^)where ^^ is the measurement noise which is independent of the past and current states and the process noise and accounts for uncertainty in the perception sensor data. [00221] A set of common reference frames is used in this example for demonstration purposes, other definitions of reference frames may be used. The body frame of the vehicle has the X-axis along the transversal direction, Y-axis along the forward longitudinal direction, and Z-axis along the vertical direction of the vehicle completing a right-handed system. The local-level frame is the ENU frame that has axes along East, North, and vertical (Up) directions. The inertial frame is Earth-centered REFERENCE NO: TPI-079PCT US PATENT APPLICATION inertial frame (ECI) centered at the center of mass of the Earth and whose the Z-axis is the axis of rotation of the Earth. The Earth-centered Earth-fixed (ECEF) frame has the same origin and z-axis as the ECI frame but it rotates with the Earth (hence the name Earth-fixed). 3.2.2.1 The System Model [00222] The system model is the state transition model and since this is a total state system model, this system model transitions from the total state of the previous iteration k ^ 1 to the total state of the current iteration k . Before describing the system model utilized in the present example, the control inputs are first introduced. The measurement provided by the IMU is the control input; ^ ^ ^ ^^^^^^^ ^ ^^ ^^^^ ^^ ^ ^^ ^^ ^ ^^ ^^^^ ^^ ^ ^^ ^ readings of the accelerometer triad, , , gyroscope triad. As mentioned earlier these are the sensors’ readings that correspond to transforming the state from time epoch k ^ 1 to time epoch k , this is the convention used in this explanation for the sensor readings just used for nomenclature purposes. [00224] To describe the system model utilized in the present example, which is the nonlinear total-state system model, the total state vector has to be described first. The state consist of the navigation states themselves, and the errors in the sensors readings (i.e. the errors in the control input). The navigation states are ^^ , ^ ,h , v E N U ^ T ^ k k k k , v k , v k , p k , r k , A k ^ , which are the latitude, longitude, altitude, velocity along East, North, and Up directions, pitch, roll, and azimuth, respectively. 
The errors associated with the different control inputs (the sensors’ errors): ^^^f x y z x y z k ^ f k ^ f k ^^ k ^^ k ^^ k ^ T ^ where ^f x k , ^f y k , and ^f z k are the stochastic x ^^ ^^ z stochastic errors in body frame to the local-level frame at time k ^ 1 REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^ ^ ^ 1 ^ ^ k ^ 1 ^ ^ ^ Modeling Sensors’ Errors [00225] A system model for the sensors’ errors may be used. For example, the traditional model for these sensors errors in the literature is the first order Gauss Markov model, which can be used here, but other models can be used as well. For example, a higher order Auto-Regressive (AR) model to model the drift in each one of the inertial sensors may be used and is demonstrated here. The general equation of the AR model of order p is in the form ^ ^^ = ^ ^^^^^^ + ^^^^ ^^ ^ where ^^ is white noise which is the input to the AR model, ^^ is the output of the AR model, the ^ ’s and ^ are the parameters of the model. It should be noted that such a higher order AR model is difficult to use with KF, despite the fact that it is a linear model. This is because for each inertial sensor error to be modeled the state vector has to be augmented with a number of elements equal to the order of the AR model (which is 120). Consequently, the covariance matrix, and other matrices used by the KF will increase drastically in size (an increase of 120 in rows and an increase of 120 in columns for each inertial sensor), which make this difficult to realize. [00226] In general, if the stochastic gyroscope drift is modeled by any model such as for example Gauss Markov (GM), or AR, in the system model, the state vector has to be augmented accordingly. The normal way of doing this augmentation will lead to, for example in the case of AR with order 120, the addition of 120 states to the state vector. Since this will introduce a lot of computational overhead and will require an increase in the number of used particles, another approach is used in this work. The flexibility of the models used by PF was exploited together with an approximation that experimentally proved to work well. The state vector in PF is augmented by only one state for the gyroscope drift. So at the k-th iteration, all the values of the gyroscope drift state in the particle population of iteration k-1 will be propagated as usual, but for the other previous drift values from k-120 to k-2, only the mean of the estimated drift will REFERENCE NO: TPI-079PCT US PATENT APPLICATION be used and propagated. This implementation makes the use of higher order models possible without adding a lot of computational overhead. The experiments with Mixture PF demonstrated that this approximation is valid. [00227] If 120 states were added to the state vector, i.e. all the previous gyroscope drift states in all the particles of the population of iteration k-120 to k-1 were to be used in the k-th iteration, then the computational overhead would have been very high. Furthermore, when the state vector is large PF computational load is badly affected because a larger number of particles may be used. But this is not the case in this implementation because of the approximation discussed above. Attitude Equations [00228] One suitable technique for calculating the attitude angles is to use quaternions through the following equations. 
The relation between the vector of quaternion parameters and the rotation matrix from body frame to local-level frame is as follows: ^ 0.25 R ^ (3,2)^ R ^ (2,3 4 ^ ^ q1 ^ b , k ^ 1 b , k ^ 1 ) ^ / q k ^ 1 k ^ 1 ^ ^ ^ ^ ^ ^ ^ ^ ^ the stochastic errors as well as the Earth rotation rate and the change in orientation of the local-level frame. The latter two are monitored by the gyroscope and form a part of the readings, so they have to be removed in order to get the actual turn. These angular rates are assumed to be constant in the interval between time steps k-1 and k and they are integrated over time to give REFERENCE NO: TPI-079PCT US PATENT APPLICATION t , where ^ e is the Earth rotation R M is the Meridian radius of curvature of the a ^ 1 ^ e 2 ^ 3 Earth’s reference ellipsoid and is given k ^ 1 ^ , R N is the normal radius of curvature of the Earth’s reference ellipsoid and is given by a R N , k ^ 1 ^ 1 t is the sampling time, a is the of the Earth’s ellipsoid a 2 ^ b 2 e ^ ^ f (2 ^ f ) ^ , b is b = a (1- f ) = 6356752.3142 m f = a- b = 0.00335281 f a . q1 2 ^ 2 2 3 2 4 2 ^ k ^ ^ q k ^ ^ ^ q k ^ ^ ^ q k ^ ^ 1 . However, due to computational errors, the above for this, the following special normalization following error exists after the computation of the ^ ^1 ^ q1 2 ^ 2 2 3 2 4 2 ^ ^ k ^ ^ q k ^ ^ ^ q k ^ ^ ^ q k ^ quaternion parameters ^ then the vector of REFERENCE NO: TPI-079PCT US PATENT APPLICATION q q k k ^ quaternion parameters should be updated with 1 ^^ . [00229] The new rotation matrix from body frame to local-level frame is computed as follows: ^ R ^ 1,1 ^ R ^ 1,2 ^ R ^ 1,3 ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ The attitude angles are obtained from this rotation matrix as follows: ^ ^ ^ ^ ^ R ^ 3, 2 ^ ^ ^ ^ ^ for calculating attitude angles uses a skew symmetric matrix. The skew symmetric matrix of the angle increments corresponding to the rotation vector from the local-level frame to the body frame depicted in the body frame is calculated as follows: ^ 0 ^ ^ z ^ ^ ^ y S b ^ ^ lb , k ^ ^ z 0 ^ ^ x ^ ^ ^ as follows: REFERENCE NO: TPI-079PCT US PATENT APPLICATION be a or may as ^ ^ ^ ^ follows: ^ Th attitude angles are then obtained from R l b , matrix as mentioned above. Position and Velocity Equations [00231] Next, position and velocity may be determined according to the following discussion. The velocity may be calculated as follows: ^ v E k ^ ^ v E k ^ 1 ^ ^ f x k ^ 1 ^ ^ 0 ^ ^ N ^ ^ ^ N ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ t ^ ^ may a such example: g ^ a 1 ^ a sin2^ ^ a sin 4 ^ ^ a ^ a sin 2 ^ h ^ 2 k 1 ^ 2 k 3 k ^ ^ 4 5 k ^ k a 6 ^ h k ^ where, the coefficients a 1 through a 6 for Geographic Reference System (GRS) 1980 are defined as: a = 9.7803267714 m / s 2 ; a 2 = 0.0052790414; a 3 = 0.0000232718; a =^ 0.000 2 2 4 0030876910891/ s ; a 5= 0.0000000043977311/ s ; a = 0.000000000000721 2 6 1/ ms . Next, one suitable calculation for the latitude may be as follows: REFERENCE NO: TPI-079PCT US PATENT APPLICATION t Similarly, one suitable calculation for the longitude may be expressed as: ^ E ^ ^ t One suitable calculation for the altitude may be given by: ^ ^ dh ^ ^ ^ Up ^ t [00232] Again, it should be recognized that the system model equations for attitude, velocity and position may be implemented differently, such as for example, using a better numerical integration technique for position. Furthermore, coning and sculling may be used to provide more precise navigation states output. 
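The latitude, longitude and altitude updates described above can be sketched as follows in Python (an illustration, not the original implementation), using the Meridian and normal radii of curvature and the ellipsoid constants listed in the text; the simple Euler integration and the function name are assumptions, and the text notes that better numerical integration techniques may be used.

```python
import numpy as np

A_SEMI = 6378137.0   # semimajor axis of the reference ellipsoid (m), from the text
ECC = 0.08181919     # first eccentricity, from the text

def update_position(lat, lon, h, vE, vN, vU, dt):
    """Curvilinear position update: latitude from the North velocity, longitude from
    the East velocity, altitude from the Up velocity (angles in radians)."""
    s2 = np.sin(lat) ** 2
    R_M = A_SEMI * (1.0 - ECC ** 2) / (1.0 - ECC ** 2 * s2) ** 1.5   # Meridian radius
    R_N = A_SEMI / np.sqrt(1.0 - ECC ** 2 * s2)                      # normal radius
    lat_new = lat + vN * dt / (R_M + h)
    lon_new = lon + vE * dt / ((R_N + h) * np.cos(lat))
    h_new = h + vU * dt
    return lat_new, lon_new, h_new
```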
3.2.3 System Model With Another Integration Filter Embodiments [00233] As noted above, other embodiments use a system model that utilizes the solution from another state estimation technique (another filter) that integrates INS and GNSS systems to feed the system model for the state estimation technique at hand. For example, a Kalman Filter-based navigation solution (among other state estimation techniques) can be used as an input to the system model for the current state estimation technique. The solution may integrate inertial sensor data with GNSS updates using a Kalman Filter solution (an example among other state estimation techniques). Other sources of absolute updates that may be integrated into the solution include an odometer for speed updates, a magnetometer for heading updates and a barometer for altitude updates. One suitable exemplary architecture is schematically depicted in FIG. 21, which shows the basic block diagram of the Kalman Filter-based positioning solution that estimates the inertial sensors' errors using the absolute information (obtained from various sensors) and subtracts them from the INS solution to obtain the final corrected solution. [00234] The solution from the Kalman filter-based system may include system states and an estimation of the uncertainty of the solution. The following equations depict an example of a system model for the current state estimation technique based on the other integrated solution using another state estimation technique (in this example the Kalman Filter-based solution):
$$\varphi_k = \varphi_{k-1} + (\varphi_{k,sol} - \varphi_{k-1,sol}) + \varphi_{noise}$$
$$\lambda_k = \lambda_{k-1} + (\lambda_{k,sol} - \lambda_{k-1,sol}) + \lambda_{noise}$$
$$h_k = h_{k-1} + (h_{k,sol} - h_{k-1,sol}) + h_{noise}$$
where $\varphi_k$, $\varphi_{k-1}$, $\lambda_k$, $\lambda_{k-1}$, $h_k$, $h_{k-1}$ are the current and the previous latitude, longitude and altitude of the state. Moreover, $\varphi_{k,sol}$, $\varphi_{k-1,sol}$, $\lambda_{k,sol}$, $\lambda_{k-1,sol}$, $h_{k,sol}$, $h_{k-1,sol}$ are the current and previous latitude, longitude and altitude from the other integrated solution. Finally, $\varphi_{noise}$, $\lambda_{noise}$, $h_{noise}$ are random variables representing process noise added with the following distribution:
$$\varphi_{noise} \sim \mathcal{N}\!\left(0,\; \sigma_{\varphi,k,sol}^2 + \sigma_{\varphi,k-1,sol}^2\right),$$
and similarly for the longitude and altitude noise terms. In this example, the standard deviations $\sigma_{k,sol}$ and $\sigma_{k-1,sol}$ represent the uncertainty in the estimated states from the Kalman Filter-based solution. The noise standard deviation is calculated based on the standard deviations of the current and previous solution. 3.3 State Estimation [00235] Next, details regarding the design of a state estimator to integrate the perception sensor data and the motion sensor data are discussed. As noted above, four exemplary perception measurement models include a range-based model, a NOL model, a perception map matching model and a closed-form model. Each may be integrated with the system models described immediately above. For the purposes of this disclosure, the integration of each perception measurement model is in the context of a Particle Filter (PF) state estimator. A PF estimator may be used when the system and/or the measurement model are nonlinear, as opposed to other state estimation filters, such as Kalman Filters (KF), that require linearization. Moreover, PF estimators are better suited when the noise affecting the measurement model or the system model is non-Gaussian. In other words, a PF can be used to represent multimodal error distributions. Moreover, PF estimators provide a multi-hypothesis approach, whereas a KF propagates a single hypothesis. [00236] However, other nonlinear state estimation techniques are within the scope of this disclosure. For example, another filtering approach that can be used is the Mixture PF.
Some aspects of the basic PF, called the Sampling/Importance Resampling (SIR) PF, are first discussed. In the prediction phase, the SIR PF samples from the system model, which does not depend on the last observation. In MEMS-based INS/Perception Sensor integration, the fact that the sampling is based on the system model, which depends on inertial sensor readings as control inputs, makes the SIR PF suffer from poor performance, because with more drift this sampling operation will not produce enough samples in regions where the true probability density function (PDF) of the state is large, especially in the case of MEMS-based sensors. Because of this limitation of the SIR PF, it has to use a very large number of samples to assure good coverage of the state space, thus making it computationally expensive. Mixture PF is one of the variants of PF that aim to overcome this limitation of SIR and to use a much lower number of samples while not sacrificing the performance. The much lower number of samples makes Mixture PF applicable in real time. [00237] As described above, in the SIR PF the samples are predicted from the system model, and then the most recent observation is used to adjust the importance weights of this prediction. The Mixture PF adds to the samples predicted from the system model some samples predicted from the most recent observation. The importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs. [00238] During the sampling phase of the Mixture PF used in the present embodiments, some samples predicted according to the most recent observation are added to those samples predicted according to the system model. The most recent observation is used to adjust the importance weights of the samples predicted according to the system model. The importance weights of the additional samples predicted according to the most recent observation are adjusted according to the probability that they were generated from the samples of the last iteration and the system model with the latest control inputs. When no perception objects are detected, only samples based on the system model are used, but when objects are detected, both types of samples are used to give better performance, particularly following regions of no perception detections. Also, adding the samples from the observation leads to faster recovery to the true position after perception outages (durations where no objects are detected by the perception sensor). [00239] It is worth noting that a KF can also be used given that the sensor model and the system model are linear. If either the sensor or the system model is not linear, different forms of KF can be used, like the Extended Kalman Filter (EKF), to linearize the models prior to running the filter. 3.3.1 Measurement Model: Range-Based Embodiments [00240] This discussion involves using a range-based perception model to estimate the probability density function $p(z_t^k \mid x_t, m)$ as detailed above. In other words, given knowledge of the map and an estimate of the state of the vehicle, the function represents the probability distribution of the measured range. The map is used along with ray-casting algorithms to estimate the true range to an object (if detected by the perception sensor) given a state of the platform. Here, $k$ refers to the $k^{th}$ range at a certain (azimuth/elevation) angle.
Assuming measurements of a single scan are independent, deriving ^(^ |^ , ^) ^ ^ ^ from ^^^^ ^^^ , ^^ is reinstated as: ^(^ | ^ , ^) = ∏ ^ ^^ ^ ^ ^ ^ ^^ ^ ^^^ ^ ^^ , ^^ , . Measurement model which incorporates different factors such as environmental, perception sensor design and dynamic factors (also discussed above), can be used to tune model parameters µ^ and ^^ ^ for each measurement ^ ^ ^ . The probability of a sample of measurements denoted by ^(^^ |^^ , ^) can be directly used to weight the importance of a particle with known-state in a map. In this embodiment, the range based measurement model is integrated with the MEMS- based Total-state system model. In the context of the basic PF or the Sampling/Importance Resampling (SIR) filter, the PF is initialized by generating ^ REFERENCE NO: TPI-079PCT US PATENT APPLICATION particles using a random distribution (could be within a certain confined distance from the initial state). In the prediction stage, the system model is used to propagate the state of the ^ particles based on the inputs from the inertial sensors and the proposed system model. The state of each particle is represented by the vector x k ^^ ^^ E N U T k , ^ k ,h k , v k , v k , v k , p k , r k , A k ^ ^ , where ^ k is the latitude of the vehicle, ^ k is e, is the altitude,v E is the velocity along East direction, v N the longitud h k k k is the elocity along North direction, v U v k is the velocity along Up vertical direction, p k is the pitch angle, r k is the roll angle, and A k is the azimuth angle. [00242] The perception sensor data are then used by the measurement model to compute ^(^^ |^^ , ^) for each of the ^ particles. An importance weight is associated with each particle, proportional to how well the output of the ray-casting algorithm aligns with the measured ranges given the particle state. Then, a resampling step is necessary to randomly draw ^ new particles from the old ^ particles with replacement in proportion to the normalized importance weight of each old particle (usually these weights are normalized by dividing the weight of each particle by the sum of all weights). Hence, particles with low importance weight will have a high probability of not propagating to the next state. In other words, surviving particles usually cluster around areas with higher posterior probability. 3.3.2 Measurement Model: Nearest-Object Likelihood Embodiments [00243] This example relates to integrating the state model with a Nearest-Object Likelihood model that does not need the ray-casting operation to compute ^(^^ |^^ , ^). The first step is to filter out all measurements that are reflected from moving objects (since the map might only contain static objects). Then, the measurements to static targets are projected onto the map in the global frame based on the absolute position of the perception sensor. For example, the projected perception sensor position due to the ^^^ measurement is given by: ^ ^ ^^^,^ ^^^^,^ ^^ ^ cos (^^ ^ ^^,^ )cos (^^ ^ ^^,^ ) ^ ^ ^ ^ ^^^,^ ^ = ^ ^^^^,^ ^ + ^ ^ ^ ^ ^^ cos (^^ ^ ^^,^ )sin (^^ ^ ^^,^ ) ^ , ^ ^ ^^^^,^ ^ ^ REFERENCE NO: TPI-079PCT US PATENT APPLICATION where ^^^^^,^, ^^^^,^, ^^^^,^^ is the 3D perception sensor position in the global frame The next step is to search for the nearest object in the map to the projected position denoted by ^^ ^ , ^ ^ , ^ ^ ,^ ^. 
Here, it is as ^ ^^^,^ ^^^,^ ^^^ sumed that the ^^^ ^^ is equal to a Gaussian distribution with zero mean and variance ^^ ^ ^^^ error in Euclidian distance between the projected position and the nearest object in the map. Hence, the probability of a set of measurements given the state of the platform and a map can be represented by ^(^^ |^^ , ^) ^ ∏ ^ ^^ ^ ^^ ^(^ = ^^^^ ^) , where ^ ~ ^ (0, ^^ ^ ^^^ ) and ^^^^^ is the distance between the projected position of measurement ^ and the nearest object in the map. [00244] The probability of a sample of measurements denoted by ^(^^ |^^ , ^) can be directly used to weight the importance of a particle with known-state in a map in this integration of the NOL measurement model and the MEMS-based Total-state system model. In the context of the basic PF or the Sampling/Importance Re-sampling (SIR) filter, the PF is initialized by generating ^ particles using a random distribution (could be within a certain confined distance from the initial state). In the prediction stage, the system model is used to propagate the state of the ^ particles based on the inputs from the inertial sensors and the proposed system model. The state of each particle is represented by the vector ^ ^ ^ ^ ^ = [^^, ^^, ^, ^^, ^^ , ^^ , ^^, ^^, ^^] ^, where^ k is the latitude of the vehicle, ^ k is the long East direction, v N U a k is the velocity along North direction, vk is the velocity along Up vertical direction, p k is the pitch angle, r k is the roll angle, and A k is the azimuth angle. [00245] The perception sensor data is then used to project the platform’s state in the map and then the distance to the nearest object is used as an input to the measurement model to compute ^(^^ |^^ , ^) for each of the ^ particles. An importance weight is associated with each particle, proportional to the proximity of the projected state to the nearest object in the map. Then, a resampling step is necessary to randomly draw ^ new particles from the old ^ particles with replacement in proportion to the normalized importance weight of each old particle (usually these weights are normalized by dividing the weight of each particle by the sum of all weights). Hence, REFERENCE NO: TPI-079PCT US PATENT APPLICATION particles with low importance weight will have a high probability of not propagating to the next state. In other words, surviving particles usually cluster around areas with higher posterior probability. [00246] As noted above, the basic SIR PF filter has certain limitations because the samples are only predicted from the system model and then the most recent observation is used to adjust the importance weights of this prediction. The Mixture PF adds to the samples predicted from the system model additional samples predicted from the most recent observation. The importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs. In the context of NOL-based perception measurement model, a suitable method generates particles drawn from the measurement model. This may be done by searching for states, for which the object list detected by the perception sensor is closely aligned (i.e., a high probability of perception detection given the current state) with objects in the map. A measure of how aligned the object list given a specific state, is by computing the probability of most recent measurement denoted by ^(^^ |^^ , ^) using the NOL perception measurement model. 
If such states are found, it is possible to generate particles drawn from the measurement model rather than the system model. For this process to be efficient, search space (infinite number of states) within the map should be considered so that constraints can be effectively applied to limit the search space and consequently reduce computational complexity. After new particles are successfully drawn from the NOL-based measurement model, the importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs. 3.3.3 Measurement Model: Perception Map-Matching Embodiments [00247] In this example, a perception map matching measurement model is integrated with the system model. As noted above, the map matching model is based on applying matching algorithms between a local map (obtained by scene reconstruction) and the reference map (e.g., the online map built from perception sensor data or in a conventional approach, a preexisting global map) as means of measuring the likelihood of a sample given the knowledge of the state and the map. The local map is defined as a map created based on the perception sensor data, such as the scene reconstruction discussed above. The reference map can either be a feature-based or location-based REFERENCE NO: TPI-079PCT US PATENT APPLICATION map. Regardless of the type of map used, it can be converted to the appropriate format (e.g. OGM map) to be able to match it directly to the perception sensor data. [00248] Assuming a 3D map, the center of a grid cell in the reference and local map can be denoted by ^ ^^ ^,^,^ and ^^ ^^ ,^ ^ ,^ respectively. The linear correlation can be used to indicate the likelihood of the current local map matching the reference map, given the current state of the vehicle. Let us denote to the mean of the relevant section of the reference map by ^^^ and the mean of the local map by ^^^^. Moreover, we denote to the standard deviation of the reference and local map by ^^^^ and ^^^^^ respectively. Hence, the correlation coefficient between the local and reference map is represented by: ^(^^^^ |^^ , ^) = ^^^^^^^ ,^^^^ ∑^,^,^ ^^^ ^^^ ^,^,^ ^^ ^^^^ ^^ ^^ ^,^,^ ^^^^^ ^ ) [00249] - a positive correlation or no correlation at all is significant so it can be assumed all negative correlations equal 0, allowing the likelihood of a perception sample to be represented as ^(^^ |^^ , ^) ^ max ( ^^^^^^^ ,^^^^ , 0 ). [00250] The probability of a perception sample of measurements denoted by ^(^^ |^^ , ^) can be directly used to weight the importance of a particle with known- state in a map. Here, we discuss the integration of the Perception Map-Matching measurement model and the MEMS-based Total-state system model. In the context of the basic PF or the Sampling/Importance Re-sampling (SIR) filter, the PF is initialized by generating ^ particles using a random distribution (could be within a certain confined distance from the initial state). In the prediction stage, the system model is used to propagate the state of the ^ particles based on the inputs from the inertial sensors and the proposed system model. 
The state of each particle is represented by the vector ^^ = [^^, ^^, ^, ^ ^ ^ , ^ ^ ^ , ^ ^ ^ , ^^, ^^, ^^] ^, where ^ k is the latitude of the vehicle,^ E N k is the longitude,h k is the altitude, vk is the East direction, vk is the along North direction,v U k is the velocity along Up vertical direction, p k is the REFERENCE NO: TPI-079PCT US PATENT APPLICATION pitch angle, r k is the roll angle, and A k is the azimuth angle. [00251] The perception sensor data is then used to create the local map and then the local map is iteratively matched with reference map to obtain the correlation of the current sample given the state of the particle. The computed correlation is then used to infer ^(^^ |^^ , ^) for each of the ^ particles. An importance weight is associated with each particle, proportional to the correlation between the local map and the reference map given the state of the particle. Then, a resampling step is necessary to randomly draw ^ new particles from the old ^ particles with replacement in proportion to the normalized importance weight of each old particle (usually these weights are normalized by dividing the weight of each particle by the sum of all weights). Hence, particles of low importance weight will have a high probability of not propagating to the next state. In other words, surviving particles usually cluster around areas with higher posterior probability. [00252] Again, the basic SIR PF filter has certain limitations because the samples are only predicted from the system model and then the most recent observation is used to adjust the importance weights of this prediction. Thus, in this example using a Mixture PF allows the addition of further samples predicted from the most recent observation in addition to the samples predicted from the system model. The importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs. In the context of perception map matching measurement model, a method may be employed that generates particles drawn from the measurement model. This may be done by searching for states, for which the measurement from the perception sensor is closely aligned (i.e., a high probability of perception detection given the current state) to measurements from the reference map at specific locations. A measure of how aligned the perception measurement given a specific state is by computing the probability of most recent measurement denoted by ^(^^ |^^ , ^) using the map-matching correlation factor. In other words, several matches can be found by matching the local map to the reference map at different states and savings states where the match between the local map and the reference map results in a high correlation factor. If such states are found, it is possible to generate particles drawn from the measurement model rather than the system model. For this process to be efficient, the search space (infinite number of states) REFERENCE NO: TPI-079PCT US PATENT APPLICATION within the reference map should be considered to effectively apply constraints that limit the search space and consequently reduce computational complexity. After new particles are successfully drawn from the map-matching based measurement model, the importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs. 
3.3.4 Measurement Model: Closed-Form Embodiments [00253] This section discusses the integration of a closed form model with the system model and includes specific, but non-limiting examples. As noted above, the closed form perception model is a non-probabilistic modelling approach that assumes objects detected by the perception sensor can be identified uniquely as objects in the map. Given an estimate of the range to an object and knowledge of which object is detected by the perception sensor in the map, a correspondence may be determined that relates the measurements to the states of the platform in the closed-form model. [00254] In this tightly-coupled Mixture PF integration, perception sensor raw data is used and is integrated with the inertial sensors. The perception sensor raw data used in the present navigation module in this example are ranges. In the update phase of the integration filter the ranges and range-rates can be used as the measurement updates to update the position and velocity states of the vehicle. The measurement model that relates these measurements to the position and velocity states is a nonlinear model. [00255] As is known, the KF integration solutions linearize this model. PF with its ability to deal with nonlinear models may provide improved performance for tightly- coupled integration because it can use the exact nonlinear measurement model, in addition to the fact that the system model may be a nonlinear model. ^ m ^ ^ [00256] The traditional techniques relying on KF used to linearize these equations about the range estimate obtained from the inertial sensors mechanization. PF is suggested in this example to accommodate nonlinear models, thus there is no need for linearizing this equation. A suitable nonlinear perception range model for M detected objects is: REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^ ^ M ^ ^ ^ [00257] Since the position state x in the above equation is in ECEF rectangular coordinates, it may be translated to Geodetic coordinates for the state vector used in the Mixture PF. The relationship between the Geodetic and Cartesian coordinates is given by: ^ x ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ ^ , where R N is the normal radius of curvature of the Earth’s ellipsoid and e is the eccentricity of the Meridian ellipse. Thus, the range model for Geodetic coordinates may be represented by: 1 ^ ^ R ^ h cos ^ cos ^ ^ x 1 2 ^ R ^ h cos ^ sin ^ ^ y 1 2 2 ^ R 1 ^ e 2 ^ h sin ^ ^ z 1 ^ b ^ ^ ^ 1 ^ ^ ^ ^ c ^ ^ ^ N ^ ^ ^ ^ N ^ ^ ^^ N ^ ^ ^ ^ r ^ ^ ^ ^ ^ ^ M ^ ^ ^ ^ r^m ^ 1 m ( v ^ v m m m m m x x x ) ^ 1 y ( v y ^ v y ) ^ 1 z ( v z ^ v z ) . as ^ ^ m ^ 1 m ( v ^ v m ) ^ 1 m ( v ^ m m m m x x x y y v y ) ^ 1 z ( v z ^ v z ) ^ ^ ^ ^ ^ 1 m ( v ^ v m ) ^ 1 m ( v ^ v m ) ^ 1 m ( v ^ v m ) ^ ^ m ^ ^ , . [00258] It will be appreciated that the above equation is linear in velocities, but it is nonlinear in position. This can be seen by examining the expression for the line of REFERENCE NO: TPI-079PCT US PATENT APPLICATION sight unit vector above. Again, there is no need for linearization because of the nonlinear capabilities of PF. The nonlinear model for range-rates of M targets, again in ECEF rectangular coordinates is: ^ ^ ^ 1 ^ 1 ^ 1 ^ 1 ^ 1 ^ 1 ^ ^ 1 ^ ^ ^ ^ ^ ^ The velocities here are in ECEF and need to be in local-level frame because this is part of the state vector in Mixture PF. 
The transformation uses the rotation matrix from the evel frame to ECEF (R e local-l ^ ) and is as follows: ^ v x ^ ^ v e ^ ^ ^ sin ^ ^ sin ^ cos ^ cos ^ cos ^ ^ ^ v e ^ ^ ^ ^ ^ ^ ^ ^ n ^ ^ ^ ^ object to the perception sensor will be expressed as follows: ^ T ^ ^ R N ^ h ^ cos ^ cos ^ ^ x m ^ ,^ ^ R ^ h ^ cos ^ sin ^ ^ y m ^ ,^^ R ^ 1 ^ e 2 ^ ^ h ^ sin ^ ^ z m ^ ^ 1 m ^ ^ N N ^ 2 ^ perception detected objects. [00259] Next, these concepts are illustrated in the following non-limiting examples. 3.3.4.1 Example 1: Measurement Model for Error-State System Model [00260] As discussed, the measurement model is a nonlinear model that relates the difference between the mechanization estimate of the ranges and range-rates and the perception sensor raw measurements (range measurements and range-rates) at a time REFERENCE NO: TPI-079PCT US PATENT APPLICATION epoch k, ^^^, to the states at time k, ^^^, and the measurement noise ^^. First, the perception sensor raw measurements are ^^ = [^^ ^ ^^ ^ ^^ ^ ^ ^ ^ ]^ for ^ detected objects. The nonlinear measurement model for the error-state model can be in the form: ^^^ = (^^^, ^^), where ^^^ = ^,^^^^ ^,^^^ ^,^ ^ ^^ ^ ^ ^^^ ^ ^,^^^ ^,^^^^ ^,^^^ ^,^^^^ ^,^^^ ^ ^,^ ^ ^,^ ^^ ^^ ^^ ^^ ^ and ^^ = ^^^ ~ , ^ ^ ^^ ~ , ^ ^ ^^ ^ ,^ ^^ ^ ^ ,^ ^ . model for the ranges is as follows: 1, Mech 1, ^ Mech 1 2 Mech 1 2 Mech 1 2 ^ ^ ^ rad k ^ ^ c , k ^ ^ ^ x k ^ x k ^ ^ ^ y k ^ y k ^ ^ ^ z k ^ z k ^ ^ ^ ^ ^ ^ M ^ ^ ^ ^ ^^^^ ^ ^^ ^^^^ ^,^ + ^^^^ ^ ^ ^^^ ^ ^^^^ ^^^^ ^ ^^^ ^^ ^ ^^^^ = ^^^^ = ^ ^^ ^^^^ + ^^^^^ ^^^ ^ ^^^^ ^^ ^^^^ ^ ^ ^^ ^ ^,^ ^ ^ ^ ^^ ^ the REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^^^^ ^ and ^ ^ ^ = [^^ ^ ^^ ^ ^^ ^]^ is the position of the m th perception detected object. [00262] The part of the measurement model for the range-rates is: ^ ^,^^^^ ^,^^^ ^ ^^ ^ ^ ^ REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^^^^ ^^^^ ^^^^ ^^^^ ^^^^ ^^ ^^ [00263] Furthermore, the mechanization version of the line-of-sight unit vector from ^^^ detected object to perception sensor receiver is expressed as follows: ^ ^ Mech ^ m ^ ^ Mech T ^ m ^ ^ Mech ^ m ^ ^ 2 ^ where the receiver position from mechanization is as defined above. [00264] The corrected (or estimated) version of the line-of-sight unit vector from ^^^ detected object to perception sensor receiver is expressed as follows: ^ ^ x Corr ^ x m ^ , ^ y Corr ^ y m , z Corr T ^ z m ^ ^ k k k k ^ ^ k k ^ 1m , Corr ^ ^ 2 ^ , 3.3.4.2 Example 2: Measurement Model for (1) Total-State System Model, (2) System Model With Another Integration Filter [00265] In this example, whether in the case of a total state system model or a system model based on another integration filter or state estimation technique, the measurement model of the current nonlinear state estimation technique is a nonlinear model that relates the perception sensor raw measurements (range measurements and range-rates) at a time epoch k, ^^, to the states at time k ,^^, and the measurement noise ^^. Moreover, this also applies to a system model based on another integration filter solution. First, the perception sensor raw measurements are ^^ = ^,^^^ ^,^^^ ^, ^ ^^^,^ ^^,^ ^ ^^^ ^ ^ ^,^^^ ^ ^ for ^ detected objects. The nonlinear REFERENCE NO: TPI-079PCT US PATENT APPLICATION measurement model can be in the form: ^^ = (^^, ^^) where ^^ ~ ^ ^~ ^ ^ ^ ^ ^ ^ The part of the measurement model for the ranges is: ^ ^,^^^ ^(^^ ^ ^ ^)^ + (^^ ^ ^ ^)^ + (^^ ^ ^ ^)^ + ^^ ^̃ ^ , where the ^^ ^^^,^ + ^^ ^^^ ^^ ^^^ ^^ ^ Where ^ ^ ^ = [^^ ^ object. 
the measurement model for the range-rates is: ^ ^,^^^ ^ 1 ^ ^,^ .(^^,^ ^ ^ ^,^ )+ 1 ^ ^,^ .(^ ^ ^,^ ^^,^ )+ 1 ^ ^ ^ ^,^ .(^^,^ ^^,^ )+ ^^ ^ ^ = ^ ^ ^ ^ ^ = [ ^ ^ ^ ^ ^ ^] is perception sensor velocity in the ECEF frame and thus: ^^,^ ^^,^ ^^^ ^^ ^^^ ^^ ^^^ ^^ ^^^ ^ ^^^ ^ ^^,^ ^ ^,^ ^ = ^ ^,^^ ^ ^ ^ ^^,^ ^ ^^,^ ^ = ^ ^^^ ^^ ^^^ ^^ ^^^ ^^ ^^^ ^^ ^^^ ^^ ^ ^ ^^,^ ^ ^ ^ ^^^ ^ ^^^ ^ [( ^ ^ ^ ^ ), (^ ^ ^ ^ ), (^ ^ ^)] 1 ^ ^ ^ ^ ^ ^ ^ ^ = = ^1^ , 1 , 1 ^ ^ ^ ^(^ ^ ^ ^^)^ + (^ ^ ^ ^^)^ + (^^ ^^ ^)^ ,^ ^,^ ^,^ where the perception sensor position is as defined above. 3.4 Other Optional Perception Sensor-Based Updates REFERENCE NO: TPI-079PCT US PATENT APPLICATION 3.4.1 Perception Sensor/Map-Based Positioning [00266] The above discussion of state estimation has included integrating perception sensor observables (e.g., ranges) with MEMS-based sensors using a tightly- coupled approach for state estimation. In a further aspect, the perception sensor data and the map may be used to estimate the state of the platform in the global frame at any given time and then integrating the perception sensor estimated states with MEMS- based sensors using a loosely-coupled integration approach. The map may include a feature-based map, location-based map or both. The position of the vehicle maybe estimated by matching the current sample from the perception sensor with a surveyed database of samples, where each sample is associated with a state. The sample that results in the highest match indicator (e.g., correlation factor) can be used to infer the state of the vehicle estimated by the perception sensor/map integration. Another approach is to use unique features in the map that can be detected by the perception sensor, and once detected, a position can be inferred. For example, if the perception sensor detects a very specific distribution of road signs across its field of view, the equivalent geometric distribution of signs can be searched for in the map and thereby infer position based on the perception sensor map match, previous estimate of the platform position and other constraints. Motion constraints like non-holonomic constraints can be applied to limit the search space for a match within the map. [00267] These loosely-coupled approaches may be employed with the error-state or the total-state system model. In the case of error-state system model, the loosely- coupled integration uses position and velocity updates from the perception sensor/map estimator. Thus the measurements are given as z ^^^^ ^^^^ ^ ^ ^ = ^^ ^^^ ^,^^^^ ^,^^^^ ^,^^^^ ^ ^^ ^ ^^ ^^ ^^ ^ , which consists of the and velocity The measurement model can therefore be given as REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^^^^ ^^^ ^ , noise in the perception [00268] In the case of total-state system model, the loosely-coupled integration uses position and velocity updates from the perception sensor/map estimator. Thus, the measurements are given as ^ ^ ^ ^ ^^ ^ ^^ ^^ noise in the perception 3.4.2 Perception Sensor Doppler Shift Update [00269] One of the main observables for some types of perception sensors is the doppler information associated with each target. This raw data is independent from the perception sensor range estimation. The incoming frequency at the sensor receiver is not exactly the frequency of the reflected signal by the target but is shifted from the original value transmitted by the perception sensor. This is called the Doppler shift, and it is due to relative motion between the object/target and the perception sensor receiver. 
The Doppler Shift from the ^^^ object is the projection of relative velocities (of object and receiver) onto the line-of-sight vector multiplied by the transmitted frequency and divided by the speed of light, and is given by: ^^ = {(^ ^ ^ ^).^ ^ }^^^ ^ REFERENCE NO: TPI-079PCT US PATENT APPLICATION Where ^^ = [^^ ^ ^^ ^ ^^ ^ ] is the ^^^ object velocity in the ECEF frame, ^ = [^ ^ ^ ^ ^ ^ ] true receiver velocity in the ECEF frame, ^^^ is the perception sensor’s transmitted frequency, and ^^ ^ ^^ ^^ ^ ^ ^ is the true line-of-sight vector reflection from the ^^^ object to the receiver. [00270] Given that ^^ is a direct observable and ^^ and ^^^ (frequency of the transmitted signal) are known, the velocity ^ is the only unknown. The part of the measurement model for the range-rates is: ^ ^,^^^ ^ 1 ^ ^,^ .(^ ^ ^ ^ ^,^ ^^,^ )+ 1 ^ ^,^ .(^^,^ ^^,^ )+ 1 ^ ^,^ .(^ ^ ^,^ ^^,^ )+ ^^ ^ ^ ^ ^ = ^^^, ^^, ^^^ is the true perception sensor receiver velocity in the ECEF frame and thus: ^^,^ ^^,^ ^^^ ^ ^^^ ^ ^^^ ^ ^^^ ^ ^^^ ^ ^^,^ ^ ^,^ ^ = ^ ^,^^^ ^ ^ ^ ^ ^ ^ ^ ^ ^^,^ ^ = ^ ^^^ ^^ ^^^ ^^ ^^^ ^^ ^^^ ^^ ^^^ ^ ^ ^ ^^,^ ^ receiver is expressed as follows: [(^ ^ ^ ^ ^ ), (^^ ^ ), (^ ^ ^ ) ]^ ^ 1 ^ ^ = ^ ^ ^ ^ = ^1 ^ ^ ^ ^ ^ , 1 , 1 ^ ^ ( ^ ^ ^ ^ )^ + (^ ^ ^ ^^ )^ + (^^ ^^ ^)^ ,^ ^,^ ^,^ Where [00271] This absolute update from the perception sensor may be used in conjunction with one of the first three proposed measurement models, namely, the Range-based model, the Nearest-Object Likelihood model or the perception sensor map-matching model to influence the importance weight of the particles. 3.5 Misalignment Detection and Estimation [00272] In yet another aspect, the techniques of this disclosure may be applied to detect and determine misalignment. As noted above, misalignment may refer to either a mounting misalignment when the device is strapped to the platform or a varying REFERENCE NO: TPI-079PCT US PATENT APPLICATION misalignment when the device is non-strapped. Thus, an optional misalignment procedure may be performed to calculate the relative orientation between the frame of the sensor assembly (i.e. device frame) and the frame of the moving platform. The following discussion includes four specific, non-limiting examples. [00273] The device heading, pitch, and roll (attitude angles of the device) can be different than the heading, pitch, and roll of the platform (attitude angles of the platform) and to get a navigation solution for the platform and/or device (processed on the device) with accuracy, the navigation algorithm should have the information about the misalignment as well as the absolute attitude of the platform. This misalignment detection and estimation is intended to enhance the navigation solution. To improve the navigation by applying constraints on the motion of the moving platform (for example in the form of specific updates), the platform attitude angles must be known. Since the device attitude angles are known, the misalignment angles between the device and platform frame are required to obtain the platform attitude angles. If the misalignment angles are known the below constraints are examples of what can be implemented to constrain the navigation solution especially during long absolute velocity outages (such as GNSS signal outages). Exemplary usages include nonholonomic constraints, vehicular dead reckoning, and any other position or velocity constraint that may be applied to the platform after the resolution of the attitude. 
Example 1: Heading Misalignment Using Absolute Velocity Updates [00274] In a first example, absolute velocity updates are used to estimate heading misalignment. In order to calculate the portable device heading from gyroscopes an initial heading of the device has to be known. If an absolute velocity source (such as from GNSS) is not available (for example because of interruption) but a magnetometer is available and with adequate readings, it will be used to get the initial device heading. If an absolute velocity source is available and if a magnetometer is not available or not with adequate readings, the velocity source will be used to get the initial heading of the moving platform, and a routine is run to get the initial heading misalignment of the portable device with respect to the moving platform (which is described below), then the initial device heading can be obtained. If an absolute velocity source is available and if a magnetometer is available and with adequate readings, a blended version of the initial device heading calculated from the above two options can be formed. REFERENCE NO: TPI-079PCT US PATENT APPLICATION [00275] This example details a suitable routine to get the initial heading misalignment of the portable device with respect to the moving platform if an absolute velocity source is available (such as GNSS). This routine needs: (i) a very first heading of the platform (person or vehicle or other) that can be obtained from the source of absolute velocity provided that the device is not stationary, (ii) the source of absolute velocity to be available for a short duration such as for example about 5 seconds. [00276] The procedure of this routine is to use the absolute velocity in the local level frame to generate acceleration in the local level frame, add gravity acceleration from a gravity model, then use the pitch and roll together with different heading values (device heading corrected for different heading misalignment values) to calculate the accelerations (more literally the specific forces) in the estimated sensor frame. The different heading misalignments are first chosen to cover all the 360 degrees ambiguity. The actual accelerometer readings, after being corrected for the sensor errors (such as biases, scale factors and non-orthogonalities), are compared to all the different calculated ones (example of techniques that can be used here are correlation techniques). A best sector of possible heading misalignments is chosen and divided into more candidates of heading misalignment in this sector. Different accelerations in the estimated sensor frame are generated and again compared to the actual sensor readings. The operation continues either until the accuracy of the solution saturates and no longer improves or until a pre-chosen depth of comparisons is received. [00277] As mentioned above, if an absolute velocity source (such as from GNSS) is not available (for example because of interruption) but a magnetometer is available and with adequate readings, it will be used to get the initial device heading. If an absolute velocity source is available and if a magnetometer is not available or not with adequate readings, the velocity source will be used to get the initial heading of the moving platform when it starts moving as ^ ^^^^^^^^ ^ = ^^^^2(^ ^ ^, ^^ ^) , where k in general is the time index of the absolute velocity readings, and k=0 for the first reading. 
A routine is run to get the initial device with respect to the moving platform (this routine is described below), then the initial device heading is obtained as, where a magnetometer is available and with adequate readings, a better blended version of the initial device heading calculated from the above- mentioned two options can be formed. REFERENCE NO: TPI-079PCT US PATENT APPLICATION [00278] The routine needs the track of heading of the platform (vehicle or other) during a short period (such as for example, of about 5 seconds), but there are almost no constraints on platform motion during this period except that the platform cannot be stationary the whole period, but temporary static period is accepted. This heading can be obtained by either one of the following: i) the first heading of the platform that can be obtained from the source of absolute velocity provided that the platform is not stationary, this heading is followed (for example) by a gyroscope-based calculation of heading to keep track of the platform heading if the device misalignment with respect to the platform is kept near constant (might slightly change but does not undergo big changes). ii) the track of absolute heading of the platform might be obtained from the absolute source of velocity during the short period during which this routine will run. If during this period the platform stops temporarily the last heading is used for the temporary stop period. The routine also needs the source of absolute velocity to be available for the same short period discussed above. This means that ^^ ^ , ^^ ^ , and ^^ ^ have to be available during this short period, at whatever data rate this absolute source provides. [00279] The first step of this routine is to use the absolute velocity in the local level frame to generate acceleration in the local level frame e e a e ^ v k ^ v k ^ 1 k ^ t 1 1 , where ^ rate of the absolute velocity source. The next step is to add gravity a gravity model to get specific forces in the local level frame ^ f e k ^ ^ a e k ^ ^ 0 ^ ^ f n ^ ^ ^ a n ^ ^ ^ ^ k ^ k ^ 0 u ^ ^ ^ ^ ^ ^ ^ u ^ ^ together with REFERENCE NO: TPI-079PCT US PATENT APPLICATION different candidate device heading values (calculated from the platform heading corrected for different candidate heading misalignment values) to calculate the accelerations (more literally the specific forces) in the estimated candidate sensor frame. Different heading misalignments are first chosen to cover all the 360 degrees ambiguity, for example, if the heading space is divided equally to 8 options, the following misalignments are the possible candidates to use candidate ^ ^ ^ 3 pi ^ pi ^ pi pi pi 3 pi ^ ^ ^ [00280] A rotation matrix for conversion from the device frame (i.e. the accelerometer frame) to the local level (ENU) frame may be used as follows ^ f x , candidate e k ^ ^ f ^ ^ k f y , candidate ^ ^ ^ T ^ n ^ ^ ^ ^ R ^ ^ f ^ ^ ^ x y T are ^^f z j f j f j ^ ^ where j is the timing index for the higher rate inertial readings (preferably readings are used after removal of the estimated sensor errors). readings are down- sampled to the relatively lower rate of the absolute velocity readings, for example, either by averaging or by dropping of the extra samples. 
The down-sampled version of these actual accelerometers readings are compared to all the different candidate accelerometer readings (example of comparison techniques that can be used here are correlation techniques some of which can be bias independent, differencing or REFERENCE NO: TPI-079PCT US PATENT APPLICATION calculating root mean squared (RMS) errors). A best sector of possible heading misalignments is chosen and divided into further candidates of heading misalignment in this sector. ^ 3pi [00281] For example, if the best sector was from a misalignment of 4 to a ^pi misalignment of 2 , this range will be further divided into 8 new candidates as provided below: ^A candidate one from ^ ^ 3 pi ^ 20 pi ^ 19 pi ^ 18 pi ^ 17 pi ^ 16 pi ^ 15 pi ^ pi ^ 2 ^ ^ . [00282] Then the previously described operations are repeated. Different candidate accelerations (or more literally specific forces) in the estimated sensor frame are generated and again compared to the down-sampled actual sensor readings. The operation continues either until the accuracy of the solution saturates and no longer improves or until a specific pre-chosen depth of comparison is achieved. An estimate of the misalignment between the portable device heading and the platform heading is obtained as the best ^A candidate together with an indication or measure of its accuracy from the depth of divisions the technique had undergone and the step separation of the last candidate pool for the misalignment. Thus, the initial device heading (that will be used to start the full navigation in this case) is computed from the platform heading and the estimated initial misalignment. Example 2: Heading Misalignment Using Radius of Rotation [00283] In the next example, the misalignment between a device and a platform may be determined from the radius of rotation of the device, utilizing motion sensor data in the presence or in the absence of absolute navigational information updates. Details regarding suitable techniques may be found in commonly-owned U.S. Patent No.10,274,317, issued April 30, 2019, which is hereby incorporated by reference in its entirety. Example 3: Heading Misalignment Using Acceleration and Deceleration [00284] In another example, the misalignment between a device and a platform REFERENCE NO: TPI-079PCT US PATENT APPLICATION may be determined from acceleration and/or deceleration of the platform, utilizing motion sensor data in the presence or in the absence of absolute navigational information updates. Details regarding suitable techniques may be found in commonly- owned U.S. Patent No. 9,797,727, issued October 24, 2017, which is hereby incorporated by reference in its entirety. Example 4: Pitch Misalignment Using Absolute Velocity Updates [00285] In a last illustrative example, absolute velocity updates may be used to estimate pitch misalignment. The device pitch angle can be different than the pitch angle of the platform because of mounting misalignment or varying misalignment when the device is non-strapped. To enhance the navigation solution, the pitch misalignment angle is calculated. By definition, pitch misalignment angle is the difference between the device pitch angle and the pitch angle of the platform. To calculate the pitch misalignment angle, a state estimation technique is used. 
One potential example of a system model that can be used is a Gauss-Markov process, while measurements are obtained from GNSS velocity and accelerometers measurements and applied as a measurement update through the measurement model. One suitable technique employs the following equations, where measurements are the difference between system and GNSS pitch angles: ^^^^^^^^^^^ = ^^^^ ^^^^^^ ^^^^ ^^^ ^^^^ ^^^^^^ = ^^^ ^^ ^ ^ ^ ^^^^ ^ ^ ^ ^^ + ^^^ System pitch angle is calculated using accelerometers readings (where ^^ is forward accelerometer reading, ^^ is lateral accelerometer reading, and ^^ is vertical REFERENCE NO: TPI-079PCT US PATENT APPLICATION accelerometer reading) and a calculated forward acceleration of the platform. The calculated forward acceleration of the platform can be either from, for example, GNSS velocity measurements (as in the above example) or from odometer speed measurements as another example. As an example, GNSS pitch angle is calculated using GNSS velocity measurements only. Obtaining a measurement and system model, a dedicated Kalman filter or particle filter can be used to obtain the final pitch misalignment angle of the system. Another option to obtain the pitch misalignment angle of the system is to use the described measurement and system models as a part of the larger system and measurement of the main integrated navigation filter (whether Kalman or Particle filter for example) and amending the states of that main filter to include the above described pitch misalignment state. A similar technique may also be applied to obtain roll misalignment. 3.6 Other Optional Observations [00286] Still further aspects of this disclosure relate to using other observables, including information from a GNSS positioning system and an odometer, as measurement updates in the state estimation technique when integrating the perception sensor data and the motion sensor data. These optional observations may be used to estimate a more accurate state. [00287] Three main types of INS/GNSS integration have been proposed to attain maximum advantage depending upon the type of use and choice of simplicity versus robustness, leading to three main integration architectures: loosely coupled, tightly coupled and ultra-tightly (or deeply) coupled. Loosely coupled integration uses an estimation technique to integrate inertial sensors data with the position and velocity output of a GNSS receiver. The distinguishing feature of this configuration is a separate filter for the GNSS and is an example of cascaded integration because of the two filters (GNSS filter and integration filter) used in sequence. Tightly coupled integration uses an estimation technique to integrate inertial sensors readings with raw GNSS data (i.e. pseudoranges that can be generated from code or carrier phase or a combination of both, and pseudorange rates that can be calculated from Doppler shifts) to get the vehicle position, velocity, and orientation. In this solution, there is no separate filter for GNSS, but there is a single common master filter that performs the integration. For the loosely coupled integration scheme, at least four satellites are needed to provide acceptable REFERENCE NO: TPI-079PCT US PATENT APPLICATION GNSS position and velocity input to the integration technique. 
The advantage of the tightly coupled approach is that less than four satellites can be used as this integration can provide a GNSS update even if fewer than four satellites are visible, which is typical of a real life trajectory in urban environments as well as thick forest canopies and steep hills. Another advantage of tightly coupled integration is that satellites with poor GNSS measurements can be detected and rejected from being used in the integrated solution. Ultra-tight (deep) integration has two major differences with regard to the other architectures. Firstly, there is a basic difference in the architecture of the GNSS receiver compared to those used in loose and tight integration. Secondly, the information from INS is used as an integral part of the GNSS receiver, thus, INS and GNSS are no longer independent navigators, and the GNSS receiver itself accepts feedback. It should be understood that the present navigation solution may be utilized in any of the foregoing types of integration. [00288] It is to be noted that the state estimation or filtering techniques used for inertial sensors/GNSS integration may work in a total-state approach or in an error state approach, each of which has characteristics described above. It would be known to a person skilled in the art that not all the state estimation or filtering techniques can work in both approaches. [00289] To help illustrate these above concepts, a first error state system model and total-state system model examples are described below that integrate absolute navigational information with an error-state system model. In these present examples, a three-dimensional (3D) navigation solution is provided by calculating 3D position, velocity and attitude of a moving platform. The relative navigational information includes motion sensor data obtained from MEMS-based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, such as sensor assembly 106 of device 100 in FIG.1. A source of absolute navigational information 116 is also used and host processor 102 may implement integration module 114 to integrate the information using a nonlinear state estimation technique, such as for example, Mixture PF. The reference-based absolute navigational information 116, such as from a GNSS receiver, and the motion sensor data, such as from sensor assembly 102, are integrated using Mixture PF in either a loosely coupled, tightly coupled, or hybrid loosely/tightly coupled architecture, having a system and measurement model, REFERENCE NO: TPI-079PCT US PATENT APPLICATION wherein the system model is either a nonlinear error-state system model or a nonlinear total-state model without linearization or approximation that are used with the traditional KF-based solutions and their linearized error-state system models. The filter may optionally be programmed to comprise advanced modeling of inertial sensors stochastic drift. If the filter has the last option, it may optionally be further programmed to use derived updates for such drift from GNSS, where appropriate. The filter may optionally be programmed to automatically detect and assess the quality of GNSS information, and further provide a means of discarding or discounting degraded information. The filter may optionally be programmed to automatically select between a loosely coupled and a tightly coupled integration scheme. 
Moreover, where tightly coupled architecture is selected, the GNSS information from each available satellite may be assessed independently and either discarded (where degraded) or utilized as a measurement update. In these examples, the navigation solution of the device, whether tethered or non-tethered to the moving platform, is given by ^^ = [^ ^ ^ ^ ^, ^^, ^, ^^ , ^^ , ^^ , ^^, ^^, ^^] ^where ^^ is the latitude of the vehicle, ^^ is the longitude, ^ is the altitude, ^^ ^ is the velocity along East direction, ^^ ^ is the velocity along North direction, ^^ ^ is the velocity along Up vertical direction, ^^ is the pitch angle, ^^ is the roll angle, and ^^ is the azimuth angle. 3.6.1 Error-State System Model Integration [00290] In the present example, the navigation module is utilized to determine a three-dimensional (3D) navigation solution by calculating 3D position, velocity and attitude of a moving platform. Specifically, the module comprises absolute navigational information from a GNSS receiver, relative navigational information from MEMS- based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, and a processor programmed to integrate the information using a nonlinear state estimation technique, such as for example, Mixture PF having the system and measurement models defined herein below. Thus, in this example, the present navigation module targets a 3D navigation solution employing MEMS-based inertial sensors/GNSS integration using Mixture PF. [00291] The motion model used in the mechanization is given by ^^ = ^^^^^(^^^^, ^^^^), where ^^^^ is the control input which is the inertial sensors readings that correspond to transforming the state from time epoch ^ 1 to time epoch ^, this REFERENCE NO: TPI-079PCT US PATENT APPLICATION will be the convention used in this explanation for the sensor readings for nomenclature purposes. The nonlinear error-state system model (also called state transition model) is given by ^^^ = ^(^^^^^, ^^^^, ^^^^), where ^^ is the process noise which is independent of the past and present states and accounts for the uncertainty in the platform motion and the control inputs. The measurement model is ^^^ = (^^^, ^^), where ^^ is the measurement noise which is independent of the past and current states and the process noise and accounts for uncertainty in GNSS readings. 3.6.1.1 Navigation Solution [00292] The state of the device whether tethered or non-tethered to the moving platform is ^^ = [^ ^ ^ ^ ^, ^^, ^, ^^, ^^ , ^^ , ^^, ^^, ^^] ^ , where ^^ is the latitude of the vehicle, ^^ is the longitude, ^ is the altitude, ^^ ^ is the velocity along East direction, ^^ ^ is the velocity along North direction, ^^ ^ is the velocity along Up vertical direction, ^^ is the pitch angle, ^^ is the roll angle, and ^^ is the azimuth angle. [00293] Since this is an error-state approach, the motion model is used externally in what is called inertial mechanization, which is a nonlinear model as mentioned earlier, the output of this model is the navigation states of the module, such as position, velocity, and attitude. The state estimation or filtering technique estimates the errors in the navigation states obtained by the mechanization, so the estimated state vector by this state estimation or filtering technique is for the error states, and the system model is an error-state system model which transition the previous error-state to the current error- state. 
The mechanization output is corrected for these estimated errors to provide the corrected navigation states, such as corrected position, velocity and attitude. The estimated error-state is about a nominal value which is the mechanization output, the mechanization can operate either unaided in an open loop mode, or can receive feedback from the corrected states, this case is called closed-loop mode. The error-state system model commonly used is a linearized model (to be used with KF-based solutions), but the work in this example uses a nonlinear error-state model to avoid the linearization and approximation. [00294] The motion model used in the mechanization is given by: ^^ = ^^^^^ (^^^^, ^^^^) REFERENCE NO: TPI-079PCT US PATENT APPLICATION where ^^^^is the control input which is the inertial sensors readings that correspond to transforming the state from time epoch ^ 1 to time epoch ^, this will be the convention used in this explanation for the sensor readings just used for nomenclature purposes. The nonlinear error-state system model (also called state transition model) is given by: ^^^ = ^(^^^^^, ^^^^, ^^^^) where ^^ is the process noise which is independent of the past and present states and accounts for the uncertainty in the platform motion and the control inputs. The measurement model is: ^^^ = (^^^, ^^) where ^^ is the measurement noise which is independent of the past and current states and the process noise and accounts for uncertainty in GNSS readings. [00295] In order to discuss some advantages of Mixture PF, which is the filtering technique used in this example, some aspects of the basic PF called Sampling/Importance Resampling (SIR) PF are first discussed. In the prediction phase, the SIR PF samples from the system model, which does not depend on the last observation. In MEMS-based INS/GNSS integration, the sampling based on the system model, which depends on inertial sensor readings as control inputs, makes the SIR PF suffer from poor performance because with more drift this sampling operation will not produce enough samples in regions where the true probability density function (PDF) of the state is large, especially in the case of MEMS-based sensors. Because of the limitation of the SIR PF, it has to use a very large number of samples to assure good coverage of the state space, thus making it computationally expensive. Mixture PF is one of the variants of PF that aim to overcome this limitation of SIR and to use much lower number of samples while not sacrificing the performance. The much lower number of samples makes Mixture PF applicable in real time. [00296] As described above, in the SIR PF the samples are predicted from the system model, and then the most recent observation is used to adjust the importance weights of this prediction. The Mixture PF adds to the samples predicted from the system model some samples predicted from the most recent observation. The REFERENCE NO: TPI-079PCT US PATENT APPLICATION importance weights of these new samples are adjusted according to the probability that they came from the samples of the last iteration and the latest control inputs. [00297] For the application at hand, in the sampling phase of the Mixture PF used in the present embodiment proposed in this example, some samples predicted according to the most recent observation are added to those samples predicted according to the system model. The most recent observation is used to adjust the importance weights of the samples predicted according to the system model. 
The importance weights of the additional samples predicted according to the most recent observation are adjusted according to the probability that they were generated from the samples of the last iteration and the system model with latest control inputs. When the GNSS signal is not available, only samples based on the system model are used, but when GNSS is available both types of samples are used which gives better performance and thus leads to a better performance during GNSS outages. Also adding the samples from GNSS observation leads to faster recovery to true position after GNSS outages. Measurement Model With Loosely-Coupled Integration [00298] When loosely-coupled integration is used, position and velocity updates are obtained from the GNSS receiver. Thus the measurements are given as ^^ = ^^^ ^^^ ^ ^^^ ^,^^^ ^,^^^ ^,^^^ ^ ^ ^^^ ^ ^^ ^^ ^^ ^ , which consists of the GNSS components along East, North, can therefore be given as ^ ^ ^^^ ^ ^^^ ^^ ^^^ ^^ ^^ ^^ ^ ^^^^ ^^^ ^ ^^ ^^ ^ ^ ^ , is the noise in the GNSS Tightly-Coupled Integration REFERENCE NO: TPI-079PCT US PATENT APPLICATION [00299] In loosely-coupled integration, at least four satellites are needed to provide acceptable GNSS position and velocity, which are used as measurement updates in the integration filter. One advantage of tightly-coupled integration is that it can provide GNSS measurement updates even when the number of visible satellites is three or fewer, thereby improving the operation of the navigation system in degraded GNSS environments by providing continuous aiding to the inertial sensors even during limited GNSS satellite visibility (like in urban areas and downtown cores). [00300] Tightly-coupled integration takes advantage of the fact that, given the present satellite-rich GPS constellation as well as other GNSS constellations, it is unlikely that all the satellites will be lost in any canyon. Therefore, the tightly coupled scheme of integration uses information from the few available satellites. This is a major advantage over loosely coupled integration with INS, which fails to acquire any aid from GNSS and considers the situation of fewer than four satellites as an outage. Another benefit of working in the tightly coupled scheme is that satellites with bad measurements can be detected and rejected. [00301] In tightly-coupled integration, GNSS raw data is used and is integrated with the inertial sensors. The GNSS raw data used in the present navigation module in this example are pseudoranges and Doppler shifts. From the measured Doppler for each visible satellite, the corresponding pseudorange rate can be calculated. In the update phase of the integration filter the pseudoranges and pseudorange rates can be used as the measurement updates to update the position and velocity states of the vehicle. The measurement model that relates these measurements to the position and velocity states is a nonlinear model. [00302] As is known, the KF integration solutions linearize this model. PF with its ability to deal with nonlinear models is better capable of giving improved performance for tightly-coupled integration because it can use the exact nonlinear measurement model. This is in addition to the fact that the system model is always (in tightly or loosely coupled integration) a nonlinear model. [00303] There are three main observables related to GPS: pseudoranges, Doppler shift (from which pseudorange rates are calculated), and the carrier phase. The present example utilizes only to the first two observables. 
REFERENCE NO: TPI-079PCT US PATENT APPLICATION [00304] Pseudoranges are the raw ranges between satellites and receiver. A pseudorange to a certain satellite is obtained by measuring the time it takes for the GPS signal to propagate from this satellite to the receiver and multiplying it by the speed of light. The pseudorange measurement for the m th satellite is: ^m ^c ^ t ^ t ^ ere ^ m wh is the pseudorange observation from the mth satellite to receiver (in meters), t t is the transmit time, t r is the receive time, and c is the speed of light (in meters/sec). [00305] For the GPS errors, the satellite and receiver clocks are not synchronized and each of them has an offset from the GPS system time. Despite the several errors in the pseudorange measurements, the most effective is the offset of the inexpensive clock used inside the receiver from the GPS system time. [00306] The pseudorange measurement for the m th satellite, showing the different errors contaminating it, is given as follows: ^ m ^r m ^ c ^ t ^ c ^ t ^ cI m ^ cT m m r s ^ ^ ^ range the receiver antenna at time t r and the satellite antenna at time t t (in meters), ^t r is the receiver clock offset (in seconds), ^t s is the satellite clock offset (in , I m is the ionospheric del m ay (in is the ^ m ^ is the error in range due to a combination of receiver noise and other errors such as multipath effects and orbit prediction errors (in meters). [00307] The incoming frequency at the GPS receiver is not exactly the L1 or L2 frequency but is shifted from the original value sent by the satellite. This is called the Doppler shift and it is due to relative motion between the satellite and the receiver. The Doppler shift of the m th satellite is the projection of relative velocities (of satellite and receiver) onto the line of sight vector multiplied by the transmitted frequency and divided by the speed of light, and is given by: REFERENCE NO: TPI-079PCT US PATENT APPLICATION ^ ^ , where ^^ = ^^^ ^ , ^^ ^ , ^^ ^ , ^ is the ^^^ satellite velocity in the ECEF frame, ^ = ^^^, ^^, ^^^ is the true receiver velocity in the ECEF frame, ^^ is the satellite transmitted [(^^ ^ ^ ),(^^ ^ ^ ),(^^ ^ ^ )] ^ ^ ^ ^ ^ ^ is the true line-of- ed Doppler shift, the pseudorange rate ^ m [00308] Given the measur ^ is calculated as follows: ^^ m ^ ^ D m c L 1 After compensating for the satellite clock bias, Ionospheric and Tropospheric errors, the corrected pseudorange can be written as: ^ m m c ^r ^ c ^ t r ^ ^ ^ m ^ where, ^ ^ m ^ represents the total effect of residual errors. The true geometric range from m th satellite to receiver is the Euclidean distance and is given as follows: rm ^ ( x ^ x m ) 2 ^ ( y ^ y m ) 2 ^ ( z ^ z m ) 2 ^ x ^ x m ECEF frame, ^^ = [^^, ^^, ^^]^ is the position of the ^^^ satellite at the corrected transmission time but seen in the ECEF frame at the corrected reception time of the signal. Satellite positions are initially calculated at the transmission time, and this position is in the ECEF frame which is not in the ECEF frame at the time of receiving the signal. This time difference may be approximately in the range of 70-90 milliseconds, during which the Earth and the ECEF rotate, and this can cause a range error of about 10-20 meters. To correct for this fact, the satellite position at transmission time has to be represented at the ECEF frame at the reception time not the transmission time. One can either do the correction before the measurement model or in the measurement model itself. 
[00309] The details of using Ephemeris data to calculate the satellites' positions and velocities are known, and can subsequently be followed by the correction mentioned above.

[00310] In vector form, the corrected pseudorange equation may be expressed as follows:

$$\rho^{c}_m = \left\| \mathbf{x} - \mathbf{x}^m \right\| + b_r + \tilde\varepsilon_{\rho_m}$$

where $b_r = c\,\delta t_r$ is the error in range (in meters) due to the receiver clock bias. This equation is nonlinear. The traditional techniques relying on KF linearize these equations about the pseudorange estimate obtained from the inertial sensor mechanization. The PF is suggested in this example to accommodate nonlinear models, thus there is no need to linearize this equation. The nonlinear pseudorange model for the $M$ satellites visible to the receiver is:

$$\begin{bmatrix}\rho^{c}_1 \\ \vdots \\ \rho^{c}_M\end{bmatrix} = \begin{bmatrix}\sqrt{(x - x^1)^2 + (y - y^1)^2 + (z - z^1)^2} + b_r + \tilde\varepsilon_{\rho_1} \\ \vdots \\ \sqrt{(x - x^M)^2 + (y - y^M)^2 + (z - z^M)^2} + b_r + \tilde\varepsilon_{\rho_M}\end{bmatrix}$$

The receiver position here is in ECEF rectangular coordinates and desirably should be in Geodetic coordinates, which are part of the state vector used in the Mixture PF. The relationship between the Geodetic and Cartesian coordinates is given by:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} (R_N + h)\cos\varphi\cos\lambda \\ (R_N + h)\cos\varphi\sin\lambda \\ \left\{R_N(1 - e^2) + h\right\}\sin\varphi \end{bmatrix}$$

where $R_N$ is the normal radius of curvature of the Earth's ellipsoid and $e$ is the eccentricity of the Meridian ellipse. Thus the pseudorange model is obtained by substituting these expressions for $x$, $y$, and $z$ into the model above, so that the pseudorange measurements for the $M$ visible satellites are written directly in terms of the geodetic position states $(\varphi, \lambda, h)$, the receiver clock bias $b_r$, and the residual errors.

The true pseudorange rate between the $m$-th satellite and the receiver is expressed as:

$$\dot r_m = 1^m_x\,(v_x - v^m_x) + 1^m_y\,(v_y - v^m_y) + 1^m_z\,(v_z - v^m_z)$$

The pseudorange rate for the $m$-th satellite can be modeled as follows:

$$\dot\rho_m = 1^m_x\,(v_x - v^m_x) + 1^m_y\,(v_y - v^m_y) + 1^m_z\,(v_z - v^m_z) + c\,\dot{\delta t}_r + \varepsilon_{\dot\rho_m}$$

where $\dot{\delta t}_r$ is the receiver clock drift (unit-less), $d_r = c\,\dot{\delta t}_r$ is the receiver clock drift (in meters/sec), and $\varepsilon_{\dot\rho_m}$ is the error in the observation (in meters/sec).

[00312] This last equation is linear in the velocities, but it is nonlinear in position. This can be seen by examining the expression for the line-of-sight unit vector above. Again, there is no need for linearization because of the nonlinear capabilities of the PF. The nonlinear model for the pseudorange rates of $M$ satellites, again in ECEF rectangular coordinates, is:

$$\begin{bmatrix}\dot\rho_1 \\ \vdots \\ \dot\rho_M\end{bmatrix} = \begin{bmatrix} 1^1_x\,(v_x - v^1_x) + 1^1_y\,(v_y - v^1_y) + 1^1_z\,(v_z - v^1_z) + d_r + \varepsilon_{\dot\rho_1} \\ \vdots \\ 1^M_x\,(v_x - v^M_x) + 1^M_y\,(v_y - v^M_y) + 1^M_z\,(v_z - v^M_z) + d_r + \varepsilon_{\dot\rho_M}\end{bmatrix}$$

The velocities here are in the ECEF frame and have to be expressed in terms of the velocities in the local-level frame because these are part of the state vector in the Mixture PF. The transformation uses the rotation matrix from the local-level frame to ECEF ($R^e_\ell$) and is as follows:

$$\begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} = R^e_\ell \begin{bmatrix} v^E \\ v^N \\ v^U \end{bmatrix} = \begin{bmatrix} -\sin\lambda & -\sin\varphi\cos\lambda & \cos\varphi\cos\lambda \\ \cos\lambda & -\sin\varphi\sin\lambda & \cos\varphi\sin\lambda \\ 0 & \cos\varphi & \sin\varphi \end{bmatrix}\begin{bmatrix} v^E \\ v^N \\ v^U \end{bmatrix}$$

Furthermore, the line-of-sight unit vector from the $m$-th satellite to the receiver is expressed as follows:

$$\mathbf{1}_m = \left[1^m_x,\ 1^m_y,\ 1^m_z\right]^T = \frac{\left[(x - x^m),\ (y - y^m),\ (z - z^m)\right]^T}{\sqrt{(x - x^m)^2 + (y - y^m)^2 + (z - z^m)^2}}$$

with $x$, $y$, and $z$ expressed in terms of the geodetic coordinates as given above. The foregoing combined equations constitute the overall nonlinear model for the $M$ visible satellites.

[00314] When tightly-coupled integration is employed, the system model can be augmented with two states, namely: the bias of the GPS receiver clock $b_r$ and its drift $d_r$. Both of these are modeled as follows:

$$\dot b_r = d_r + w_b, \qquad \dot d_r = w_d$$

where $w_b$ and $w_d$ are the noise terms. In discrete form, this becomes:

$$b_{r,k} = b_{r,k-1} + \left(d_{r,k-1} + w_{b,k-1}\right)\Delta t, \qquad d_{r,k} = d_{r,k-1} + w_{d,k-1}\,\Delta t$$

where $\Delta t$ is the sampling time. The same receiver clock model is used again in the total-state formulation described below.
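For illustration only, the following non-limiting sketch (in Python) evaluates the nonlinear pseudorange and pseudorange-rate models above directly from the geodetic position, local-level velocity, and receiver clock states, with no linearization; the WGS-84 constants, the function names, and the omission of residual atmospheric errors are assumptions made for this example.

```python
import numpy as np

A_WGS84 = 6378137.0        # semi-major axis (m), assumed WGS-84
E2 = 6.69437999014e-3      # first eccentricity squared, assumed WGS-84

def geodetic_to_ecef(lat, lon, h):
    """Convert geodetic coordinates (rad, rad, m) to ECEF Cartesian coordinates."""
    rn = A_WGS84 / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # normal radius of curvature
    x = (rn + h) * np.cos(lat) * np.cos(lon)
    y = (rn + h) * np.cos(lat) * np.sin(lon)
    z = (rn * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def enu_to_ecef_matrix(lat, lon):
    """Rotation matrix from the local-level (E, N, U) frame to ECEF."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([[-so, -sl * co, cl * co],
                     [ co, -sl * so, cl * so],
                     [0.0,       cl,      sl]])

def predict_pseudorange_and_rate(lat, lon, h, v_enu, b_r, d_r, sat_pos, sat_vel):
    """Predict the corrected pseudorange and pseudorange rate for one satellite."""
    p = geodetic_to_ecef(lat, lon, h)
    v = enu_to_ecef_matrix(lat, lon) @ np.asarray(v_enu)
    diff = p - sat_pos
    rng = np.linalg.norm(diff)
    los = diff / rng                       # line-of-sight unit vector (satellite -> receiver)
    rho = rng + b_r                        # predicted corrected pseudorange (m)
    rho_dot = los @ (v - sat_vel) + d_r    # predicted pseudorange rate (m/s)
    return rho, rho_dot
```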
Hybrid Loosely/Tightly Coupled Scheme

[00315] The proposed navigation module may adopt a hybrid loosely/tightly coupled solution that attempts to take advantage both of the updates of the inertial sensors' drift from GNSS when suitable (these rely on loosely-coupled updates, since their calculations use GNSS position and velocity readings) and of the benefits of tightly-coupled integration. Another advantage of loosely-coupled integration that the hybrid solution may benefit from is the GNSS-derived heading update relying on GNSS velocity readings when in motion (this update is only applicable if misalignment was resolved or there is no misalignment in a certain application).

[00316] When the availability and the quality of the GNSS position and velocity readings pass the assessment, the loosely-coupled measurement update is performed for position, velocity, possibly azimuth (when applicable as discussed earlier), and possibly the inertial sensors' stochastic errors (if this option is used). Each update can be performed according to its own quality assessment. Whenever the testing procedure detects degraded GNSS performance, either because the number of visible satellites falls below four or because the GNSS quality examination failed, the filter can switch to a tightly-coupled update mode. Furthermore, each satellite can be assessed independently of the others to check whether it is adequate to use for the update. This check again may exploit the improved performance of the Mixture PF with the robust modeling. Thus the pseudorange estimate, for each visible satellite, to the receiver position estimated from the prediction phase of the Mixture PF can be compared to the measured one. If the measured pseudorange of a certain satellite is too far off, this is an indication of degradation (e.g., the presence of reflections with loss of direct line-of-sight), and this satellite's measurements can be discarded, while the other satellites can be used for the update.
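For illustration only, the following non-limiting sketch (in Python) outlines the switching logic just described; the helper callables, the satellite-record fields, and the fixed innovation gate are placeholders standing in for the quality assessments and measurement updates discussed in this disclosure.

```python
def hybrid_gnss_update(predicted_state, gnss_fix, raw_sats,
                       fix_quality_ok, predict_pseudorange, gate_m=30.0):
    """Select the GNSS update mode for one epoch.

    predicted_state     -- state estimate from the prediction phase of the filter
    gnss_fix            -- GNSS position/velocity solution, or None if unavailable
    raw_sats            -- per-satellite raw measurements (each with a .pseudorange
                           field plus whatever predict_pseudorange needs)
    fix_quality_ok      -- callable implementing the GNSS quality assessment
    predict_pseudorange -- callable returning the predicted pseudorange of one
                           satellite for the predicted state
    gate_m              -- innovation gate (m) for per-satellite rejection
    """
    if gnss_fix is not None and len(raw_sats) >= 4 and fix_quality_ok(gnss_fix):
        # Loosely coupled: use the GNSS position/velocity solution (and, when
        # applicable, azimuth and sensor-drift updates derived from it).
        return "loosely_coupled", gnss_fix

    # Tightly coupled: assess each satellite independently and keep only those
    # whose measured pseudorange agrees with the prediction (reject reflections
    # and other degraded measurements).
    accepted = []
    for sat in raw_sats:
        innovation = sat.pseudorange - predict_pseudorange(predicted_state, sat)
        if abs(innovation) < gate_m:
            accepted.append(sat)
    return "tightly_coupled", accepted
```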
3.6.2 Total-State System Model Integration

[00317] In this example, the navigation module is utilized to determine a 3D navigation solution by calculating the 3D position, velocity and attitude of a moving platform. Specifically, the module comprises absolute navigational information from a GNSS receiver, relative navigational information from MEMS-based inertial sensors consisting of three orthogonal accelerometers and three orthogonal gyroscopes, and a processor programmed to integrate the information using a nonlinear state estimation technique, such as, for example, Mixture PF having the system and measurement models defined herein below. Thus, in this example, the present navigation module targets a 3D navigation solution employing MEMS-based inertial sensors/GNSS integration using Mixture PF.

[00318] In this example the absolute navigational information from a GNSS receiver and the self-contained sensors, which consist of three accelerometers and three gyroscopes, are integrated using Mixture PF in either a loosely coupled, tightly coupled, or hybrid loosely/tightly coupled architecture, having a system and measurement model, wherein the system model is a nonlinear total-state system model without the linearization or approximation that are used with the traditional KF-based solutions and their linearized error-state system models. The filter is programmed to comprise advanced modeling of the inertial sensors' stochastic drift and is programmed to use derived updates for such drift from GNSS, where appropriate. The filter may optionally be programmed to automatically detect and assess the quality of GNSS information, and further provide a means of discarding or discounting degraded information. The filter may optionally be programmed to automatically select between a loosely coupled and a tightly coupled integration scheme. Moreover, where the tightly coupled architecture is selected, the GNSS information from each available satellite may be assessed independently and either discarded (where degraded) or utilized as a measurement update.

Navigation Solution

[00319] The state of the device, whether tethered or non-tethered to the moving platform, is $x_k = \left[\varphi_k,\ \lambda_k,\ h_k,\ v^E_k,\ v^N_k,\ v^U_k,\ p_k,\ r_k,\ A_k\right]^T$, where $\varphi_k$ is the latitude of the vehicle, $\lambda_k$ is the longitude, $h_k$ is the altitude, $v^E_k$ is the velocity along the East direction, $v^N_k$ is the velocity along the North direction, $v^U_k$ is the velocity along the Up vertical direction, $p_k$ is the pitch angle, $r_k$ is the roll angle, and $A_k$ is the azimuth angle.

[00320] Since this is a total-state approach, the system model is the motion model itself, which is a nonlinear model as mentioned earlier; the output of this model is the navigation states of the module, such as position, velocity, and attitude. The state estimation or filtering technique estimates the navigation states themselves directly, so the state vector estimated by this technique is the total states or navigation states, and the system model is a total-state system model which transitions the previous total state to the current total state. The traditional and commonly used navigation solutions use a linearized error-state system model (to be used with KF-based solutions), but the work in this example uses a nonlinear total-state model to avoid the linearization and approximation.

[00321] The nonlinear total-state system model (also called the state transition model) is given by $x_k = f\left(x_{k-1},\ u_{k-1},\ w_{k-1}\right)$, where $u_{k-1}$ is the control input, which is the inertial sensor readings that correspond to the transition from time epoch $k-1$ to time epoch $k$; this convention for the sensor readings is used in this explanation just for nomenclature purposes. Furthermore, $w_{k-1}$ is the process noise, which is independent of the past and present states and accounts for the uncertainty in the platform motion and the control inputs. The measurement model is $z_k = h\left(x_k,\ \nu_k\right)$, where $\nu_k$ is the measurement noise, which is independent of the past and current states and the process noise and accounts for uncertainty in the GNSS readings.

[00322] For the application at hand, in the sampling phase of the Mixture PF used in the present embodiment proposed in this example, some samples predicted according to the most recent observation are added to those samples predicted according to the system model. The most recent observation is used to adjust the importance weights of the samples predicted according to the system model. The importance weights of the additional samples predicted according to the most recent observation are adjusted according to the probability that they were generated from the samples of the last iteration and the system model with the latest control inputs. When the GNSS signal is not available, only samples based on the system model are used, but when GNSS is available both types of samples are used, which gives better overall performance and in turn better performance during GNSS outages. Adding the samples based on the GNSS observation also leads to faster recovery to the true position after GNSS outages.
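For illustration only, the following non-limiting sketch (in Python) shows one possible form of the Mixture PF sampling phase described above; the callable interfaces, the number of observation-based samples, and the resampling choice are assumptions made for this example.

```python
import numpy as np

def mixture_pf_step(particles, weights, system_model, meas_likelihood,
                    sample_from_obs, transition_prob, gnss_obs=None,
                    n_from_obs=100, rng=None):
    """One predict/update cycle of a Mixture PF (particles and weights as arrays).

    system_model(p)          -- propagates one particle with the latest control inputs
    meas_likelihood(p, z)    -- likelihood of observation z given particle p
    sample_from_obs(z, n)    -- draws n particles around the observation z
    transition_prob(p, prev) -- probability that p could have been generated from the
                                previous particle set and the system model
    """
    rng = rng or np.random.default_rng()
    pred = np.array([system_model(p) for p in particles])   # samples from the system model
    if gnss_obs is None:
        return pred, weights                                 # prediction only during outages
    # Weight model-predicted samples by the likelihood of the most recent observation.
    w_pred = weights * np.array([meas_likelihood(p, gnss_obs) for p in pred])
    # Add samples drawn around the observation; weight them by the probability that
    # they were generated from the last iteration's samples and the system model.
    extra = sample_from_obs(gnss_obs, n_from_obs)
    w_extra = np.array([transition_prob(p, particles) for p in extra])
    new = np.vstack([pred, extra])
    w = np.concatenate([w_pred, w_extra])
    w = w / w.sum()
    idx = rng.choice(len(new), size=len(particles), p=w)     # resample back to fixed size
    return new[idx], np.full(len(particles), 1.0 / len(particles))
```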
Measurement Model With Loosely-Coupled Integration

[00323] When loosely-coupled integration is used, position and velocity updates are obtained from the GNSS receiver. Thus the measurements are given as $z_k = \left[\varphi_{k,GPS}\;\; \lambda_{k,GPS}\;\; h_{k,GPS}\;\; v^{E}_{k,GPS}\;\; v^{N}_{k,GPS}\;\; v^{U}_{k,GPS}\right]^T$, which consists of the GNSS readings for the latitude, longitude, altitude, and the velocity components along the East, North, and Up directions respectively. The measurement model can therefore be given as:

$$z_k = h\left(x_k,\ \nu_k\right) = \begin{bmatrix}\varphi_k \\ \lambda_k \\ h_k \\ v^E_k \\ v^N_k \\ v^U_k\end{bmatrix} + \begin{bmatrix}\nu_{\varphi,k} \\ \nu_{\lambda,k} \\ \nu_{h,k} \\ \nu_{v^E,k} \\ \nu_{v^N,k} \\ \nu_{v^U,k}\end{bmatrix}$$

where $\nu_k = \left[\nu_{\varphi,k}\;\; \nu_{\lambda,k}\;\; \nu_{h,k}\;\; \nu_{v^E,k}\;\; \nu_{v^N,k}\;\; \nu_{v^U,k}\right]^T$ is the noise in the GNSS observations used for update.

Tightly-Coupled Integration

Augmenting the System Model

[00324] The system model can be augmented with two states, namely the bias of the GNSS receiver clock $b_r$ and its drift $d_r$. Both of these are modeled as $\dot b_r = d_r + w_b$ and $\dot d_r = w_d$, and the discrete form given earlier can be used.

Measurement Model With Tightly-Coupled Integration

[00325] In this example, since this is a total-state solution, the measurement model in the case of tightly-coupled integration is a nonlinear model that relates the GNSS raw measurements (pseudorange measurements and pseudorange rates) at a time epoch $k$, $z_k$, to the states at time $k$, $x_k$, and the measurement noise $\nu_k$. First, the GNSS raw measurements are $z_k = \left[\rho^{1}_{c,k}\ \cdots\ \rho^{M}_{c,k}\ \ \dot\rho^{1}_{k}\ \cdots\ \dot\rho^{M}_{k}\right]^T$ for $M$ visible satellites. The nonlinear measurement model is of the form $z_k = h\left(x_k,\ \nu_k\right)$, where $\nu_k = \left[\nu^{1}_{\rho,k}\ \cdots\ \nu^{M}_{\rho,k}\ \ \nu^{1}_{\dot\rho,k}\ \cdots\ \nu^{M}_{\dot\rho,k}\right]^T$ is the measurement noise vector.

[00326] The part of the measurement model for the pseudoranges is:

$$\rho^{m}_{c,k} = \sqrt{\left(x_k - x^m_k\right)^2 + \left(y_k - y^m_k\right)^2 + \left(z_k - z^m_k\right)^2} + b_{r,k} + \nu^{m}_{\rho,k}$$

where the receiver position $\left[x_k\ y_k\ z_k\right]^T$ is obtained from the geodetic states as described earlier, and $\mathbf{x}^m_k = \left[x^m_k\ y^m_k\ z^m_k\right]^T$ is the position of the $m$-th satellite at the corrected transmission time but seen in the ECEF frame at the corrected reception time of the signal.

[00327] The part of the measurement model for the pseudorange rates is:

$$\dot\rho^{m}_{k} = 1^{m}_{x,k}\left(v_{x,k} - v^{m}_{x,k}\right) + 1^{m}_{y,k}\left(v_{y,k} - v^{m}_{y,k}\right) + 1^{m}_{z,k}\left(v_{z,k} - v^{m}_{z,k}\right) + d_{r,k} + \nu^{m}_{\dot\rho,k}$$

where $\mathbf{v}^m_k = \left[v^{m}_{x,k},\ v^{m}_{y,k},\ v^{m}_{z,k}\right]^T$ is the $m$-th satellite velocity in the ECEF frame and $\mathbf{v}_k = \left[v_{x,k},\ v_{y,k},\ v_{z,k}\right]^T$ is the true receiver velocity in the ECEF frame, obtained from the local-level velocity states through the rotation matrix $R^e_\ell$ as described earlier:

$$\begin{bmatrix} v_{x,k} \\ v_{y,k} \\ v_{z,k} \end{bmatrix} = R^e_\ell \begin{bmatrix} v^E_k \\ v^N_k \\ v^U_k \end{bmatrix}$$

The line-of-sight unit vector from the $m$-th satellite to the receiver is expressed as follows:

$$\mathbf{1}^m_k = \left[1^{m}_{x,k},\ 1^{m}_{y,k},\ 1^{m}_{z,k}\right]^T = \frac{\left[\left(x_k - x^m_k\right),\ \left(y_k - y^m_k\right),\ \left(z_k - z^m_k\right)\right]^T}{\sqrt{\left(x_k - x^m_k\right)^2 + \left(y_k - y^m_k\right)^2 + \left(z_k - z^m_k\right)^2}}$$

where the receiver position is as defined above.

Hybrid Loosely/Tightly Coupled Scheme

[00329] The proposed navigation module may adopt a hybrid loosely/tightly coupled solution, as explained above for the error-state system model integration and for the same reasons discussed there.

3.7 Additional Optional Modules

[00330] Still further aspects of this disclosure relate to optional operations or modules that can be utilized with the techniques described above. The following section gives non-limiting examples of some of these options.
3.7.1 Perception-Based Relative Pose Estimator (Object Flow Analysis)

[00331] In this section, a system that uses perception sensor measurements to estimate the relative change in the vehicle's pose is proposed. The output of this module may be fused with the inertial measurements and GNSS measurements to estimate more accurate states. One possible flow chart of the proposed system is depicted in FIG. 22.

[00332] The first step of this system is a block that filters out all moving objects from the perception sensor frame. A perception sensor frame is defined as a full sample of the field of view. The filtered perception sensor frame consists of the centroids of static objects detected by the perception sensor. The second step is to link the same objects between frames to one another based on different measurement characteristics such as the Perception Cross Section (PCS) and the Reflected Power of each object in the frame.

[00333] The second block is the Object Flow Tracker step, which tracks objects in the present frame, denoted by $F_k$, compared to objects in the previous frame, denoted by $F_{k-1}$. This is the association/correspondence step that tags objects that exist in $F_{k-1}$ but not in $F_k$ as objects exiting the frame. In addition, the association step should also tag objects that exist in the $F_k$ frame but not in $F_{k-1}$ as objects that are entering the present frame and may be used to help in estimating the vehicle's next pose. Finally, only objects that are present in both $F_k$ and $F_{k-1}$ may be tagged as objects relevant to the relative pose estimation at time $k$.

[00334] After resolving the association problem between objects in the previous and current frames, the next step is to compute the magnitude and direction of the movement of static objects in the current frame relative to the previous frame. The Relative Pose Estimation step estimates the change in the vehicle's pose between the two frames (i.e., the change in the vehicle's pose within the time interval $[k-1, k]$) based on the information about the movement of static objects in the present frame relative to the previous frame.

3.7.2 Perception-Doppler Based Relative Pose Estimation

[00335] Some perception sensors have the advantage over other perception sensors of providing Doppler information for the surrounding objects. An approach using Doppler information relative to static objects to estimate the relative pose change between two perception sensor frames is proposed. A perception sensor frame is defined as a full sample of Dopplers across the field of view of the perception sensor. The first step is to filter out all moving targets (like vehicles and pedestrians) using the Doppler information and the odometer speed of the vehicle. The output of the first step is a frame containing all static detected objects, and associated with each object is the Doppler relative to the host vehicle.

[00336] The next step utilizes the relative Doppler of the static objects in the current frame to estimate the change in the vehicle's states. Hence, the Relative Pose Estimation step estimates the change in the vehicle's pose between the two frames (i.e., the change in the vehicle's pose within the time interval $[k-1, k]$) based on the information about the relative Doppler of static objects in the present frame.
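For illustration only, the following non-limiting sketch (in Python) shows one way the first step above (separating static from moving detections using Doppler and the odometer speed) could be realized, together with a least-squares estimate of the host forward speed from the static detections; the forward-facing sensor geometry, the sign convention, and the gating threshold are assumptions made for this example, and only one component of the relative pose change is recovered here.

```python
import numpy as np

def split_static_and_estimate_speed(azimuths, dopplers, odo_speed, gate=0.5):
    """Separate static detections from moving ones and estimate host speed.

    azimuths  -- bearing of each detection relative to the sensor boresight (rad)
    dopplers  -- measured radial (Doppler) velocity of each detection (m/s),
                 negative when the detection is closing on the sensor
    odo_speed -- host speed from the odometer (m/s), used for gating
    gate      -- threshold (m/s) for declaring a detection static
    """
    az = np.asarray(azimuths, dtype=float)
    dop = np.asarray(dopplers, dtype=float)
    # A static object seen from a forward-moving host produces a Doppler of
    # approximately -v_host * cos(azimuth).
    expected = -odo_speed * np.cos(az)
    static = np.abs(dop - expected) < gate
    if not np.any(static):
        return static, odo_speed
    # Least-squares host forward speed from the static detections only.
    c = np.cos(az[static])
    v_host = -np.sum(c * dop[static]) / np.sum(c * c)
    return static, v_host
```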
3.7.3 Augmenting Perception Sensor Signature Map with Reflected Power

[00337] Using perception sensor measurements, a mapping vehicle is used to build an occupancy grid or a 3D model of the world (i.e., an online map). The measurements are associated with an accurate position of the vehicle. As discussed above, this map can be used for localization purposes during a concurrent navigation session, for example by finding the best match between the local map created by a vehicle's perception sensor and the online map used as a reference map. The best match is associated with a position, and this position is inferred to be the current position of the vehicle. On many occasions, multiple good matches may be found; however, there is no way to determine which one is the best match (for example, if the correlation indicator between the online map and the local map is very close for several matches).

[00338] To resolve the issue of multiple matches, the reflected power associated with each measurement from the perception sensor is recorded during the mapping process. In other words, when the map is built, the range, elevation, azimuth and the accurate state of the vehicle are recorded alongside the reflected power from the object as information stored in the map (this reflected power is unique to the state of the vehicle). Here, the method takes advantage of the fact that different materials in the environment surrounding the vehicle will reflect the RF signals with different magnitudes in the direction of the perception sensor. For example, metals reflect most of the power back toward the perception sensor, while wood hardly reflects anything. Moreover, the orientation of a metal object with respect to the perception sensor RF signal also contributes to how much power is reflected in the direction of the perception sensor. Therefore, the reflected power can be used as a third or fourth dimension for 2D or 3D occupancy grids respectively. By using reflected power, the ambiguity as to what the state of the vehicle should be can be reduced (this reduces the possibility of generating many good matches between the local map and the reference map).
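For illustration only, the following non-limiting sketch (in Python) shows how reflected power stored as an extra grid layer could be used to rank candidate matches that have similar occupancy scores; the scoring functions and the weighting factor are assumptions made for this example, and the caller is assumed to supply candidate offsets that keep the slices within the reference grid.

```python
import numpy as np

def match_with_reflected_power(local_occ, local_pwr, ref_occ, ref_pwr, candidates):
    """Rank candidate alignments using occupancy AND reflected-power similarity.

    local_occ / local_pwr -- small 2-D grids built from the current perception frame
    ref_occ / ref_pwr     -- reference (online map) grids at the same resolution
    candidates            -- list of (row, col) offsets with similar occupancy scores
    """
    h, w = local_occ.shape
    best, best_score = None, -np.inf
    for r, c in candidates:
        ref_o = ref_occ[r:r + h, c:c + w]
        ref_p = ref_pwr[r:r + h, c:c + w]
        occ_score = np.sum(local_occ * ref_o)                       # occupancy correlation
        pwr_score = -np.sum(np.abs(local_pwr - ref_p) * local_occ)  # power agreement on occupied cells
        score = occ_score + 0.1 * pwr_score                         # assumed relative weighting
        if score > best_score:
            best, best_score = (r, c), score
    return best, best_score
```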
3.7.4 Perception-Based Simultaneous Localization and Mapping

[00339] Further, a perception-based Simultaneous Localization and Mapping (SLAM) process can be incorporated with the techniques of this disclosure. As will be appreciated, SLAM involves determining position information for the platform while building a map. It is assumed that a map is not provided and hence the absolute position of objects is unknown. Thus, the state of the platform and the map, which contains an object list, are unknown. A practical solution to this problem is to utilize Graph-SLAM algorithms to estimate the state of the platform and build the map as a list of landmarks. In a graph-based SLAM, the nodes in the graph represent the different states of the platform and the links between the nodes depict the constraints set by the motion sensors. Moreover, landmarks are also represented as nodes, and the link between a landmark and the platform's state node at time $k$ represents a distance constraint. The SLAM equations are solved by finding the optimal state of the nodes (i.e., position, attitude), such that all constraints are met. There are also several techniques utilized by Graph-SLAM to incorporate the uncertainty of the motion model and the measurement model into the SLAM equations. It is also assumed that the initial state of the platform is known. Moreover, Online Graph-SLAM can be used to reduce the complexity of the graph by removing older states and keeping only one node to represent the current state of the vehicle.

[00340] The perception sensor data along with the object detection module can be used to estimate the distance to the centroids of the objects within the perception sensor's field of view. Motion of detected objects can be used to predict the next state of the vehicle; hence, the Euclidean distance between the current state and the previous state can be used to link both nodes in the graph. The uncertainty of the motion model can be propagated to the Euclidean distance computation and used to modulate the importance of the links between poses in the graph. Moreover, the measurement model uncertainty can be propagated to the distance between a landmark and the current state and used to modulate the importance of the links between the current states and detected landmarks.
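For illustration only, the following non-limiting toy sketch (in Python, using SciPy) casts the graph described above as a least-squares problem over pose and landmark nodes, with motion constraints between consecutive poses and distance constraints to landmarks, each weighted by an assumed uncertainty; the restriction to 2-D positions (no attitude), the prior anchoring the known initial state, and the crude initialization are simplifications made for this example.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_graph_slam(odometry, ranges, n_landmarks, sigma_odo=0.1, sigma_rng=0.5):
    """Toy 2-D Graph-SLAM: poses and landmarks are x/y points.

    odometry -- list of (dx, dy) displacements between consecutive poses
                (motion-model constraints between pose nodes)
    ranges   -- list of (pose_index, landmark_index, measured_range)
                (measurement-model constraints between pose and landmark nodes)
    """
    n_poses = len(odometry) + 1

    def unpack(v):
        poses = v[:2 * n_poses].reshape(n_poses, 2)
        lms = v[2 * n_poses:].reshape(n_landmarks, 2)
        return poses, lms

    def residuals(v):
        poses, lms = unpack(v)
        res = [poses[0] / 1e-3]                      # strong prior: initial state known (origin)
        for k, (dx, dy) in enumerate(odometry):      # motion constraints, weighted by sigma_odo
            res.append((poses[k + 1] - poses[k] - np.array([dx, dy])) / sigma_odo)
        for k, j, r in ranges:                       # distance constraints, weighted by sigma_rng
            res.append(np.atleast_1d((np.linalg.norm(poses[k] - lms[j]) - r) / sigma_rng))
        return np.concatenate(res)

    # Crude initialization: dead-reckoned poses, landmarks offset from the first pose.
    x0 = np.zeros(2 * n_poses + 2 * n_landmarks)
    x0[2:2 * n_poses] = np.cumsum(np.asarray(odometry, dtype=float), axis=0).ravel()
    x0[2 * n_poses:] += 1.0
    sol = least_squares(residuals, x0)
    return unpack(sol.x)
```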
CONTEMPLATED EMBODIMENTS

[00341] The present disclosure describes the body frame to be x forward, y positive towards the right side of the body, and z axis positive downwards. It is contemplated that any body-frame definition can be used for the application of the method and apparatus described herein.

[00342] It is contemplated that the techniques of this disclosure can be used with a navigation solution that may optionally utilize automatic zero velocity period or static period detection with its possible updates and inertial sensors bias recalculations, a non-holonomic updates module, advanced modeling and/or calibration of inertial sensors errors, derivation of possible measurement updates for them from GNSS when appropriate, automatic assessment of GNSS solution quality and detection of degraded performance, automatic switching between loosely and tightly coupled integration schemes, and assessment of each visible GNSS satellite when in tightly coupled mode, and finally can possibly be used with a backward smoothing module with any type of backward smoothing technique, either running post mission or in the background on buffered data within the same mission.

[00343] It is further contemplated that techniques of this disclosure can also be used with a mode of conveyance technique or a motion mode detection technique to establish the mode of conveyance. This enables the detection of pedestrian mode among other modes such as, for example, driving mode. When pedestrian mode is detected, the method presented in this disclosure can be made operational to determine the misalignment between the device and the pedestrian.

[00344] It is further contemplated that techniques of this disclosure can also be used with a navigation solution that is further programmed to run, in the background, a routine to simulate artificial outages in the absolute navigational information and estimate the parameters of another instance of the state estimation technique used for the solution in the present navigation module to optimize the accuracy and the consistency of the solution. The accuracy and consistency are assessed by comparing the temporary background solution during the simulated outages to a reference solution. The reference solution may be one of the following examples: the absolute navigational information (e.g. GNSS); the forward integrated navigation solution in the device integrating the available sensors with the absolute navigational information (e.g. GNSS) and possibly with the optional speed or velocity readings; or a backward smoothed integrated navigation solution integrating the available sensors with the absolute navigational information (e.g. GNSS) and possibly with the optional speed or velocity readings. The background processing can run either on the same processor as the forward solution processing or on another processor that can communicate with the first processor and can read the saved data from a shared location. The outcome of the background processing solution can benefit the real-time navigation solution in its future run (i.e. the real-time run after the background routine has finished running), for example, by having improved values for the parameters of the forward state estimation technique used for navigation in the present module.

[00345] It is further contemplated that the techniques of this disclosure can also be used with a navigation solution that is further integrated with maps (such as street maps, indoor maps or models, or any other environment map or model in cases of applications that have such maps or models available) in addition to the different core uses of map information discussed above, and with a map matching or model matching routine. Map matching or model matching can further enhance the navigation solution during degradation or interruption of the absolute navigational information (such as GNSS). In the case of model matching, a sensor or a group of sensors that acquire information about the environment can be used, such as, for example, laser range finders, cameras and vision systems, or sonar systems. These new systems can be used either as an extra help to enhance the accuracy of the navigation solution during absolute navigational information problems (degradation or absence), or they can totally replace the absolute navigational information in some applications.

[00346] It is further contemplated that the techniques of this disclosure can also be used with a navigation solution that, when working either in a tightly coupled scheme or a hybrid loosely/tightly coupled option, need not be bound to utilize pseudorange measurements (which are calculated from the code, not the carrier phase, and thus are called code-based pseudoranges) and the Doppler measurements (used to get the pseudorange rates). The carrier phase measurement of the GNSS receiver can be used as well, for example: (i) as an alternate way to calculate ranges instead of the code-based pseudoranges, or (ii) to enhance the range calculation by incorporating information from both code-based pseudorange and carrier-phase measurements; one such enhancement is the carrier-smoothed pseudorange.

[00347] It is further contemplated that the techniques of this disclosure can also be used with a navigation solution that relies on an ultra-tight integration scheme between the GNSS receiver and the other sensors' readings.

[00348] It is further contemplated that the techniques of this disclosure can also be used with a navigation solution that uses various wireless communication systems that can also be used for positioning and navigation, either as an additional aid (which will be more beneficial when GNSS is unavailable) or as a substitute for the GNSS information (e.g. for applications where GNSS is not applicable). Examples of these wireless communication systems used for positioning are those provided by cellular phone towers and signals, radio signals, digital television signals, WiFi, or WiMAX.
For example, for cellular phone based applications, an absolute coordinate from cell phone towers and the ranges between the indoor user and the towers may be utilized for positioning, whereby the range might be estimated by different methods, among which are calculating the time of arrival or the time difference of arrival. A method known as Enhanced Observed Time Difference (E-OTD) can be used to get the known coordinates and range. The standard deviation for the range measurements may depend upon the type of oscillator used in the cell phone, the cell tower timing equipment, and the transmission losses. WiFi positioning can be done in a variety of ways that include, but are not limited to, time of arrival, time difference of arrival, angles of arrival, received signal strength, and fingerprinting techniques, among others; all of these methods provide different levels of accuracy. The wireless communication system used for positioning may use different techniques for modeling the errors in the ranging, angles, or signal strength from wireless signals, and may use different multipath mitigation techniques. All the above mentioned ideas, among others, are also applicable in a similar manner for other wireless positioning techniques based on wireless communication systems.

[00349] It is further contemplated that the techniques of this disclosure can also be used with a navigation solution that utilizes aiding information from other moving devices. This aiding information can be used as an additional aid (which will be more beneficial when GNSS is unavailable) or as a substitute for the GNSS information (e.g. for applications where GNSS based positioning is not applicable). One example of aiding information from other devices may rely on wireless communication systems between different devices. The underlying idea is that the devices that have a better positioning or navigation solution (for example, having GNSS with good availability, accuracy, or other aspects indicative of GNSS quality) can help the devices with degraded or unavailable GNSS to get an improved positioning or navigation solution. This help relies on the well-known position of the aiding device(s) and the wireless communication system for positioning the device(s) with degraded or unavailable GNSS. This contemplated variant refers to one or both of the circumstances where: (i) the device(s) with degraded or unavailable GNSS utilize the methods described herein and get aiding from other devices and the communication system, and (ii) the aiding device with GNSS available, and thus a good navigation solution, utilizes the methods described herein. The wireless communication system used for positioning may rely on different communication protocols, and it may rely on different methods, such as, for example, time of arrival, time difference of arrival, angles of arrival, and received signal strength, among others. The wireless communication system used for positioning may use different techniques for modeling the errors in the ranging and/or angles from wireless signals, and may use different multipath mitigation techniques.

[00350] The embodiments and techniques described above may be implemented in software as various interconnected functional blocks or distinct software modules.
This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries. In any event, the functional blocks and software modules implementing the embodiments described above, or features of the interface, can be implemented by themselves, or in combination with other operations, in either hardware or software, either within the device entirely, or in conjunction with the device and other processor-enabled devices in communication with the device, such as a server.

[00351] Although a few embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications can be made to these embodiments without changing or departing from their scope, intent or functionality. The terms and expressions used in the preceding specification have been used herein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalents of the features shown and described or portions thereof, it being recognized that the disclosure is defined and limited only by the claims that follow.

Claims

METHOD AND SYSTEM FOR MAP BUILDING USING PERCEPTION AND MOTION SENSORS

CLAIMS

What is claimed is:

1. A method for providing an integrated navigation solution in real-time for a device within a moving platform, the method comprising: a) obtaining motion sensor data from a sensor assembly of the device, b) obtaining perception sensor data from at least one perception sensor for the platform; c) generating an integrated navigation solution for the platform based at least in part on the obtained motion sensor data; d) building an online map for an area encompassing the platform in a first instance of time using perception sensor data based at least in part on the integrated navigation solution during the first instance of time; f) revising the integrated navigation solution in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique, wherein a prediction phase involving a system model is used to propagate predictions about a state of the platform and an update phase involving at least one measurement model relating measurements to the state is used to update the state of the platform, wherein the nonlinear state estimation technique comprises using a nonlinear measurement model for perception sensor data, wherein integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly-coupled, and wherein the revising comprises: i) using the obtained motion sensor data in the nonlinear state estimation technique; and ii) integrating perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information; and e) providing: i) the revised integrated navigation solution when available; and ii) the integrated navigation solution when the revised integrated solution is not available.

2. The method of claim 1, wherein the at least one perception sensor is at least one of radar, an optical camera, lidar, a thermal camera, an IR camera and an ultrasonic sensor.

3. The method of claim 1, wherein the at least one perception sensor is at least one radar that outputs radar measurements for the platform.

4. The method of claim 1, wherein the at least one perception sensor is at least one optical sensor that outputs optical samples for the platform.

5. The method of claim 1, wherein the at least one perception sensor comprises at least one radar that outputs radar measurements for the platform and at least one optical sensor that outputs optical samples for the platform.

6. The method of claim 4 or 5, further comprising determining depth information for objects detected within the optical samples by at least one of: i) estimating depth for an object and deriving range, bearing and elevation; and ii) obtaining depth readings for an object from the at least one optical sensor and deriving range, bearing and elevation.

7. The method of claim 6, further comprising performing a scene reconstruction operation for a local area surrounding the platform based at least in part on the determined depth information for objects detected within the optical samples.
8. The method of claim 1, wherein the at least one perception sensor comprises at least two types of perception sensors and wherein building the online map comprises using one type of perception sensor and wherein integrating the motion sensor data comprises integrating data from another type of perception sensor directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the data from the other type of perception sensor in the nonlinear state estimation technique.

9. The method of claim 1, wherein the building an online map for an area encompassing the platform is performed based at least in part on satisfaction of a favorable condition.

10. The method of claim 1, wherein the measurement model comprises at least one of: i) a range-based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the online map; ii) a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the optical samples, an estimated state of the platform and a nearest object identification from the online map; iii) a map matching model based at least in part on a probability distribution derived by correlating the online map to a local map generated using the optical samples and an estimated state of the platform; and iv) a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the online map.

11. The method of claim 1 or 10, wherein the nonlinear measurement model further comprises models for optical sensor-related or radar-related errors comprising any one or any combination of environmental errors, sensor-based errors and dynamic errors.

12. The method of claim 1 or 10, wherein the nonlinear measurement model is configured to handle errors in the map information.

13. The method of claim 1, wherein the nonlinear state estimation technique comprises at least one of: i) an error-state system model; ii) a total-state system model, wherein the integrated navigation solution is output directly by the total-state model; and iii) a system model receiving input from an additional state estimation technique that integrates the motion sensor data.

14. The method of claim 13, wherein the nonlinear state estimation technique comprises an error-state system model and wherein providing the integrated navigation solution comprises correcting an inertial mechanization output with the updated nonlinear state estimation technique.

15. The method of claim 13, wherein the system model comprises a system model receiving input from an additional state estimation technique, and wherein the additional state estimation technique integrates any one or any combination of: i) inertial sensor data; ii) odometer or means for obtaining platform speed data; iii) pressure sensor data; iv) magnetometer data; and v) absolute navigational information.

16. The method of claim 14, wherein the system model of the nonlinear state estimation technique further comprises a motion sensor error model.

17. The method of claim 1, wherein the nonlinear state estimation technique comprises at least one of: i) a Particle Filter (PF); ii) a PF, wherein the PF comprises a Sampling/Importance Resampling (SIR) PF; and iii) a PF, wherein the PF comprises a Mixture PF.
18. The method of claim 1, further comprising integrating a source of absolute navigational information with the integrated navigation solution.

19. The method of claim 1, further comprising integrating a source of absolute navigational information with the integrated navigation solution, wherein building the online map for an area encompassing the platform is performed based at least in part on quality of the absolute navigational information.

20. The method of claim 1, further comprising storing and retrieving the online map based on a current position of the platform.

21. The method of claim 1, further comprising determining a misalignment between a frame of the sensor assembly and a frame of the platform, wherein the misalignment is at least one of: i) a mounting misalignment; and ii) a varying misalignment.

22. The method of claim 1, further comprising determining a misalignment between a frame of the sensor assembly and a frame of the platform, wherein the misalignment is determined using any one or any combination of: i) a source of absolute velocity; ii) a radius of rotation calculated from the motion sensor data; and iii) leveled horizontal components of acceleration readings along forward and lateral axes from the motion sensor data.

23. A system for providing an integrated navigation solution in real-time for a device within a moving platform, comprising: a device having a sensor assembly configured to output motion sensor data; at least one perception sensor providing perception sensor data; and at least one processor, coupled to receive the motion sensor data, the perception sensor data, and operative to: A) generate an integrated navigation solution for the platform, based at least in part on the motion sensor data; B) build an online map for an area encompassing the platform in a first instance of time using perception sensor data based at least in part on the integrated navigation solution during the first instance of time; C) revise the integrated navigation solution in a second instance of time based at least in part on the motion sensor data using a nonlinear state estimation technique, wherein a prediction phase involving a system model is used to propagate predictions about a state of the platform and an update phase involving at least one measurement model relating measurements to the state is used to update the state of the platform, wherein the nonlinear state estimation technique comprises using a nonlinear measurement model for perception sensor data, wherein integrating the motion sensor data and perception sensor data in the nonlinear state estimation technique is tightly-coupled, and wherein the revising comprises: i) using the received motion sensor data in the nonlinear state estimation technique; and ii) integrating perception sensor data directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the online map information; and D) provide: i) the revised integrated navigation solution when available; and ii) the integrated navigation solution when the revised integrated solution is not available.

24. The system of claim 23, wherein the at least one perception sensor is at least one of radar, an optical camera, lidar, a thermal camera, an IR camera and an ultrasonic sensor.

25. The system of claim 23, wherein the at least one perception sensor is at least one radar that outputs radar measurements for the platform.
26. The system of claim 19, wherein the at least one perception sensor is at least one optical sensor that outputs optical samples for the platform.

27. The system of claim 23, wherein the at least one perception sensor comprises at least one radar that outputs radar measurements for the platform and at least one optical sensor that outputs optical samples for the platform.

28. The system of claim 23, wherein the at least one perception sensor comprises at least two types of perception sensors and wherein building the online map comprises using one type of perception sensor and wherein integrating the motion sensor data comprises integrating data from another type of perception sensor directly by updating the nonlinear state estimation technique using the nonlinear measurement models and the data from the other type of perception sensor in the nonlinear state estimation technique.

29. The system of claim 23, wherein the measurement model comprises at least one of: i) a range-based model based at least in part on a probability distribution of measured ranges using an estimated state of the platform and the online map; ii) a nearest object likelihood model based at least in part on a probability distribution of distance to an object detected using the optical samples, an estimated state of the platform and a nearest object identification from the online map; iii) a map matching model based at least in part on a probability distribution derived by correlating the online map to a local map generated using the optical samples and an estimated state of the platform; and iv) a closed-form model based at least in part on a relation between an estimated state of the platform and ranges to objects from the online map.

30. The system of claim 23, wherein the nonlinear state estimation technique comprises at least one of: i) an error-state system model; ii) a total-state system model, wherein the integrated navigation solution is output directly by the total-state model; and iii) a system model receiving input from an additional state estimation technique that integrates the motion sensor data.

31. The system of claim 23, wherein the sensor assembly includes an accelerometer and a gyroscope.

32. The system of claim 31, wherein the sensor assembly is implemented as a Micro Electro Mechanical System (MEMS).

33. The system of claim 23, further comprising a source of absolute navigational information.

34. The system of claim 23, further comprising any one or any combination of: A) an odometer or means for obtaining platform speed; B) a pressure sensor; C) a magnetometer.
PCT/US2025/021747 2024-03-27 2025-03-27 Method and system for map building using perception and motion sensors Pending WO2025207878A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202463570752P 2024-03-27 2024-03-27
US63/570,752 2024-03-27
US202519091403A 2025-03-26 2025-03-26
US19/091,403 2025-03-26

Publications (1)

Publication Number Publication Date
WO2025207878A1 true WO2025207878A1 (en) 2025-10-02

Family

ID=95450216

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/021747 Pending WO2025207878A1 (en) 2024-03-27 2025-03-27 Method and system for map building using perception and motion sensors

Country Status (1)

Country Link
WO (1) WO2025207878A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8250921B2 (en) 2007-07-06 2012-08-28 Invensense, Inc. Integrated motion processing unit (MPU) with MEMS inertial sensing and embedded digital electronics
US8952832B2 (en) 2008-01-18 2015-02-10 Invensense, Inc. Interfacing application programs and motion sensors of a device
US9797727B2 (en) 2013-09-16 2017-10-24 Invensense, Inc. Method and apparatus for determination of misalignment between device and vessel using acceleration/deceleration
US10274317B2 (en) 2013-09-16 2019-04-30 Invensense, Inc. Method and apparatus for determination of misalignment between device and vessel using radius of rotation
US20200039523A1 (en) * 2018-07-31 2020-02-06 Nio Usa, Inc. Vehicle control system using nonlinear dynamic model states and steering offset estimation
US20220107184A1 (en) * 2020-08-13 2022-04-07 Invensense, Inc. Method and system for positioning using optical sensor and motion sensors
US11422253B2 (en) 2018-11-19 2022-08-23 Tdk Corportation Method and system for positioning using tightly coupled radar, motion sensors and map information

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8250921B2 (en) 2007-07-06 2012-08-28 Invensense, Inc. Integrated motion processing unit (MPU) with MEMS inertial sensing and embedded digital electronics
US8952832B2 (en) 2008-01-18 2015-02-10 Invensense, Inc. Interfacing application programs and motion sensors of a device
US9797727B2 (en) 2013-09-16 2017-10-24 Invensense, Inc. Method and apparatus for determination of misalignment between device and vessel using acceleration/deceleration
US10274317B2 (en) 2013-09-16 2019-04-30 Invensense, Inc. Method and apparatus for determination of misalignment between device and vessel using radius of rotation
US20200039523A1 (en) * 2018-07-31 2020-02-06 Nio Usa, Inc. Vehicle control system using nonlinear dynamic model states and steering offset estimation
US11422253B2 (en) 2018-11-19 2022-08-23 Tdk Corportation Method and system for positioning using tightly coupled radar, motion sensors and map information
US20220107184A1 (en) * 2020-08-13 2022-04-07 Invensense, Inc. Method and system for positioning using optical sensor and motion sensors
US11875519B2 (en) 2020-08-13 2024-01-16 Medhat Omr Method and system for positioning using optical sensor and motion sensors

Similar Documents

Publication Publication Date Title
US11506512B2 (en) Method and system using tightly coupled radar positioning to improve map performance
US11875519B2 (en) Method and system for positioning using optical sensor and motion sensors
US11422253B2 (en) Method and system for positioning using tightly coupled radar, motion sensors and map information
El-Sheimy et al. Indoor navigation: State of the art and future trends
US10281279B2 (en) Method and system for global shape matching a trajectory
US10365363B2 (en) Mobile localization using sparse time-of-flight ranges and dead reckoning
Hsu et al. Hong Kong UrbanNav: An open-source multisensory dataset for benchmarking urban navigation algorithms
EP2133662B1 (en) Methods and system of navigation using terrain features
US20170023659A1 (en) Adaptive positioning system
WO2019092418A1 (en) Method of computer vision based localisation and navigation and system for performing the same
CN115343745B (en) High-precision satellite positioning method assisted by three-dimensional laser radar
CN113406682A (en) Positioning method, positioning device, electronic equipment and storage medium
US20220049961A1 (en) Method and system for radar-based odometry
US20250305851A1 (en) Method and system for map building using radar and motion sensors
JP2016080460A (en) Moving body
WO2016196717A2 (en) Mobile localization using sparse time-of-flight ranges and dead reckoning
Wang et al. UGV‐UAV robust cooperative positioning algorithm with object detection
Aggarwal GPS-based localization of autonomous vehicles
US20240302183A1 (en) Method and system for crowdsourced creation of magnetic map
Zhou et al. Localization for unmanned vehicle
EP4166989A1 (en) Methods and systems for determining a position and an acceleration of a vehicle
WO2025207878A1 (en) Method and system for map building using perception and motion sensors
Wei Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework
Abdellattif Multi-sensor fusion of automotive radar and onboard motion sensors for seamless land vehicle positioning in challenging environments
EP4196747A1 (en) Method and system for positioning using optical sensor and motion sensors