- How unmanned aircraft systems, GIS and time-based 3D visualization will support spatial systems of the future -
Spatial systems have permeated the 21st century. From high-fidelity linear referencing systems that facilitate highway planning to “checking in” with Foursquare, spatial systems underpin our daily lives, whether we are fully aware of it or not.
To spatial experts, this adoption is the natural progression of geographic information system (GIS) technology, which has helped many industries become more effective in their planning, operations and communications over the past 50 years. Now that the general public has become so accustomed to being spatially aware, they have greater expectations.
These expectations are even more pronounced within the global defense community, which uses spatial information for critical intelligence, surveillance and reconnaissance (ISR) activities. Meeting these expectations will hinge upon increased data accuracy, up-to-date spatial data and increased analytic processing.
And, as data becomes more accurate and more frequently updated, the visualization of this data is going to have to improve. This data will have to be consumed with acute regard for space and time in order to properly understand its context. But how do we get there?
Historically, GIS was the domain of the engineering community looking to model Earth-based activities. As computer hardware and GIS software increased in performance, answers to GIS problems arrived faster. This trend of fast GIS continued, then boomed when married with authoritative content and spatially aware devices late in the first decade of the 21st century.
GIS enabled potential home buyers to look at a home from space or from a street-level camera to scope out the neighborhood before they even contacted a real estate agent. Travelers don’t have to do any heavy trip planning before they leave on their journey; GPS devices will take them anywhere they want to go, in real time. City planners can bring up cityscapes to visualize proposed changes to a park.
Much of today’s spatial success is a result of GIS adopting web-friendly protocols—intelligent maps can now be delivered to desktop and handheld devices alike. Also, organizing spatial data into highly indexed data structures has been very helpful in the adoption of spatial systems.
This enables users to retrieve their data requests quickly. Spatial data is constantly being created through locatable devices and remote-sensing capability, giving us all a sub-meter “God’s eye” perspective. But this success comes with a cost and a challenge for the future. Now that the populace has developed an appetite for spatial content, how are current and future systems going to grow with the demand, maintain the desired rich content and deliver it in a way that is much more easily consumed?
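As a minimal sketch of why highly indexed data structures speed retrieval (the grid scheme, cell size and point names below are invented for illustration), a simple spatial grid index buckets records by cell, so a lookup touches one bucket instead of scanning every record:

```python
from collections import defaultdict

CELL = 1.0  # grid cell size in degrees (illustrative choice)

def cell_key(lat, lon):
    """Map a coordinate to the integer grid cell that contains it."""
    return (int(lat // CELL), int(lon // CELL))

# Build the index: each record lands in the bucket for its cell.
index = defaultdict(list)
points = [("cafe", 38.9, -77.2), ("park", 38.95, -77.4), ("airport", 40.6, -73.8)]
for name, lat, lon in points:
    index[cell_key(lat, lon)].append(name)

# A query inspects a single cell rather than all records.
print(index[cell_key(38.92, -77.3)])  # → ['cafe', 'park']
```

Real spatial indexes (R-trees, quadtrees, geohashes) refine this idea, but the payoff is the same: query cost tracks the size of the relevant bucket, not the size of the whole dataset.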
Content - Much of the content running today’s systems is already stretched to the limits. Spatial content in many cases is expensive to collect, difficult to process and heavy to disseminate. New technologies such as unmanned aircraft systems (UAS) are one way to cheaply and efficiently collect data. UAS have combined the key advantages of the previous workhorses of the imagery industry: manned aircraft and satellites.
Like manned aircraft, UAS are extremely flexible in imaging sensor payloads and mission location, and, like satellites, they require minimal operator input to execute their mission. This capability is attractive to many industries. The United States Congress recently mandated that the FAA open up the national airspace to unmanned aircraft by September 30, 2015.
With access to the national airspace, UAS will transition from tools used exclusively by the Department of Defense to a ubiquitous solution used by such industries as pipeline survey, firefighting, law enforcement, agriculture and oil and gas.
Although UAS will increase these industries’ ability to collect data, a new problem will be created: what to do with the massive amount of data collected? The intelligence community is already experiencing this problem, and has recognized that it cannot be solved with increases in manpower alone.
Increased data collection would burden the processing systems required to turn that data into information. This means collection systems will have to become smarter, progressing beyond the current brute-force methodology. One way to employ a smarter collection strategy is to collect and store only what you need instead of simply collecting and storing everything.
Filtering for authoritative content at the sensor level reduces the volume of data the platform must transmit. It also eases downstream processing, since there is far less data to process in the first place.
As the platforms that host sensors mature, onboard processing capacity will have to mature as well. Software processes that run real-time filters on what gets sent and what gets thrown away will dramatically improve the efficiency of sensor-dependent systems. This filtering can happen by attribute, geometry or time. Attribute filters are relatively straightforward:
you simply throw away the data that doesn’t match your lookup. Geometric and temporal filters require more intelligence, along with awareness of where the platform is and what it is sensing. This means that telemetry (GPS, accelerometer and pointing) information must be accurate and clean. Geometric and temporal filters will be able to filter sensor data in real time to account for azimuth, elevation, range, quality, bandwidth, environmental conditions, time of day,
Sun lighting or even weather conditions. These filters will supply downstream systems with far more useful data than the fire hose they currently receive. This filtering process addresses today’s “big data” challenges: discarding irrelevant data before it takes up space and burdens systems downstream improves broader system performance, greatly reduces the total data load and exposes the valuable datasets within.
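As a rough illustration of the idea (the frame schema, field names and thresholds below are hypothetical, not any particular platform’s), an onboard filter might test each captured frame against geometric, temporal and attribute criteria and discard anything that fails before it is ever transmitted:

```python
from dataclasses import dataclass
from math import radians, cos, sin, asin, sqrt

@dataclass
class Frame:
    """One captured sensor frame with its telemetry (hypothetical schema)."""
    lat: float           # degrees
    lon: float           # degrees
    timestamp: float     # seconds since mission start
    sun_elev_deg: float  # Sun elevation at capture time

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def keep_frame(frame, aoi_lat, aoi_lon, max_range_km, t_start, t_end, min_sun_elev):
    """Geometric, temporal and attribute tests; drop the frame if any fails."""
    if haversine_km(frame.lat, frame.lon, aoi_lat, aoi_lon) > max_range_km:
        return False  # geometric filter: outside the area of interest
    if not (t_start <= frame.timestamp <= t_end):
        return False  # temporal filter: outside the collection window
    if frame.sun_elev_deg < min_sun_elev:
        return False  # attribute filter: lighting too poor to be useful
    return True

frames = [
    Frame(38.95, -77.15, 120.0, 35.0),  # in range, in window, well lit
    Frame(40.10, -75.00, 130.0, 40.0),  # too far from the area of interest
    Frame(38.96, -77.14, 900.0, 30.0),  # outside the time window
]
kept = [f for f in frames if keep_frame(f, 38.95, -77.15, 25.0, 0.0, 600.0, 10.0)]
print(len(kept))  # → 1
```

The shape matters more than the specifics: only one of three frames survives, so only one of three frames consumes bandwidth and downstream processing.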
As these automated collection systems come online and start creating richer content, new and innovative tools will be expected to exploit this data. AGI’s Systems Tool Kit (STK) software is one such tool being increasingly relied upon for UAS video and imagery exploitation. AGI software facilitates targeted searches of catalogued video and imagery by increasing the searchable attributes to include geometric, lighting and temporal conditions. It then returns the precise times where video and imagery matches specified requirements.
It can also model specific parameters such as Sun elevation angle and azimuth; sensor geometry; target elevation angle and azimuth; time; and terrain (natural or urban). The result is that those using UAS for intelligence get the precise data they need, faster.
Processing - “The cloud” has been a buzzword for many years now, yet many people view the cloud as just an online server. This view obscures the full potential of cloud-based systems, which enable interoperability and scalability, both abstracted from the end user. The system is present, available and ready to answer the smallest to largest question with the same relative ease.
In order to provide this scalable processing, the common software paradigm of serial processing does not suffice. A modern parallel processing approach, such as the MapReduce programming model, is far more capable: it allows multiple processes to execute simultaneously, using different variables and potentially different datasets.
This enables the system to have a single interface into the execution and resultant datasets. Using parallel processing methodologies, not only can questions be answered much faster, but previously unanswerable questions are now answerable.
With the release of STK version 10 in November 2012, AGI has introduced a parallel processing product that lets you group calculations into sets, simultaneously executing the sets on separate CPU cores and reassembling the results for faster presentation and analysis. How much faster? Well, for problems with many grid points, lots of mission assets and/or long analysis periods, parallel processing can reduce computation time by a factor approaching N, where N is equal to the number of CPU cores tasked. The bigger the problem, the larger the acceleration factor.
The other importance of the cloud is availability and interoperability. These two concepts simply state that the application interface is available and known. High server availability has been fairly commonplace for years; discoverable, well-defined web interfaces have been more of a challenge.
Traditional GIS has done a tremendous job of defining spatial products. However, current-day GIS breaks down with time-sensitive entities and is also challenged by entities defined in coordinate frames other than Earth-centered fixed (ECF). KML’s addition of a timestamp and Web Feature Service 2.0’s time attribute are welcome steps in the right direction, but the end user is still required to actively account for time in any analytic or visualization operation.
There is no passive coordination of time, let alone any interpolation of position based on time. STK requires all analytic objects to be time aware, and can derive position based on time. This internal clock abstracts the concern from the end user.
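A toy sketch of what deriving position from time looks like (the sample ephemeris and function name are hypothetical, and real systems use far richer interpolation than this linear one):

```python
from bisect import bisect_left

# Hypothetical ephemeris: (time_seconds, x_km, y_km) samples for a moving platform.
samples = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (20.0, 12.0, 3.0)]

def position_at(t, samples):
    """Linearly interpolate position from time-tagged samples, so any
    time within the span yields a position without user bookkeeping."""
    times = [s[0] for s in samples]
    if t <= times[0]:
        return samples[0][1:]       # clamp before the first sample
    if t >= times[-1]:
        return samples[-1][1:]      # clamp after the last sample
    i = bisect_left(times, t)       # bracketing sample pair
    (t0, x0, y0), (t1, x1, y1) = samples[i - 1], samples[i]
    f = (t - t0) / (t1 - t0)        # fraction of the way through the interval
    return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

print(position_at(5.0, samples))   # → (2.5, 1.0)
print(position_at(15.0, samples))  # → (8.5, 2.5)
```

The point is the abstraction: the caller asks for a time and gets a position, without ever handling the underlying samples.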
With STK 10, geospatial data interoperability is greatly improved. Users can drag and drop GIS data products right into AGI’s globe. They can analytically interoperate with feature data as well. For example, you can bring in a polyline that represents a trajectory by simply right-clicking and promoting that feature to an STK aircraft object.
An ArcGIS REST/WMS/WMTS user interface plugin enables any STK client to import a user-defined basemap. The interoperability is starting to bridge the gap between traditional GIS and space-time systems such as STK.
Spatial systems have started to scratch the surface of cloud processing.
Spatial data being served up is a great use of this technology, but as more time-sensitive content comes online, highly transactional services are going to be required. Defining those service interfaces to be as useful to end users as possible will be a challenge. AGI’s new cloud-based server, the STK Data Federate, is hosted as a service to users. Not only can you search and retrieve standard STK objects such as satellites, facilities and aircraft, but you can also download scenarios of standard space systems that are ready to be customized for your particular analysis.
Don’t know much about satellites, but need to do RF analysis on the Intelsat network? No problem. Just download the Intelsat scenario, and you’ll get all of the satellite and ground stations ready to go in an STK scenario. Once you tailor the scenario to your needs, you can simply save the scenario (with version control) back on AGI’s server with your own private account.
Visualization - Visualizing highly accurate, time-sensitive spatial data is very important to gaining a better understanding. As better data comes online, having visualization tools that emphasize the space and time details of the information is critical. In music, playing the right note at the wrong time is still the wrong note. The same can be said with spatial visualization tools.
Displaying data at the right location at the wrong time is wrong and, in some circumstances, can be disastrous. Add additional variables such as altitude and orientation to mapped entities, and you can quickly see how important high-fidelity visualization is.
Current spatial visualization tools do a great job of showing points, lines and polygons on a map or 2.5D globe. In many cases, simply showing dots on a map does the job. However, as richer data sources come online, showing the data in true 3D with respect to time will be required.
AGI’s 3D globe uses the actual time standard as its foundation—leap second and all—to operate in simulated historical time, simulated future time or, of course, in real time. If it’s a fixed location on the ground, time isn’t so critical.
But if something is moving around, you need to manage time correctly. If you don’t have time modeled correctly, you don’t have a basis for your coordinate system. Because AGI’s roots are in the space industry, this is a foundational element of the software; determining accurate position for a spacecraft orbiting a deep-space asteroid does not leave much room for error. That fidelity of position and orientation is now available to all system elements: ground-, air- and space-based. AGI’s 3D globe was once part of its paid offerings. With STK 10, the capability is free.
The “information age” that began in the 1970s has brought us incredible (and immediate) access to vast amounts of data. Such an overwhelming amount of information creates both opportunities and challenges, especially for the defense community. Accurate, timely spatial data, along with better analytic processing and 3D visualization of it, is mission-critical. AGI has software tools, available today, that can help the global spatial system user community rise to the opportunities of today while meeting the challenges of tomorrow.
By Todd Smith, Analytical Graphics, Inc. (AGI)
For more information: AGI