What Does Mobility Mean In The Age of The Self-Driving Car?

The Self-Driving Car or autonomous vehicle will have profound implications for consumer mobility. But does it really cover the spectrum of mobility as most of us now understand it?

Self-driving vehicles are expected to require a long development period, along with adjustments to regulations, laws and manufacturing orientation and new innovations in technology. A recent Der Spiegel article entitled ‘The Electric Shock – E-Mobility Gets Existential for German Carmakers‘ describes the ins and outs of the German car industry with respect to electric cars and self-driving technology. There, too, the shift to these technologies is expected to take a long time as suppliers tackle issues ranging from technology to unions to consumer confidence and education.

But the broader question many people ask is, “How do I get from A to B?”, and for many the answer does not necessarily involve a vehicle. In a connected society, as we have been led to understand, the Internet of Things reigns and all things become connected. So too does mobility: all forms of transportation connect to form mobility, and we are not there yet.

Which Road To The Mobility Path?
The self-driving car sits atop a foundation of smart infrastructure development including smart cities, smart roads, smart homes and connected humans.

One might reasonably argue that rather than autonomous vehicles being the main force giving rise to greater mobility, they are the result of a smart society that has truly connected infrastructure. They simply operate within it.

But consider this approach in terms of current rail networks. These are not, currently, very smart – although they are increasingly becoming connected.

  • Most rail systems operate on fixed schedules; a smart rail network would automatically adjust service to meet passenger demand (see the sketch after this list).
  • Most rail systems have collision avoidance systems (though not all); a smart rail network would add sensors to rail infrastructure and individual trains, and consider other elements such as weather, slope and time of day.
  • Most rail systems use dedicated tracks; a smart rail network would interface with roads, marine shipping and route delivery services along clear line-of-sight pathways, introducing connected mobility across sectors.
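
To make the first point concrete, here is a minimal, purely illustrative sketch (in Python) of how a demand-responsive network might pick a train headway from platform-sensor counts. The capacity figure, the bounds and the logic are assumptions chosen for illustration, not any operator's actual scheduling method.

    # Toy demand-responsive headway calculation; all figures are illustrative.
    def headway_minutes(waiting_passengers, train_capacity=600,
                        min_headway=2.0, max_headway=15.0):
        """Pick a headway so the next train roughly absorbs observed demand."""
        if waiting_passengers <= 0:
            return max_headway
        # Assume passengers arrive at the steady rate observed over the last
        # maximum-headway window.
        arrivals_per_minute = waiting_passengers / max_headway
        # Longest headway whose expected arrivals still fit in one train.
        candidate = train_capacity / arrivals_per_minute
        return max(min_headway, min(max_headway, candidate))

    print(headway_minutes(3000))  # 3.0 -> run trains every 3 minutes
    print(headway_minutes(200))   # 15.0 -> light demand, keep the base schedule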

In fact, makers of smart vehicles might reasonably wonder: if rail networks, given all their current investment and infrastructure, are not already smarter, what does that mean for self-driving vehicles?

Mobility Means What?
Many people walk out their door and use one, two or three different forms of mobility every day, in addition to walking. One might take a bus, then a tram, then walk. Or use an underground, a bus, then a tram. Or take a car, a bus, then a tram.

When automakers talk about sitting atop the food chain of mobility, that simply is not true. Autos will form one part of the mobility picture, sometimes dominating daily trips, but not always and not for millions of people. If self-driving technology does not sufficiently reduce the overall cost per kilometer to customers, users are unlikely to change their travel patterns. Taken to the extreme, one might argue that building more underground lines, which move people faster, would be a better investment than smart road infrastructure on a cost-per-kilometer basis.
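
A rough cost-per-passenger-kilometer comparison shows why the economics matter. The sketch below uses entirely made-up figures purely to illustrate the arithmetic; it is not a claim about real underground or robo-taxi costs.

    # Illustrative arithmetic only; every figure below is an assumption.
    def cost_per_passenger_km(total_daily_cost, passengers, avg_trip_km):
        return total_daily_cost / (passengers * avg_trip_km)

    underground = cost_per_passenger_km(500_000, passengers=400_000, avg_trip_km=10)
    robo_taxi   = cost_per_passenger_km(300_000, passengers=60_000,  avg_trip_km=10)

    print(f"underground: {underground:.3f} per passenger-km")  # 0.125
    print(f"robo-taxi:   {robo_taxi:.3f} per passenger-km")    # 0.500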

Do Self-Driving Car Makers Have It Right?
There is little doubt that self-driving car technology and the impetus toward autonomous vehicles point to a bold, innovative and exciting future. They will surely give rise to a shift in car ownership and use. In terms of transport mobility, however, a more connected ‘mobility’ concept will cover a wider range of platforms for moving people and goods.

By comparison, do you want a ‘smart home’ if it is the only one on the block and does not connect to proper infrastructure?

The real question seems to be “are automakers willing to adjust their business models to reflect the true sense of mobility that technological innovation has generated through autonomous developments and so-called smart technologies?”

The automaker you see today may not be as successful tomorrow in a world of technological innovation that generates connectivity beyond narrow verticals.

Review: LumenRT Delivers On 3D Infrastructure Visualization

I had the opportunity to download LumenRT from Bentley Systems recently and try it out. It did not take long for me to create compelling visualizations in 3D. The software is a step up from first generation software that has been commonly used in desktop applications. In this case, LumenRT CONNECT Edition operates from your desktop through the Cloud.

What Is LumenRT?
It is visualization software that adds highly realistic scenery, landscapes, objects, materials, plants and trees to 3D environments. If you design in MicroStation, Revit, SketchUp or GeoDesign, LumenRT can take your designs and bring them to photo-realistic life. Bentley Systems says it can:

  • animate infrastructure models with elements in motion such as simulated traffic using vehicles of all types, moving people, wind-swept plants, breeze-animated and seasonal trees, rolling clouds, rippling water and much more,
  • easily generate attention-grabbing, cinematic-quality images and videos,
  • share interactive, immersive 3D presentations with any stakeholder using Bentley LumenRT LiveCubes,
  • create Bentley LumenRT scenes directly from inside MicroStation (including V8i SELECTseries and CONNECT Edition), Autodesk Revit, Esri CityEngine, Graphisoft ArchiCAD and Trimble SketchUp, and also import from many leading 3D exchange formats.

Key Points In Use
Many visualization packages are complex and have steep learning curves; we have previously worked with and reviewed many of them and found this to be true. Bentley Systems says LumenRT is easy to use, and I found that to be the case. Within 15 minutes I was able to open two demos, move within them, add realistic trees and a Ferrari from the library, change surfaces from metallic to concrete, adjust sunlight and lamp types, add a swimming area with waves and colors, and try many other features. Later I adjusted camera angles and created a motion animation that included moving vehicles and people. All of this ease is designed to take the hard work out of the design-to-visualization process, adding automation and simplicity to the workflow.

Why Is This Software Useful?
Generally, many design-oriented users work with software that is highly oriented to creating 3D objects: lines, points and polygons. Being able to adjust, orient, alter and manipulate surfaces is key, whether one designs buildings, plant utilities, railways, transmission lines or almost anything else.

In Bentley Systems’ case, the company is a leader in 3D design software for the infrastructure life-cycle. It has domain knowledge and software strength that support over half of the world’s largest infrastructure projects, and thousands of smaller ones too.

While designers (engineers, scientists and the public) are highly oriented to design-specific tasks, it is not always the case that they can effectively visualize what they create. This has a tremendous negative impact because, whether they are entering competitions, working with others or simply communicating their design projects, poor visualization casts a wide, dark shadow. This is critical for companies using design modeling and simulation capabilities. Visual impact matters, a lot, for real-world reasons.

In Bentley Systems’ case, all of this infrastructure goodness comes under the banner of Reality Modeling, and LumenRT sits in a constellation alongside lidar-based technology like Pointools, mapping tools like Bentley Map and imaging tools such as Bentley Descartes.

Infrastructure is built on a common foundation, or platform, that spans several industry sectors. Modeling, design, simulation, construction and operations all use and apply 3D. This is why 3D matters: these activities share common geometry on the (x, y, z) axes, with materials and attributes layered on top. For LumenRT users, this means that however you design in 3D and whatever other infrastructure design software you apply, the resulting content can be expressed in LumenRT. There is a high level of interoperability between LumenRT and other products.
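
As a generic illustration of that shared (x, y, z) foundation, the sketch below writes a small mesh to Wavefront OBJ, one common neutral exchange format. This is not LumenRT's import pipeline, just a demonstration of why tools that agree on geometry can pass content between one another.

    # Write a simple wall as shared (x, y, z) geometry in a neutral format (OBJ).
    vertices = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 0.0, 4.0), (0.0, 0.0, 4.0)]
    faces = [(1, 2, 3, 4)]  # OBJ face indices are 1-based

    with open("wall.obj", "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            f.write("f " + " ".join(str(i) for i in face) + "\n")
    # Any package that reads OBJ can now place this wall in the same
    # coordinate space as the rest of a model, materials and attributes aside.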

So What Has Changed?
For years designers were content with design-only tools, and many of them saw visualization as merely a pretty picture. All of the products mentioned above put real-world designs, based on surveyed or geo-referenced locations, into the visualization workflow. LumenRT should not be thought of as an add-on. Why? Because the visualizations are high-resolution, real-world in orientation, and drawn from the common design databases used in building processes and design projects.

Ultimately, through the LumenRT CONNECT Edition, whole libraries of objects, surfaces and content can be delivered through the cloud. Similarly, because other industry tools also have CONNECT Editions, analytical tools, for example, could be linked directly to a LumenRT workflow. Think of designers having a short route into visualization, or energy-efficiency specialists working within a visual context.

While the tendency is to speak about visualization in terms of communication alone, soon we will see visualization in a design context, visualization in an analytical context or visualization in an artificial intelligence context as each of these enters the infrastructure life-cycle more deeply.

We are very much at the early stages of tying design to visualization in a more meaningful manner, and LumenRT is at the forefront of this advancement.

Geospatial Education: Linking Engineering Into A Geo-referenced World

A current trend in engineering is the accurate location of infrastructure assets within a real-world context. This applies across the entire life-cycle, including design, construction, operations and maintenance. While local coordinate systems have long been in use for engineering projects, the use of geo-referenced locations in real-world coordinates is more recent.

This has many important benefits, because suddenly all sorts of work in other fields that use geo-referenced coordinates can be tied to engineering applications and outputs. Individual buildings, roads, utilities and other infrastructure can then be tied, accurately, to extended processes and applications simultaneously. This also expands opportunities for visualization, modeling and spatial analysis, since projects can be viewed beyond individual entities. It becomes possible to ask questions such as: How does this building link to the road network at different times of day? Where is the water network and how would it function if a large emergency occurred in this facility? How many people move within certain regions of this proposed project? What is the urban planning relationship of this group of facilities across the city, the state or the country?
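
In practice, tying an engineering coordinate to the wider geo-referenced world is often a single reprojection step. Here is a minimal sketch using the pyproj library; the EPSG codes and the sample point are assumptions chosen only for illustration.

    # Re-express a projected engineering coordinate (UTM zone 33N, metres)
    # as WGS84 longitude/latitude so it can be tied to other geo-referenced data.
    from pyproj import Transformer

    transformer = Transformer.from_crs("EPSG:32633", "EPSG:4326", always_xy=True)

    easting, northing = 389_000.0, 5_819_000.0   # a point in the project's UTM frame
    lon, lat = transformer.transform(easting, northing)
    print(f"lon={lon:.6f}, lat={lat:.6f}")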

While map-based systems, including geographic information systems (GIS), have traditionally been able to ask, and answer, many of these questions, the highly specific questions tied to engineering have not always been as easy to tie into the framework, since the engineering data sometimes lacked geo-referencing. This meant that more often than not engineering data were imported and then shifted into proper geo-referenced positions. The struggle to achieve this gave rise to a whole industry of ‘geospatial services’ tied to data conversion. Today, luckily, those times are dwindling and most data is captured in real-world coordinates in real time.

This ease of data capture and the rise in available infrastructure associated data is giving rise to a need for engineering specialists who understand and work with real-world engineering data.

These people can now build models, visualizations, analyses and predictive analytics based on this data. In the future, as robotics, augmented reality, self-actualizing assets and other new construction and building processes increase, there will be a greater need for capable people who connect engineering and geospatial skills. These folks will be part of engineering processes directly, and also indirectly in support and tertiary industries, including the manufacturing and 3D printing sectors.

This will allow engineering to flourish in very creative ways that transcend processes and workflows. An example of a program with this in mind is the Geospatial Information Science and Engineering (GIScE) course at the State University of New York.

3D Visualization: Guidelines And Use In Spatial Applications And GEO Analytics

As 3D and visualization flourish and new applications involving 2D, 3D, 4D and beyond continue to go mainstream, new business models and research understanding are reaching consumer eyes and minds at a faster pace. But do we understand what unleashing 3D into wider domains really means?

Some years ago 3DOK put forward the notion of a 3D Ethics Charter. I remember attending this event in Monaco and wondering what would become of the Charter and of the impact of 3D on users around the globe. The documents spoke to the rise in 3D visualization and how imagery and graphics were beginning to be used in unethical ways. For example, a picture of a nice house sitting near a bunch of topographic lines that, in effect, placed the house on a cliff, while buyers might be unaware.

Other examples of this phenomenon include 3D BIM models that depict completed buildings where the images do not link to underlying databases of infrastructure information; they are only images, with BIM data from separate projects made to look as though they were connected.

This is not new. Author Mark Monmonier long ago wrote How To Lie With Maps, a book most mapping people are well aware of. It includes many examples, sometimes funny, sometimes sad and often outrageous, of maps used to slant views and perspectives on topics. It is all in what people want others to believe. As we may find in the case of 3D, the technology sometimes exceeds our understanding, and illegal or criminal intent may not always be present, just ignorance.

Consider the case of 3D city models today. While a great deal of information exists about CityGML, IFCs, 3D BIM and the use of GIS connected to city modeling, far less information exists to help users understand how these products reach their eyeballs and minds. One can put a 3D BIM model on a small area or a large area, as one chooses. While buildings look good in BIM, they look better perched along beaches with palm trees or beside nice rivers with long views. Again, it is all in the packaging.

In the medical realm, recent research shows that ethical concerns in 3D printing may involve safety and intellectual property. Others have spoken about the ‘dark side to 3D printing‘. Gartner has also spoken about ethical issues connected with 3D printing.

In the drone and UAV realm, while we might relish the extraction of 3D data from all kinds of places, access to, duplication of and distribution of that information may be illegal and highly unwanted for security and safety reasons.

Try searching for ‘ethics and point clouds’ online. You will find little material on the topic; apparently, lidar comes with little ethical consideration of its use and applications. Buyer beware.

While we are certainly advocates for wider 3D adoption, it is becoming more difficult to distinguish ethical use from imagination deployed as a selling feature. We ought to do something about that.

BIM And GIS: A Tale of Two Similar Evolutions In Design And Infrastructure

Building Information Modeling (BIM) and Geographical Information Systems (GIS) share similar pathways in their evolution. Both find their roots in the 1960s. The earliest endeavours were linked to research and emerged as conceptual projects in the first instance, later becoming more technologically oriented.

It has been interesting for many to watch and listen to different people wrestle with definitions of each. “What is BIM?” and “What is GIS?” are often heard, and the answers continue to expand with new experiences and knowledge. Each can be described both in conceptual terms and in terms of technological prowess. More recently, the two have marched toward a greater understanding of their connected natures, tying data management to design to analytics.

Traditionally, we have seen GIS begin with the development of data models as projects are applied to specific industry solutions. Whether for electric, water, buildings or plant, high-performance data models have been created, usually by domain professionals, and GIS have then been applied (with new code written to ensure the model meets the task). On the BIM side, many projects have emerged through physical design and domain knowledge, using a set of tools to meet the challenge (again adding code where needed).

Any discussion of linking BIM to GIS has largely been a discussion about data formats, trying to get data to traverse to the other side of the equation. Scale, though, has largely gone unspoken: GIS has tended to look at the whole, while BIM has integrated many smaller design pieces and processes into a whole, with most projects covering a single building, plant or other entity.

Today, we see the CAD-BIM side of the equation moving to become more GIS-like. Suddenly features and attributes (asset details) are emerging as key components of an aware infrastructure application. Yet, GIS have always been feature, attribute and value aware.

On the other hand, GIS have failed, until very recently, to take these features, attributes and values into a wider design pathway and one filled with analytical advantage.

BIM is emerging into a 3D, visually and analytically aware environment that strives to place analytics into the process pathway at a faster rate. This ability of BIM to drive process observation, change and decision making is a key value as these applications begin to connect and link into wider cloud constellations.

To be certain, there are specific GIS tasks that BIM software still do not include today. And, there are specific capabilities that BIM software do well, which evade GIS software in a major way. It continues to baffle many, probably, why this gap exists, although we might look forward to the day when each invites the other into a discussion space and asks the other to speak freely.

Visualization, augmented reality, simulation and 3D / 4D modeling are quickly becoming favored due to their ability to communicate infrastructure project intent and operation. Beyond this, the ability to link visual content back to features, attributes and values in a useful and valuable way is likely just over the horizon and coming fast. One ought to be able to touch or click on visual objects and retrieve further knowledge, not just a table of data.

The future is exciting and we are at the point of GIS and BIM crossover that includes not just technology, but also conceptual understanding of shared processes.

And that’s a good thing. It’s about time.

Be Inspired London 2016: Platform For Infrastructure Potential

Over the years Bentley Systems has continued to build a series of products intended for infrastructure design, construction, operation and maintenance professionals around the globe. The impact of these products can be seen in statements like that of CEO Greg Bentley today, in London at the annual Year In Infrastructure event. He said, “more roads around the world are designed with Bentley software than any other software.” If that weren’t enough, he pointed to the fact that his company provides 44 of the top 50 infrastructure companies with useful tools and applications to build the world.

From roads to buildings to industrial plants, Bentley has been generating impact in a cross-industry fashion, capitalizing not only on technical talent within its ranks, but also on gathered knowledge and experience harnessed toward conceptual effectiveness.

At Be Inspired I heard story after story of exciting new projects being created and operated on the basis of 3D and visualization underpinning tools, applications and concepts.

What the company grasps well is the notion that moving into a digital world can have spectacular results and lead toward new potential. This all sounds good, yet at a time when everyone pushes product, far too many fail to connect their products together or to understand where and by whom they will be used. Here, there is depth in the portfolio and significant understanding of process.

At this event, a new AssetWise CONNECT Edition was announced that will enable assets within organizations to connect in a geo-referenced fashion in different contexts.

[Image: AssetWise potential contexts]

These potentials are expressed through various contexts, including change, reliability, digital engineering, reality, geospatial and so on.

It becomes more apparent after a few presentations that the march toward digital infrastructure is a truer sign of potential than all others. Digital capability sows the seeds for a vast array of tools and applications to spread and flourish within an organization. This in turn allows for improved modeling, simulation, design and construction – all leading to better performance.

It might very well be that Bentley is sitting on the next generation of infrastructure potential. It is time for entrepreneurs and organizations to realize that their potential and competitiveness are at risk without investment toward this dynamic new direction.

Augmented Reality vs GIS: Similar Development Paths From BIM to Maps to Simulation

Augmented reality and geographic information systems (GIS) share similar pathways during their development.

Back in the 1980s, when GIS were flourishing in both conceptual and technological directions, a stir within what is now called the geospatial community was sending sparks in all directions. It was as if an entire new world had opened to swallow up fertile minds and to ignite a thirst for knowledge about the world through mapping and spatial analysis. More exactly, digital mapping came to fruition, putting very powerful mapping and analysis capabilities on the desktop (this has since retracted into the cloud, a story for another time).

The timing in the 1980s was magical in a sense, because the development of personal computers for individual use occurred at the same time.

This convergence gave rise to and fueled a truly entrepreneurial streak that would expand widely, reflect and bounce off imagination, and lead toward great shifts in exploration and learning. Small to medium-sized businesses flourished.

By comparison, the augmented reality industry today is following a similar path. New computing technologies and hardware developments in AR are flourishing. These in turn are being coupled to visualization applications that cross 3D and 4D explorations into spaces previously impossible.

The augmented reality industry is criss-crossing both entertainment and enterprise arenas, reaching to gaming on one end and to business and industry on the other, a period rather similar to the early days of GIS.

This is why, I think, those companies who are straddling both geospatial and AR sides of the equation are uniquely positioned to move into the future. They share common capabilities in exploratory, analytical and visualization delivery and solution orientations.

While GIS development seems to have slowed somewhat over time, and the number of smaller and mid-sized companies has dwindled along with the hunger at the entrepreneurial base, the same cannot be said for augmented reality. AR continues its march toward refinement, strongly seeking solutions and delivering basic capability at reduced cost, before becoming more mainstream.

Within five years we will see this refinement begin to arrive, and that will trigger the next round of development and research, one that will more closely link GIS to CAD to visualization.

While the AR industry continues to rise in terms of smaller businesses and mid-sized companies, along with institutional research into visualization, this will probably slow in the years ahead, similarly to the GIS trajectory.

All of this is good. All of this is positive. And it leads toward the future. Perhaps, in the future, we will truly see the rise of the digital citizen, one carrying a wealth of GIS and AR capability wherever they go: a world where measurement, analysis and knowledge converge and decision making becomes more automatic as discovery is consumed at a faster rate.

Perhaps we will see whole new technologies arrive and nudge minds into a new direction built on GIS and AR.

Perspectives On UAVs In The Geospatial Industry – Turning To Focus

Unmanned aerial vehicles (UAVs) will likely have a varied role in the future. New innovations are being explored by many companies and individuals around the world. For the geospatial industry, though, it seems that the real value of these technologies follows three paths: aerial hardware, data capture technology such as cameras and lidar, and data processing.

Flying alone does not have much appeal to a wide-ranging geospatial industry, although package delivery involving location determination does have spatial usefulness. Nevertheless, Deutsche Post, Amazon, Google and others would probably orient more resources to this kind of application in a much more focused way, with specialized delivery vehicles meeting unique needs.

It is the data capture side, involving specialized imaging sensors, lidar technology and software capable of handling large data files, that has much more appeal (coupled to flying) and closer alignment with the spatial sector. On this note, UAVs are like geographic information systems (GIS): you can use one or a few, but what really matters is the data and the workflow that makes it live and breathe.

I noticed this week that Trimble was the first to sell its UAV business (Trimble Sells its Unmanned Aircraft System Business to Delair-Tech). “This transaction is part of our continuing program to tighten our corporate focus,” said Ron Bisio, vice president of Trimble’s Geospatial Division.

A few years ago, at a time when UAVs were in their infancy and all the rage, everyone was jumping on the bandwagon. It sometimes seemed that having a UAV was the goal, rather than what it could really do, both physically and legally. Trimble’s lead sheds light on the realisation that the company’s real strength is data capture and data processing capability, and that “Trimble will remain actively engaged in the market by leveraging its brand-agnostic software technology for a broader range of UAS platforms.” This makes a whole lot of sense.

Keeping in mind that there are now over 500 companies engaged in UAV technology, this is a growing and competitive industry. In a world touting online data delivery and processing to clients and partners, unless one can lead the pack across all three of hardware, software and processing, without alienating other customers’ data types, it is logical to focus on the data end and let the highly focused UAV manufacturers slug it out. The delivery of results from workflows built on knowledge and focus is what matters. Furthermore, it keeps the door open to satellite imagery and photogrammetry too.

3D Reality Versus GIS Reality – Can You See Clearly Now?

The immediate applicability and benefits of 3D in a geographic information system (GIS) world are not always seen by some, and sometimes less understood than might be realized. In principle, capturing the world we see around us is complex in whatever ‘D’ we may wish to apply. And while GIS is particularly oriented toward the database, images are more oriented toward cameras.

Argue as we might that computer-generated images are higher forms of reality than photographs, it is difficult to avoid the fact that so much effort in 3D and imagery technology and applications is directed toward extracting ‘intelligence’ from imagery, to place it into a database. Plain images alone mean little to a GIS, but the interpreted and extracted intelligence from imagery means a great deal, and is undoubtedly useful.
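
A small example of what "extracted intelligence" looks like in practice: once an image is geo-referenced, a pixel picked out by an analyst or a classifier can be turned into a map coordinate ready for a spatial database. The sketch below uses the rasterio library; the file name and pixel location are placeholders.

    # Turn a detected pixel in a geo-referenced image into a database-ready coordinate.
    import rasterio

    with rasterio.open("ortho_scene.tif") as src:
        row, col = 1250, 980            # e.g. a building corner found by a classifier
        x, y = src.xy(row, col)         # pixel indices -> coordinates in the image CRS
        print(src.crs, x, y)            # store as a point geometry with its attributes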

On the other hand, all of the data in a GIS seeks to be visual and usually through maps. Anyone involved with maps will quickly point out that maps are representations. Thus, whatever data resides in a spatial database is directed toward an interpretation of what the cartographer chooses to represent. That is both art and science.

While maps and images can be seen in 2D, it is in the area of 3D and 4D that higher forms of visualization are realized. There are significant reasons why people want to extract 3D buildings and objects from satellite and aerial imagery. Combined with other 3D created objects and structures,  these integrated visual pieces of 3D content begin to form and enable more realistic representations.

We are already far beyond the question of whether 3D has benefit and useful potential for spatial data. Simply buying a lidar sensor or working in augmented reality or building information modeling (BIM) means you are already in a 3D context and collaborating in 3D space. It has always been the foundation. Accurate data in the database matters more to useful representation than anything else.

If there is a drawback to spatial data, it is becoming less a matter of accuracy than of updating, of keeping up with changing circumstances in the real world. While we all like to create 3D models and 3D visualizations, it is critical to remain mindful that those models, places and infrastructure change over time.

Is it really a question of 3D versus GIS – or a question of seeing (and capturing) what is really there today?

Good, Bad and Ugly In 3D and Aerial Imaging UAV Operations

Unmanned aerial systems have been approved for wider-scale use in the United States. As the FAA puts it, “The Federal Aviation Administration’s (FAA) new comprehensive regulations go into effect today for routine non-recreational use of small unmanned aircraft systems (UAS) – more popularly known as “drones.”

The new regulations (here) are bound to create greater interest, since they effectively open the door a bit wider to the public participation space. While delivery services may be salivating at the opening, given that they wish to deliver parcels and all kinds of things, a tempered response may be in order.

The new regulations still require line-of-sight operations, meaning that the UAV must remain in view of the operator. Let’s face it: if you can still see it, then why not deliver the pizza, book or other object personally? This basic requirement (which most people likely understand) takes the wind out of the sails of a technology that effectively wants to become an airline cargo service.

Aeronautical Knowledge Test. Testing centers nationwide can now administer the Aeronautical Knowledge Test required under Part 107. After you pass the test, you must complete an FAA Airman Certificate and/or Rating Application to receive your remote pilot certificate at: https://iacra.faa.gov/IACRA/Default.aspx

There has been a rapid rise in manufacturers of UAVs in recent times, all aiming at the potential commercial market. Less discussed, and perhaps less appreciated, are the liabilities that might arise from a 55 kg package dropping on someone’s head from 125 m in the air, or thereabouts.

One assumes, of course, that your business does not operate at night, because under this approval there are no flights during the evening hours. So you had better be home when your package arrives.

All these issues aside, not much has changed for those interested in more direct UAV applicability for projects like infrastructure, safety and emergency response, and mapping in general. In fact, with certification anyone can become a mapper and generator of 3D geospatial data and mapping products, especially since many of the parts and pieces are approved by manufacturers and/or available through cloud services. One might reasonably expect that the potential for profit here is dwindling in such a large, competitive morass of users, hobbyists and even professionals. Toss in 3D-printed parts and services and the whole service-and-maintenance side is reduced to minuscule proportions.

Where does all this leave us?

Well, as always, with the data and the value of any UAV-obtained information. Obviously, higher value will be derived from processed end products, especially by those who know and understand how data integrates and can be maintained.

This is the UAV ‘sweet spot’.

And there are far more manufacturers of drones than people who understand what that sweet spot means and where it gets realized. So aim for it.

Visualization And Analytics – What Does Your Geospatial Graphic Mean?

Many analytical graphics on the web today don’t mean anything. Have data visualization and data mapping become a well-managed, highly technological exercise with common tools for creating graphics quickly, but without any significance?

Are you tired of seeing bunches of lines around the globe that, apparently, describe the number of airline flights 5, 10 or 40 minutes or hours ago? How about graphics that say “the city is changing, see all the colors on the map”, without much explanation of what is happening or why the colors change? How about all those great maps without any legends? Or graphics of 3D objects that do not show any reference points or supporting background information? Why is it that so many augmented reality applications show pictures but forgo the description and explanation?

Seriously. Do the producers of this content think we transfer funds to purchase products solely on the basis of a picture – without ever explaining where the data originated, what problem it solves and the context for which the visualization has been generated?

What exactly is the geospatial sector attempting to do with respect to unmanned aerial vehicles (UAVs)? We see many more pictures that dwell on the fact that something in the air has taken a picture, rather than on what the picture is being taken for and the valuable information it can generate.

Is the geospatial sector seduced by consumerism to the point that it has forgotten why ‘geospatial’ matters and what ‘geospatial’ contributes to the greater good?

Simulation, augmented reality, 3D analytics and aerial sensors are capable of generating far more valuable information, and of being used in more unique and valuable ways, than is currently being presented. More needs to be done to explain why these technologies are being used in geospatial applications and engineering, and what exactly they contribute.

While not wishing to paint everyone with the same lack-of-explanation brush when it comes to imaging and visualization (some are doing an excellent job), my sense is that a number of very advanced technologies in the geospatial visualization area are not yet reaching their full potential.

Less focus on nice pictures and more focus on explaining the problems being solved and how these technologies rose well above the rest will further the geo sector in amazing ways.

Bentley and Autodesk Go Head-To-Head In The Subscription Ring

The world of infrastructure design and modeling software is undergoing dramatic change. Not only are new functions being added to the software that propels the building of the world’s infrastructure, but other changes are taking place as well. As the infrastructure world evolves from perpetual licensing models toward subscription-based models for accessing and paying for products, the challenge and competition are rising. This is clearly evident when it comes to Bentley Systems and Autodesk.

Berlin New U5

Late last fall Autodesk signalled that it was moving toward a wholly subscription-based, all-encompassing model for offering products. Bentley folks in Exton, PA probably smiled a bit at this, scratched their heads and read on; after all, Bentley Systems has been evolving the subscription model for purchasing AEC products for quite a few years now. In fact, it has built upon its earlier subscription offering to embrace an entire infrastructure-related product line, one powered by high inter-connectivity, interoperability, backward compatibility and increased entrance flexibility.

So it was not surprising this week when Bentley sent out a press release addressing the July 31, 2016 start date Autodesk had proposed earlier, which would effectively move users onto subscription-based models. These two pieces of information raised some eyebrows, including those of Carl White, Senior Director of Business Models at Autodesk, who wrote in response here: “Not so fast Bentley: Separating fact from fiction.”

Meanwhile, Bhupinder Singh, Bentley’s Chief Product Officer, said, “Bentley Systems considers purchases of perpetual licenses to be long-term investments by our users, so we continually innovate to increase their value. We are glad to now extend this ‘future-proofing’ to Autodesk license owners who otherwise will lose value in their applications.”

This is a very interesting time for users around the globe involved in AEC projects and work-related activities because of the views expressed by these two companies. Users should be sitting up, thinking about these statements and investigating these transitions into subscription and cloud-based services more deeply; there are big ramifications.

A subscription-based model is a challenge for many organisations these days. To be frank, I have changed software myself for other products because companies got their subscription models wrong, reduced my backward compatibility, cost me more and offered fewer possibilities for doing the things I had always needed to do.

It seems that when some (not all) products move into the cloud, subscription models arise that change the product line, forcing users along a new trajectory and bringing new learning curves, changed routines and functionality, requirements to buy more licenses, and other difficulties.

Simply moving into the cloud is not a reason to jump up and down and clap our user hands anymore. Users have become savvy, alert, smarter and more determined not to lose value, possibility and their lifeblood (projects). This means software producers need to be sensitive, caring and alert to individual, organisational and project needs; we are not simply moving from A to B and carrying on.

Here is what I would look for in a subscription service:

  • Since a subscription service can enable single-point entry into a product portfolio, I would look at the number of products the model allows me to access for a single (group) license fee, and for how long.
  • Given the above, depth of knowledge, expertise and experience across a software company’s AEC process line matters a lot. Why? Because infrastructure projects today are not singular in nature; they are interconnected, with planning, design, operations and maintenance all combining. Go read the BIM Manager’s Handbook if you doubt it. Look at case studies from the company to see what kinds of projects its software tackles successfully. Depth matters.
  • I can’t say enough about backward compatibility; put a star beside this. Nothing annoys, disheartens and creates angst as much as not being able to use value already produced.
  • How a company links products together is important. Do they work together? Can you collaborate in a multi-discipline fashion with colleagues? Think about creating a 3D model from point clouds and moving it from planning to construction to operations and maintenance. Will the software chain follow the process steps in building a road, rail network or building? If it does not, then you reach a dead-end and someone will obviously say “you need to buy this license or that license”. Dead-ends are bad, and costly.
  • Ask. Ask the right questions of software manufacturers. Forget about the cloud (I know that sounds wrong), because we all expect it to be in the cloud anyway. Move to the next level and ask some of the points above. Describe your current projects and ask to be shown where a product line could have fit in: “if we had this project, which included this and that and this, how would your product have been used?”

The infrastructure sector is undergoing rapid change at the moment. All of the previous infrastructure-related products and expertise are now being aligned toward collaborative projects that involve many people in many places. Field mobility is an increasing part of the infrastructure puzzle as data wants to be real-time (or near it). 3D and visualisation are permeating infrastructure processes from one end to the other, and users want and need products, services and active support that connect, relate and speak to one another.

In previous times, with perpetual products that changed periodically and usually operated independently, we seldom saw organisations change vendors. Locked-in customer bases feared the costs, problems and difficulties of moving between vendors.

Under subscription-based models, the opportunity to change vendors has never been more within reach or more realistic than it is today, with the announcements mentioned above. You owe it to yourself to look and think deeply about what you are doing in AEC and how you are going about it.

Addendum: September 26, 2016

[Bentley Announces Autodesk License Upgrade Program]
[The Autodesk License Upgrade Program]

Smart Roads For Intelligent Vehicles Leave Drivers In Backseat

Technological advances in road planning concepts, design and construction, coupled with driverless technology, are set to put transportation onto a new course. The amount of driverless technology has increased rapidly; Rolls Royce, Google, BMW, Audi, Tesla, Ford and many others are well along in design and implementation.

While a large part of the driverless experience is focused on the vehicle itself, smarter roads will have a prominent role to play. These roads will support embedded devices to monitor surfaces, transmit surface conditions and provide updated infrastructure information for asset management purposes, which will enable improved maintenance scheduling and operations.
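
What such a road might actually transmit is easy to imagine as a small, structured record. The sketch below is hypothetical; every field name and value is an assumption, not a published road-telemetry standard.

    # Hypothetical road-surface telemetry record for asset-management use.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class SurfaceReading:
        sensor_id: str
        lat: float
        lon: float
        surface_temp_c: float
        friction_index: float   # 0 (ice) .. 1 (dry asphalt)
        rut_depth_mm: float
        recorded_at: str

    reading = SurfaceReading(
        sensor_id="A10-km42-east",
        lat=52.3901, lon=13.0645,
        surface_temp_c=-1.5,
        friction_index=0.35,
        rut_depth_mm=7.2,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(asdict(reading)))  # payload a maintenance system could ingest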

A fusion of technology, data and smarter analytics for decision making will help to guide both robotic and non-robotic vehicles. However, while these end products are amazingly unique and advanced, perhaps the more interesting and exciting story lies in the technologies and processes that will enable them to be designed, constructed, operated and maintained.

Since so many professions will need to come together, bringing technology focus, knowledge and experience, this will require better collaboration tools, sharing of data and more automation in the coordination of activities. It will likely require several kinds of contractors to work together.

Such advanced construction resembles modern building information modeling (BIM) approaches that, as a rule, focus more on process and less on technology. This results in coordinated activity surrounding infrastructure projects, improved scheduling and less re-design or change during construction. It also supports early simulation and modeling to decide the best way forward.

We can see examples of this in products like Bentley Road and Analysis Software that provide multi-discipline approaches toward connecting knowledge, experience and activity related to projects. “OpenRoads ConceptStation is an innovative, new application to enable rapid and iterative conceptual and preliminary design, leveraging contextual information obtained through point clouds, reality meshes, GIS, and other sources.”

Using OpenRoads ConceptStation, you can:

  • Assemble context data rapidly from a variety of sources, such as point clouds, 3D reality meshes, terrain data, images, and geospatial information to bring real-world settings to your project.
  • Simplify 3D modeling with easy-to-use engineering sketching capabilities to quickly conceptualize road and bridge infrastructure.
  • Rapidly generate 3D layouts with associated project costs, and share with project teams and stakeholders to choose the best option.
  • Advance the approved 3D model to the detailed design phase to rapidly accelerate project delivery.
  • Share realistic visualizations with the public and stakeholders to gather feedback, improve public engagements, and speed project approvals.

The Virginia Tech Transportation Institute cites no less than 511 features that smart roads will enable.

One thing is sure – the driver is more likely to sit in the backseat in the future and enjoy the ride.

Does India’s Proposed Geospatial Information Regulation Bill, 2016 Seem Like Reality?

Every day thousands upon thousands of new maps, visualizations and satellite images are generated around the globe. Each country has its fair share of these; larger countries have more than smaller ones, although not always. Geographic information is the foundation upon which boundaries and places become known, understood and realized. It is hard to claim absolute accuracy when it comes to locating places in space, because maps are intrinsically representations. Images, as most people know, often look different than we think they should, partly due to technology, partly due to human perception, and mostly because of knowledge or context.

India has recently released a proposed Bill that is being promoted with a view to new legislation. A thorough read of this document has me scratching my head, and I am sure it will leave others doing the same. Frankly, it is hard to imagine any company or small business working under such legislation. Why do I say this? Because maps ARE representations. And technology can produce varying results and perceptions.

I am not sure why there is such a great need to effectively put risk management so high up the business ladder with many of these ideas. This Bill aims to “regulate the acquisition, dissemination, publication and distribution of geospatial information of India which is likely to affect the security, sovereignty and integrity of India and for matters connected therewith or incidental thereto.” Welcome to the global club – all countries contend with the vagaries of inaccuracy.

Not only that, the sheer costs of acquiring highly accurate data are insurmountable for most countries. This is why we do not see pinpoint-accurate data in most countries. In fact, if you step into some architecture and infrastructure projects, where data is heavily invested in for design purposes, there is still inaccuracy and inconsistency. It is the nature of the beast.

Under these proposals, the entire crowd-sourced mapping economy of a country like the United Kingdom would come to a halt because of likely inaccuracies in place locations. Older satellite images would become less useful, because map data extracted from them through automated means would be less valuable. We would probably see people sharing LESS data, for fear of exposure and subsequent risks. Infrastructure project costs would rise to massively high levels, reflecting the need for much higher quality data in support of transport, aviation, bridges, roads and rail.

Is this proposal reality? It does not seem so. It seems more of a wish or dream built on conceptual desire.

One might reasonably ask: why not instead invest in education and resources for the creators of geospatial and other visualization data? Why not educate people in data use and interpretation so they may be more critical of what they see?

It seems throwing people into court and jail is the wrong way to advance anything in this case.

Is The 3D Printing Market Failing – Really?

3D printing is not new, contrary to what many people perceive, a perception largely blown about by hype trade winds that would make Vikings in single-sail ships scratch their heads. Many people (including myself) can remember seeing 3D printers in action more than 10 years ago. Granted, they were pricey back then, which contributed to their slow uptake. But the pathway for any new technology, including 3D printing, rises through hype and settles into a trough of disillusionment before becoming mainstream. Given that 10-year presence, the so-called mainstream trajectory is nearer than it is far away. This is a path similar to the one that lidar (light detection and ranging), high-quality topographic technology and geographic information systems (GIS) have previously followed. Time is a wonderful thing.

In the UK the “National BIM Report: The Construction Industry’s Own View On BIM-readiness” was recently released. That report signaled the slow uptake of building information modeling (BIM). Although uptake has increased to 54%, it should be pointed out that BIM, too, has been around a long, long time. In the case of lidar, we also see a technology that went through a huge hype stage a long time ago (and still does for those new to lasers) before becoming more widely popular at a lower cost and introduced into mainstream use more significantly.

Yesterday I read Richard Waters’ article in the Financial Times entitled “3D printing ‘held back by lack of infrastructure’”. The article suggests it is a long way to the 3D printing paradise, pointing to obstacles in the infrastructure needed to support the use of 3D printing, and so on.

Carl Bass, CEO at Autodesk, wrote in a 2014 article, “Accelerating the Future of 3D Printing”, about 3D printing emerging like a butterfly from a cocoon, about to set out on a journey to the promised land of creativity. In fact, Bass has made 3D printing a key business driver for the company, augmenting it with products like Autodesk 123D and the EMBER 3D Printer. One might even argue that Autodesk moved out of the entire geospatial marketplace, a sector filled with 3D, and into the 3D manufacturing and design space along the way, inherently positioning Autodesk directly on consumer and business 3D manufacturing needs. Earlier I wrote about the fact that HP, a primary printer company at the time, seemed to miss the entire 3D printing concept.

To understand 3D printing, we need to understand 3D better. Part of the market is in 3D data creation, part in 3D data management, part in 3D modeling and analysis, and yet another part is closely associated with visualization. Then there is 3D production and prototyping. BIM people will talk about 3D in infrastructure, just as Bentley Systems does here, here and here.

Let’s back-track here. If we look at the BIM case more closely, we can see that it represents a fundamental shift to 3D digital technology for use in building information modeling and design. With this advance, projects move off paper and become more conceptually and visually realistic to the eye and mind than they were in 2D. Consider for a moment that CAD software has been producing 3D models for a long time, so what took BIM so long? Do you remember when there was no cloud computing? When file size mattered? When formats were hard to handle and interoperability was all the talk? And when people had no real idea what visualization could mean to project value? Well, all that changed.

Autodesk promotes 3D CAD through the cloud because it can work now. GIS companies like Esri deliver 3D city modeling through the cloud, because it is possible. Trimble, a company grounded in GPS, has permeated the 3D pathway in infrastructure development, building on a location foundation with embedded technologies to support software like here, here and here.

The point is that 3D and print are distinct goals. Printing in 3D means only visually seeing a representation, unless the 3D is placed in the context of the reason to print in the first place. Just as visualization had to expand and explore how communication connects visual graphics to workflow processes, so too does 3D printing.

Prototyping seems like a no-brainer. But beyond that, the entire educational re-purposing of 3D-printed models is largely unexplored in many industries. There are few courses explaining why and how 3D-printed objects matter and how to interpret them. Although augmented reality has shifted to understand the association of materials and colors in visually producing ‘what-if’ options, the same has not been done much with 3D models that are printed. And while manufacturing is the single most obvious user of 3D printing, from consumer through to business use, too many bottlenecks seem to exist between 3D imagination and a product in front of us in a timely manner.

The technology is there. The ability to capture, create, manage and digitally visualize is there. It would seem that the missing part lies in moving from production to delivery. And here is the key: since we don’t have an example in hand, what if the first production prototype is not the one we really want? What if it needs a change?

Someone needs to come up with the concept of “produce a large number of 3D-printed options with changes for a single price” or “produce up to 10 options for a single price” (with size limitations, of course).

Just as geospatial, BIM, satellite imaging and mapping have become mainstream, we can expect that 3D printing will become more mainstream. We are much closer to that time than further away from it and it would seem to revolve around 3D thinking, understanding and associating 3D with value – not eye candy.

GIS In 3D – The Case for 360 Degree and Panoramic Mapping

Can 360-degree or panoramic imaging deliver benefits through GIS approaches? The capability to mosaic or stitch images together provides the means to create larger images; it is the basis of stereo photogrammetry and has more recently been deployed using collections of unmanned aerial vehicles (UAVs) or drones.

Moreover, these combined images are usually geo-referenced. That is, features upon them are located in real-world space and can accurately be measured for shape, size and form. Many users of imagery will quickly point to the benefits of imagery for providing other intelligence through color, texture and object identification. All of this is not new and has traditionally formed a large part of the geographic information system (GIS) and geospatial toolbox.

Consider for a moment that most mapping and spatial analysis occurs from a single perspective. This can be seen where an airplane or UAV flies over a terrain at a constant height, images are tied together through ground control points to form larger images, and all sorts of map-related products are derived, such as stereo pairs, base maps, GIS layers and contour mapping. Some objects appear closer (higher) while some appear further away (deeper) relative to average ground level.

By comparison, if you take a mobile device such as a tablet, mobile phone or hand-held camera and, using its panoramic capability, make a large sweep from left to right, a panorama will be created.

This panorama has a single principal point, where the camera is located, and a series of conjugate points that are collected and automatically processed; in other words, it is the same kind of image that is normally taken from a perpendicular perspective by an aircraft or satellite. Yet we do not see many people using these 360-degree or panoramic images in GIS or map-related software.
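
The mechanics of building such a panorama are now almost trivial. Here is a minimal sketch using OpenCV's high-level stitcher; the file names are placeholders, and geo-referencing the result for GIS use would be a separate step.

    # Stitch overlapping frames from a left-to-right sweep into one panorama.
    import cv2

    frames = [cv2.imread(name) for name in ("sweep_01.jpg", "sweep_02.jpg", "sweep_03.jpg")]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)
    else:
        print("stitching failed:", status)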

Why do this? While it is easy to create a top-down, perpendicular-oriented map and extract intelligence from it, a top-down perspective does not provide a side-view orientation capable of delivering cross-section intelligence and changes on walls and surfaces as seen from ground viewpoints.

Using this approach, one might reasonably assess community-level characteristics at ground level by analyzing these panoramas: quality of streetscapes and sustainability of communities, ease of movement, visual cues, quality of surfaces and so on, as well as change over time.

Are we seeing a limited view of our world through primarily top-down mapping? Do we accurately see our communities from a people perspective with traditional, top-down-only maps? Can we assess infrastructure and building walls and surfaces over time, accurately? Are we only partially served by navigation strategies that focus primarily on location rather than on the attributes of spaces?

The entire concept of 3D GIS and 3D CAD is highly oriented toward physical structures and the viewpoints related to those physical locations. However, those interested in surfaces and details from a side-view perspective would gain other intelligence and useful data.

Imagine what would happen if Google Street View were to take all that imagery and cartographically create products from street views stitched together. Imagine those same vehicles remapping those streets at 5-, 10- or 25-year intervals. Old objects would give way to new objects, surfaces would change and viewing perspectives would be altered. The character of streets would change and spaces would provide more intelligence, enabling more accurate assessments of the quality of communities.

Both 3D and visualization can make this happen. And while we have a wealth of GIS tools and map technologies today, and we can create 3D infrastructure, our ability to truly monitor and understand spaces on ground-level terms varies widely and is often limited.

Can Virtual Reality and Augmented Reality Realize Their Potential?

Virtual and augmented reality are expanding at a rapid rate, sometimes accompanied by wild market estimates about their growth potential. Many people consider these technologies to already be mainstream, given their wide presence across numerous industry sectors.

We can consider augmented and virtual reality through two separate lenses. The first is the user or consumer orientation – people picking up these technologies and immediately applying them in a useful way. The second kind of user is not only a user but also a developer, in the widest sense of the word, driving innovations in AR and VR across hardware, software and services.

The latter will likely drive the market into more focused areas, offering deeper engagement to solve industrial and design problems and tasks. Their demands will enrich, enhance and stimulate greater market growth and expanded opportunity.

SoundBrush 3D

While the gaming industry is likely to be the largest sector to capitalize on these technologies, the industrial training and education market will also become a larger audience. Accordingly, VR and AR solutions directed at real-world problems will necessitate more detailed solutions, domain-specific knowledge and experience – the human element entering into the technological boundary.

Just as general map users, including consumers from all walks of life, make up the bulk of the map user community around the world, a coinciding professional mapping body – more interested in enterprise and industrial detail or urban planning – exists to push the technology into deeper spaces.

It is hard to imagine VR and AR expanding to meet wild market estimates without a corresponding level of energy put into educating and familiarizing the VR and AR user base (both consumer and developer) – helping it become more spatially oriented in visualization and to understand domain-specific knowledge in 3D/4D space and human context.

 

Interactive 3D Modelling and Visualization

Interactive 3D modelling and visualization provide the opportunity to couple early design and visualization together. As designs are created and changed, they can be visualized. This provides an immediate and quick opportunity to see – and communicate – issues about designs. In collaborative environments or projects, it also enables many participants to work apart, yet together, while effectively capitalizing upon computing resources and cloud infrastructure.

Boundaries x Income (Image: Bengler)

There are several companies that provide this kind of modelling, including Rig, Concept and Design, Tinkercad, CL3VER, Verold, Sketchfab, Forge and StructureStudios, to name a few.

Since many common GIS and CAD-based tools are capable of coupling design with visualization in a rapid manner, these too can be classified within the same group.

Esri Releases ArcGIS Earth – and why this is significant

For many years we have seen globes that have allowed people to publish data to the internet. Most of the early ones were focused on 2D data and were quite visualization oriented. That is, users would create a map and then literally ‘paste’ it upon a globe. Far too often the link back to the data driving the map was missing or simply not available – sort of like a few states in Germany that continue to publish high quality maps to PDF files and distribute them around (which perplexes me).

ArcGIS Earth can be seen as an evolution from Google Earth in a sense. Although Google travelled down the Google Earth route previously, many people were itching to hook professional and non-professional GIS into a 3D globe. Google eventually extended their application to include more mapping functions, but few companies or organizations could match the capability or depth that Esri could bring to a 3D earth-like globe. It should have been done a long time ago, and Esri has been slow to envision and get this one going. It will also take some imagination to keep it developing toward where it might grow and evolve.

The strength and significance of this announcement lies in the fact that it ties heavyweight, professional GIS in all its functionality to a real-world 3D globe. This is much more than pasting maps on a globe (though I’ve no doubt that its free availability will get many of those – which is good).

For professionals though, this means that the functionality of GIS at the 3D modeling, visualization and statistical level can leverage the product into something more than mapping alone, in a unique way. It also means that data and visualization are more connected.

Google Earth gave up on 3D in this way, eventually selling that portion, along with the 3D Warehouse that enabled everyone to create and publish 3D models into the real-world globe, to Trimble. We can only hope that Esri recognizes that anyone and everyone will want to connect to such a globe. Esri will need to let people own their own data, and be clear on that. More than a few users had ownership concerns with Google Earth, which made publishing 3D content there problematic.

Further down the road I think Esri could head into ‘geographic search’ in a very unique way – a space where no one currently operates at that level. This would add a lot of industrial and manufacturing depth to an already successful 3D mapping company.

 

What Makes A Good GIS Question?

Good geographic information system (GIS) questions are not easily created. They can be mysterious to some, unknown to others, and cause still others to scratch their heads in curiosity. Since most questions posed to a GIS are relatively simple – where is this? – most users have their answers satisfied and move on. But a much smaller number of people seek answers to more complex questions that are not immediately available – or even understood.

Aerial image of Berlin

You will often hear many people who use spatial data say that most users don’t want depth, they simply want to locate things. Fair enough. But this post is about the other group, the one that does seek answers to deeper questions, more complex problems and often as-yet-unknown phenomena.

Think of it this way. In agriculture, about 15% or fewer of all farmers grow over 90% of the food on the planet. These are advanced farmers, often deploying the latest agronomic techniques and methods. They know agriculture from many angles, and tend to follow the value chains that emanate from the soil – markets, demographics, laws and other external factors. They tend to be advanced, on the edge of new technology, and are often considered early adopters. They try new things and fail often, but in doing so they realize profits through connections to the valued, deeper questions and answers. I would suggest that fewer than 15% of all GIS users (like advanced farmers) form a similar group.

These advanced GIS users will struggle to implement GIS, but come at it from different angles, including their own systems, cloud applications or simply contracting the whole thing out. This group will ask questions like:

— An Insurance Person – “Where are those areas with flooding in the last 5, 10, 25, 50 and 100 years, and how many people and how many hectares do they involve in each county, province or urban area?”

“I want to produce a risk rating map for a city that shows those areas where building and construction types lead to the fastest rates of fire spread. How many areas have seen fires in recent years, and what is their relationship to fire stations and emergency response in terms of financial loss and loss of life?”

— A Climate Person – “Given the current trend of rising temperatures, what are the risk factors for rural areas within 100 km of cities where greater than 50% of the economy depends on agriculture – and rate those factors from highest to lowest.”

“At present rates of precipitation and changing climatic activity, what flood control capabilities do we have to meet potential increases and what levels are we likely to see – and where will they be located?”

— A Small Store Owner – “What is the population demographic for my village or city, and the age group distribution?” “Of this group, what level of income is spent on the kinds of goods I am thinking of selling?” “Where do they live, and which location offers the best transportation access to where I might establish my business?”

— A Busker – “How many tourists pass certain locations at which times of day?” “Which train stations have the best locations where people move slowly, and where are these locations?”

— An Urban Planner – “I want to know the numbers and locations of people on south-facing slopes greater than 3% but less than 5%, within 4 km of urban transport systems, who have access to grocery stores within 3 km and drive two cars.”

— A Student – “Where are the cheapest two-bedroom apartments within 8 km of the college that have access to bus services? Rate them in €100 increments and consider good and poor coffee shops within walking distance.” (A rough query sketch for this question follows the list.)

— A Construction Operator – “Which buildings have the highest energy efficiency within the city, where are they located, and what is the rating of their performance efficiency? Also include maintenance cost ratings and potential increases in property value based on regional investment in the area, in 5-year increments.”
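To make the point concrete, here is a hedged sketch of how the student’s question might be expressed against hypothetical apartment, bus-stop and college layers using geopandas. The file names, column names and the projected CRS are all assumptions, not a prescription.

```python
import geopandas as gpd

# All file names, column names and the EPSG code below are assumptions.
apartments = gpd.read_file("apartments.gpkg").to_crs(epsg=25832)   # 'bedrooms', 'rent_eur'
bus_stops  = gpd.read_file("bus_stops.gpkg").to_crs(epsg=25832)
college    = gpd.read_file("college.gpkg").to_crs(epsg=25832).geometry.iloc[0]

# Two-bedroom apartments within 8 km of the college.
candidates = apartments[(apartments["bedrooms"] == 2) &
                        (apartments.distance(college) <= 8_000)].copy()

# Keep only those with a bus stop within roughly 400 m walking distance.
near_bus = gpd.sjoin_nearest(candidates, bus_stops, max_distance=400, how="inner")

# Rate in €100 increments, cheapest first (the coffee-shop criterion is omitted here).
near_bus["rent_band"] = (near_bus["rent_eur"] // 100) * 100
print(near_bus.sort_values("rent_eur")[["rent_eur", "rent_band"]].head())
```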

It should become clear that many of the people asking complex questions of GIS are directly tying operational value to the answers they seek. They ask about more than locations, and often want answers to real-world questions with significant human and financial consequences.

Finally, you will not often hear about these complex applications. Sometimes, but not often. Why? In most businesses and larger corporations, these systems are providing answers to the business models that the businesses are built upon at the foundation level. These owners tenaciously protect their GIS because they represent intellectual property and enterprise knowledge.

Don’t be fooled: just because you do not see them in practice or know about them does not mean they do not exist.

15% or fewer of all GIS users are likely driving most of the world’s business and personal activity in some way, shape or form. Just like farming works.

 

What Makes A Good GIS?

It can be intriguing to read what makes a good geographic information system (GIS) sometimes. Part of this stems from various factors, biases and relationships based on who is writing the report. In other cases, what is actually being promoted as a GIS is not really a GIS, as expressed by well-known, published definitions. In its basic form, a GIS should be capable of performing four basic functions – the better ones not only perform these, they excel at them.

digital 3-d city

1. Data capture – Refers to the ability to import, acquire or capture spatial information. Since GIS are not sensors per se, they are seldom bought and installed to capture data alone. Instead, they usually capture data from a broad array of sensors and data capture devices, whose various formats must all be ingested into a GIS. For this reason, more than any other, interoperability matters. If you cannot get the spatial data into the system, then the GIS cannot use it. Forget about whether or not it is the best, open source or most highly rated – focus on the real ability to capture data and use it within a GIS. This is basic, primal and necessary. It applies to GPS, aerial imagery, lidar data, other spatial formats and so on.
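As a rough illustration of that ingest step – the file names and formats here are placeholders, and geopandas/rasterio are just one possible toolset:

```python
import geopandas as gpd
import rasterio

# File names and formats are placeholders for whatever the sensors deliver.
parcels = gpd.read_file("parcels.shp")            # vector: shapefile
tracks  = gpd.read_file("survey_tracks.geojson")  # vector: GeoJSON from a GPS logger

with rasterio.open("ortho.tif") as src:           # raster: GeoTIFF aerial image
    print(src.crs, src.res, src.count)            # coordinate system, pixel size, band count

# Bring the vector layers into one coordinate reference system before combining them.
tracks = tracks.to_crs(parcels.crs)
```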

2. Data management – With data captured and in a GIS, attention turns to the ability to manage that information. Database functionality and performance are critical factors in an efficient and useful GIS. Since data may be either spatial or non-spatial in nature, and derived from others’ work and different kinds of data tables, the ability to link, join and query these various tables is crucial when trying to extract answers from data. Think about what you want to ask of a GIS, then determine: “if this GIS held my data, could I ask this or that kind of question and get a return?”

This will immediately tell you one of two things: 1) you don’t have the right data collected, and/or 2) your GIS does not have the capability to answer the questions you want to ask. Be aware that data mining and exploration, although often associated with the third stage (next), can largely depend on the data management ability here. Automated data management practices and insights can deliver benefits at this stage.
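A small, hypothetical example of that link-join-query pattern – the layer names, columns and the flood question itself are invented for illustration:

```python
import geopandas as gpd
import pandas as pd

# Layer names, columns and the question itself are invented for illustration.
parcels   = gpd.read_file("parcels.gpkg")          # spatial layer with 'parcel_id'
ownership = pd.read_csv("ownership.csv")           # non-spatial table: 'parcel_id', 'owner_type'
flood     = gpd.read_file("flood_zones.gpkg")

parcels = parcels.merge(ownership, on="parcel_id", how="left")   # attribute (table) join
flood   = flood.to_crs(parcels.crs)

# "Which privately owned parcels intersect a mapped flood zone?"
at_risk = gpd.sjoin(parcels[parcels["owner_type"] == "private"],
                    flood, predicate="intersects", how="inner")
print(len(at_risk), "parcels at risk")
```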

3) Data analysis / modeling – Probably the most valued part of a GIS has always been the ability to spatially analyse and model data. It is also the least written about, since so many applications today are concerned with mapping alone – locating places on a map. It is the spatial analysis area, more than any other, that will most clearly distinguish your business or results from others. This is why it is important to have someone who knows about spatial analysis if you are running spatial databases in a business. The ability to perform network analysis, overlay analysis, principal component analysis, 3D spatial analysis and high-level, discipline-focused analysis leads to real answers for real-world problems. Keep this in mind when everyone around you wants to dumb your spatial data down and thinks GIS is solely about putting dots on a map. The trend is toward higher-level spatial analysis and more integrated solutions in the future. You see this, for example, in building energy analysis, structural analysis and design, 3D modeling and advanced visualization. It also leads to a proliferation of visualization.
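A minimal overlay-analysis sketch in the same spirit – the farmland and roads layers, the 500 m corridor and the CRS are assumptions, not a recipe:

```python
import geopandas as gpd

# Layers, buffer distance and CRS are assumptions for the sake of the example.
farmland = gpd.read_file("farmland.gpkg").to_crs(epsg=25832)
roads    = gpd.read_file("roads.gpkg").to_crs(epsg=25832)

# Build a 500 m corridor around the roads, then intersect it with farmland.
corridor = gpd.GeoDataFrame(geometry=roads.buffer(500), crs=roads.crs)
affected = gpd.overlay(farmland, corridor, how="intersection")

print(round(affected.area.sum() / 10_000, 1), "hectares of farmland inside the corridor")
```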

4) Representation and visualization – Traditionally this has meant the ability to produce a map or drawing. Better GIS include a large number of cartographic capabilities. Today the trend is toward outputs of high-resolution graphics, animations and 3D graphics. Photo-realism is key, but not in every application. Some people need high quality drawings with detail, while others need high quality schematics. Moving data (mobility) means paying attention to how spatial data propagates visually across devices. Animations and ‘change over time’ applications are more prominent now, and links that output results into high quality visualization software are important too. A good GIS keeps all of these issues in mind.
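By way of a simple sketch, a thematic map output might be produced from a hypothetical districts layer like this (the file name and the 'energy_rating' column are assumptions):

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# The districts layer and its 'energy_rating' column are hypothetical.
districts = gpd.read_file("districts.gpkg")

ax = districts.plot(column="energy_rating", cmap="viridis", legend=True, figsize=(8, 8))
ax.set_axis_off()
ax.set_title("District energy ratings")

plt.savefig("energy_map.png", dpi=300)  # high-resolution output for reports or further styling
```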

 

Bentley Year in Infrastructure 2015 – Visualization and 3D at the Forefront of Governments

Visualization and 3D related technologies are propelling governments to the forefront in urban planning, infrastructure design and operations. Only a short while ago one would rarely find 3D technologies in use; a seeming lack of tools and knowledge permeated the government scene. Now, a few short years later, we can see that a major change is taking place. This is reflected at the Year In Infrastructure 2015 event, where businesses are delivering projects to governments, and even collaborating with them, together, within 3D environments and visualization spaces.

Examples of this can be found in airport operations at Sydney Airport, which is undertaking a facilities management program based on 3D technology. The Singapore Land Authority, no stranger to high quality survey mapping and geospatial data creation, is now building a 3D model of the entire country. Meanwhile, Shanghai Investigation, Design & Research Institute Co is delivering advanced 3D-based pump station designs into massive infrastructure projects in China.

Clearly, 3D and visualization are having major impacts on government projects these days, and the future will see more of this explosion in 3D.

Bentley Systems Turns The Infrastructure Ship Directly Into the 3D Wind

At the Year In Infrastructure (YII2015) annual event in London, UK today, a change was stirring and a new direction was struck as Bentley Systems turned the mighty infrastructure ship into the 3D winds. The course is now set across product lines and CONNECT Services to unleash some of the truest values of 3D data we will see in the days ahead. Those days of pretty (and empty) 3D city models and unintelligent infrastructure will vanish rapidly.
Few of the products discussed today lacked a strong 3D reference – or a visualization one. Included were ProjectWise Concept Edition, Bentley CONNECT Services, OpenRoads Concept Station, ContextCapture, and presentations about the new AssetWise Amulet V33 and the wider convergence of IT with OT data.

Clearly, Bentley is about to achieve considerable gains in rule-based simulation, 3D design and 3D geospatial related activities. Value is being extracted from ContextCapture 3D technology through recent acquisitions and much of the discussion included advanced modeling and simulation within a CAD-GIS framework – unlike ever before.

It was inspiring, eye-opening and downright revolutionary. Today the ship turned to a new course in such a way that one might have hoped for, but did not expect so soon.

Keep your eyes looking forward into the wind. You are in for a ride.

Connecting The Be Inspired Infrastructure Dots

The annual Be Inspired Awards will begin next week in London once again. The finalists have been selected, and the event promises not only to deliver intriguing, interesting and wondrous projects that exemplify how the world’s most challenging infrastructure problems are being tackled, but also to firmly orient minds toward inspirational objectives.


We at 3D Visualization World have been attending this event for a number of years now. What began initially as a meeting of minds in a collegial atmosphere, has emerged to become an infrastructure highlight – describing, presenting and discussing how global challenges are being met and solved. Make no mistake, clean water for residents around the globe matters, efficiency and availability of energy and power matters, safe and useful transportation facilities and networks matter. But BE Inspired does not stop there…

Today, we see advances in technology and innovations in construction emerging from all corners of the globe. And as one might expect, these advances are maturing to provide more integrated solutions that are more analytical, more connected and more informative. New buildings can now take on new forms and construction techniques; they are planned using advanced modeling, 3D technologies and visualization. Water systems deploy the latest network simulation, modeling and analysis techniques to ensure water travels from Point A to Point B efficiently – reducing costs, optimizing energy use and ensuring quality.

While most people seem aware of the entertainment aspects of 3D and visualization, fewer are aware that the basic building blocks of the world’s infrastructure have been built using 3D and visualization tools – and for more than 25 years already. There is a wealth of 3D and visualization possibilities sitting in the desks, cabinets and vaults of the world’s infrastructure projects – all waiting to be integrated into operations, maintenance and new plans.

The BE Inspired Awards also inspire young minds. And for these minds, the opportunities in sensor technologies, visualization, augmented reality, 3D point clouds and 3D modeling should not be lost. All that you need to know to model cartoons and animations is transferable into the infrastructure domain – both ways.

While the BE Inspired Awards often speak to the issue of inspiration, and we have sat in the audience seeing the nominees in sheer awe, perhaps it is time to acknowledge that these awards give hope as well.

In a world of limited resources, often aggravated by destruction and natural emergencies, and of demanding commitments due to population challenges, we sometimes lose sight of the fact that we can overcome.

The Be Inspired Awards 2015 is a good opportunity to see inspiration and hope in action.

Has Geo-Intelligence Failed The Refugee Crisis?

Ask anyone involved with geo-intelligence tools and technologies and they will eloquently, and comprehensively, explain the vast array of geospatial technologies at work around the world today. These tools and technologies are magical, but in the case of the refugee crisis (and other emergency phenomena), one must ask – “is the magic visible to the eye and mind?”

We are great at collecting data, arranging it and talking about databases. We are less adept at explaining what it means, not only to our immediate colleagues and partners, but also to the public at large – and even less able to translate it into policy. It is this capability, more than any other, that needs work – and lots of it, across the geospatial industry.

When we speak about intelligence, and geo-intelligence in particular, we are often not speaking about highly sensitive information (though some may be), and usually not talking about secret details that exist in the shadows. Much of the so-called ‘geo-intelligence’ service landscape is oriented toward collecting accurate information about everyday things and activities.

On this note, one might ask if our satellite imaging, aerial imagery, map knowledge and GIS analysis capabilities are living up to our expectations. Have we demanded this information to serve us better, and to translate into workable policy and governance issues?

That thousands of migrants suddenly arrived in Europe in the snap of two fingers should not, by geo-intelligence standards, be beyond imagination. It should have been expected. Take a look at the young Dutchman Thomas van Linge, who in a recent Der Spiegel article entitled ‘Islamic State: The Dutch Teen Who Maps the Jihadists‘ is able to string together highly effective overviews of the fluctuating borders in the Middle East (used by leading news agencies around the world). To say that his work is not intelligent is an understatement. This is geo-intelligence at its finest – and in its simplest form. It ought to cause one to scratch their head and see, unequivocally, that people are moving.

Satellite imagery of the changing state of Middle East cities and villages over the last few years should, at the very least, cause one to scratch their head and wonder: seeing the changing landscape of altered, destroyed and lost homes and buildings, surely people are on the move.

Geospatial technologies and geomatics have long focused on the technical, including basic theory, but have largely avoided capitalizing on these technologies in a policy sense. Consider for a moment the European Directive that gave rise to INSPIRE, the trans-border, EU-wide initiative to standardize numerous spatial data types and annexes, to enable the EU to deal with issues that cross borders. For all its knowledge – and it has been good, particularly on environmental factors – it has lacked in terms of demographics in relation to movement, landscape and distribution. One might ask, “why hasn’t INSPIRE, for all its cost and effort, met the refugee challenge straight on with more maps, more transportation solutions, more demographic understanding and aligned funding provisions?”

Make no mistake, the technologies are powerful – but we need to ask: are we translating their capabilities into transferable policies that support living, growing and understanding our solutions as well as we need? We have lots more work to do.

 

GIS Interfaces and Frameworks – Advancing Spatial Evolution

Over the years many people have asked me – “what is a GIS?” or “what is spatial?” and my favorite, “that’s not GIS, is it?” Depending on who you are talking to, where they work, who they represent or, most often, their background, you get different angles on what for all intents and purposes amounts to the same thing.

digital 3-d city

Here are a few constants one might think about to begin with:

  • Geographic information systems (GIS) usually recognize GIScience as part of their DNA. That is, spatial science concepts are involved, along with functionality that is only available in a few GIS. Managing geometry, performing overlay analysis and spatial algebra, and strong orientations to advanced modeling and analysis by discipline distinguish the more professional GIS.
  • CAD is spatial. Most handle geometry well, and many perform cartographic tasks effectively. Today, the more advanced spatially oriented CAD packages have very strong modeling and simulation capabilities – often surpassing GIScience knowledge in some domains.
  • GIS have somehow accepted that the ‘map’ is the final destination. Not all of us believe this. ‘Spatial’ embodies all things, and the capture tools (i.e. lidar, photogrammetry, remote sensing, etc.) are often applied outside of the map line of thinking.
  • While consumers (many people talk about 75% of all geo activity being map routing / location applications) appear to have dominated the discussion, I would suggest that the less repetitive, higher-value and more societally dependent applications are likely connected to infrastructure, health and the environment. In fact, I would suggest that the ‘consumer-pro’ value curve is inverted – like an iceberg.
  • An entire domain of visualization, simulation and advanced augmented reality is not truly being represented in a spatial science context. The connection between GIS and augmented reality, for example, is uncanny for its natural alignment – one based on “what-if?”
  • Professional GIS most surely extends well beyond the map. Non-recognition of that reflects a very narrow view of the truer power of GIS and GIScience itself. It ought to embrace building information modeling (BIM), augmented reality, advanced visualization and representation, more statistics, connect with human spatial factors, expand on spatial training and education methodologies, include economic modeling and embed quality-of-life or performance indicators.

How we work with spatial data and concepts has not really changed much, although the tools have become more powerful – and are exceptional for their collaborative possibilities today.

Interfaces usually embed the same kinds of icons and often become overladen with functionality. This has created a whole generation of disenchanted users complaining that GIS is complex and confusing – without the problem ever being addressed, other than by making the map an even more central focus. In this regard, automation and simplification have succeeded in producing maps more quickly. Though, one might ask, do users usually know what the automation techniques are actually creating?

At the end of the day, customers matter, and if businesses succeed and quality of life improves, then that is success – usually measured on a financial balance sheet. But does that always mean we are doing something to its full potential? Reaching out? Exploring? Expanding our comprehension?

My observation is that 90% of GIS activity today is oriented toward making a map – and usually a very fast navigation oriented map. Is it any wonder GIS interfaces have stayed the same more or less?

Strip away the map context and think about GIScience in terms of ‘spatial context’, and the world suddenly expands. Arranging shop hardware on industrial floors and in spaces to increase production is a spatial simulation task. Understanding the energy dynamics of a building is one thing, and can be analyzed in CAD software, but understanding the contribution to a building’s energy factor from nearby buildings of various material construction is another spatial perspective.

For all of the change detection that we do with imagery today, why aren’t there more discussions about how communities are changing, and more attempts to simulate those changes? The upcoming work revolving around automated vehicles, sensors in roads and tolling systems in Europe, for example, is a high-level, integrated operation that is currently technology oriented, but it lacks more in-depth community and spatial analysis. Don’t vehicles in automated road networks actually go – places?

Traditionally, GIS have been held to include capture, management, analysis and representation. The latter has somehow evolved, more or less, to mean a map. Yet looking at building information modeling (BIM) – which actually includes the word modeling – the process and design factors are key elements, and maps, while produced, are only part of the bigger BIM picture. The orientation of an overhead crane on a construction site has a major bearing on safety and efficiency on that site, and it is a spatial analysis function that connects the dots to make this happen – not a map.

A map-only orientation prevents otherwise spatial thinkers from understanding how GIScience concepts can be embedded into wider spatial practices. Mention the word ‘geo’ and everyone connects it to ‘map’. Mention the word ‘spatial’ and most people either do not know what you mean or think ‘space’ and ‘measurement’.

Additionally, what we understand as a map today could easily be interchanged with a visualization, rendering, animation, hologram, design graphic or drawing. But we often find that the specialized tools for quality delivery of these functions are either stand-alone or, in the worst cases, unavailable through lack of interoperability.

Imagine a GIS that presents an interface that is designed for you. One that understands the discipline or topic that you are working on, and suddenly re-orients itself to your experience, knowledge, job and task. The full power of ‘needed GIScience’ is presented and operating on your side – whether you know it or not.

The upside to this is that the software would fill in the blanks – not only for the knowledge and understanding you and I have, but also for what we do not know. Why reinvent the wheel when the interface could truly be intelligent? Imagine having a group of very bright minds sitting around your project at hand, without them actually being there – present instead through the software interface.

How intelligent do you think GIS have actually become?

I’m betting that many of you are scratching your heads thinking – “where did we make the incorrect turn and how do we get back on the spatial road beyond maps alone?”

HERE Maps Sold – Good, Bad or Indifferent?

Nokia continued its transition march last week, selling off the final well-known piece of the once mighty mobile phone and mapping giant. It announced that Nokia Maps would be sold for nearly €3 billion. The plan was not wholly surprising, since the company had been hinting around the edges at a sale for some time. The sale significantly alters the mobile map landscape, and it also shifts what was once a map- and cartography-focused business into a supporting role for auto manufacturers.

Connected auto

Clearly, making good cars is the bottom line for buyers AUDI AG, BMW Group and Daimler AG. Some readers may not know that Volkswagen almost wholly owns AUDI AG. So, in essence, the German carmakers have banded together, and contrary to some great plot to pursue Google or TomTom, this deal is about standardizing practices and processes for these German manufacturers, along with other businesses pursuing advanced mobile ‘intelligence’ offerings in the German industrial sector. The deal goes a long way toward streamlining first-to-market advanced transportation intelligence systems.

In a wider sense this will play into connected transportation initiatives currently on the horizon from Brussels that will favor intelligent transport across the wider EU. Once there, the rest of the world will follow. Sensors in roads, driverless autos, environmental mobile sensors, advanced real-time 3D visualization, 4D analytics, robotics and more advanced auto-road interactivity over the cloud will all become possible – further down the road.

But the sale does raise interesting questions, like: why would Ford, GM or Honda (or others) want to connect with this? Would you buy a car for the mapping system – especially a proprietary one? Why not continue with your TomTom (note: I use a TomTom 740), or why not just go wholly open and use OpenStreetMap, which is generated by users in an open-source community?

I’ve followed Nokia Maps for a long time, and even though I cringed at times using what was once called Ovi Maps, the product has slowly gotten better and has even emerged in 3D space in more recent times. Still, as I wrote earlier on this blog, the key to any of these mapping applications lies in their updates. It is extremely costly to update these maps on a global level, and surely the car makers don’t want to see lawsuits emerge because their own mapping caused drivers to think left when they should have been thinking right.

For the wider geospatial or geographic information community, this represents yet another acquisition that follows the trend of geospatial companies getting gobbled up, then vanishing into the landscape of larger companies that have neither the resources nor the willingness to really invest in their truer innovation.

As such, this sale is both an opportunity for new innovation and a loss from a more strongly oriented geographic-based focus.

Time will tell what this deal really means – cross your fingers.

Connected Farms and Agriculture

Agriculture has always been at the forefront of innovation. Food matters, and the production of food requires an extensive federation of systems that connect the field to the dining room. What makes agriculture so interesting is that it combines an extraordinary variety of disciplines, each focused on conceptual and technological advantage, both alone and in an interconnected way. The internet is a conduit that passes through each of these connections.

GE Intelligent Platforms recently published an article entitled ‘The Internet of Things Inside Grain Operations‘ that points to some of the gains the Internet brings. It identifies the monitoring issues that grain elevators are now addressing through enhanced networks of cloud-based services.

Bayer CropScience, Syngenta, SAP, John Deere, Dassault Systemes and even the European Commission have been hot onto the trail of digital farming and connecting high-end digital networks into the food production workflow.

The latter suggests that three broad areas – precision farming, robotics and big data – are driving 10 key areas where innovation will take place. For parties interested in visualization and 3D, these are promising opportunities that embody imaging, exploratory visualization, mapping, automation through robotics and spatial data analysis encompassing end-to-end workflows along the food chain.

Generally, most of the food around the world is grown by less than 20% of the producers. The average age of producers is over 50, and this plays into the fact that cultural barriers will make the transition to digital farming a slower than expected journey. Nevertheless, new cloud-based services that integrate the flow of knowledge into decision-making are helping to take some of the mystery out of what was otherwise highly technical, and sometimes difficult to understand.

On the education front, it has been difficult to get younger minds more involved in farming. That is now changing as a host of Internet and digital technologies for farm use are knowledge-based and therefore have wider re-purposing capability – meaning what young farmers learn is now of high value in other work as well. Accordingly, how food is produced will change, and the capital and knowledge needed to make it happen will also change as different options emerge.

The key to understanding this transition lies in embracing the extent to which knowledge and capital can traverse traditional approaches, provided minds change – even just a little.

HERE Mapping Division: To Sell Or Not To Sell – or Find The Way Forward

What’s up with Nokia’s HERE mapping division? It has been reported during the last week that Nokia is about to sell the HERE mapping division, even though CEO Rajeev Suri says the company is in no hurry to sell one of the world’s most widely used mobile navigation mapping systems – used in Windows, Android and iOS devices worldwide.

With several reports suggesting that the division might be sold in the $3-5 billion range, and some suggesting the list of interested buyers has narrowed to three (although I would strongly disagree), let’s take a step back for a moment and take a closer look at the possibility of this sale and some of the drivers behind the price and usefulness of the technology.

Evolving technology
Technology professionals who have followed this technology for a long time will tell you that it emerged from serving the basic need to provide maps in digital form. More pointedly, it supported digital mapping with algorithms largely evolving from geographic information system (GIS) technology – specifically network analysis. If you know where the streets are and connect those databases to addresses, then you can route between points or find a location.
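A toy sketch of that idea – streets reduced to a weighted graph, routing reduced to a shortest-path query. The network below is invented purely for illustration.

```python
import networkx as nx

# A toy street network: nodes are junctions, weights are travel times in minutes.
streets = nx.Graph()
streets.add_weighted_edges_from([
    ("A", "B", 4), ("B", "C", 3), ("A", "D", 7), ("D", "C", 2), ("C", "E", 5),
])

# Geocoding resolves an address to a node; routing is then a shortest-path query.
route   = nx.shortest_path(streets, "A", "E", weight="weight")
minutes = nx.shortest_path_length(streets, "A", "E", weight="weight")
print(route, minutes)  # ['A', 'B', 'C', 'E'] 12
```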

Most of this kind of mapping was done when the company began as NAVTEQ before being sold to Nokia for $8.1 billion. Yes, almost three times the price currently estimated as the selling value.

The data-rich features embedded into the HERE mapping technology you see today have, in fact, significantly increased the value of the mapping data. But they only emerged with the advent of social networking apps, lower-cost mobility devices, faster networks, appropriate applications (apps) and lower-cost cloud computing – all in recent years.

The basic foundation for HERE mapping products relies upon high quality map data. It is the air the platform breathes. Most of those suggesting the division is simply an acquisition target fail to understand that this raw data needs constant and continual updating. Roads change, routes change, new construction takes place, and the non-spatial information linked to the raw location data changes too. So promising has this technology been that Nokia itself established a $100 million Connected Car Fund last spring.

The promise of a connected car fund evolved from the growth in Advanced Driver Assistance Systems. These systems have been discussed for 10 years or more as the concept of intelligent cars has evolved, and the discussion is reaching new heights with advancements in driverless technology. Virtually every car-maker today is working on a driverless variant, and HERE is the product of choice for about 80% of car navigation systems. Consequently, Nokia has a fairly narrow field of view when it comes to mapping data and the emerging trends for that data otherwise. This, in my opinion, is why such a low price is being offered for the division.

Trends Based on High Quality Spatial Data
When Nokia sold its phone business to Microsoft, it sold one of the world’s most advanced mobile phones (the Lumia), which included the HERE platform. But another significant part of this story lies in the 41-megapixel camera with PureView technology. This remains a very advanced high-resolution camera – one that could serve other trends emerging outside of automotive navigation alone.

For Microsoft it makes sense as an acquisition because the platform links to the high quality 3D model data emerging from Microsoft’s UltraCam, which is used to capture aerial imagery. For Microsoft this means both aerial and on-the-ground mobile device imagery are high quality and can be connected readily. More importantly, it means a range of visualization and animation technologies that process imagery can also be more fully developed. It is not surprising that the recently announced HoloLens technology is being discussed, as that product will depend upon comparing real to augmented-reality representations. Consider, for example, looking at a room design that needs high quality imagery compared against virtual walls, furniture or other objects.

The 3D trend began to emerge at Nokia around 2011, when the Ovi Maps products were touting 3D city models. Around that time the Swedish company C3 Technologies was developing 3D map technologies and was connected with Nokia, although it was ultimately acquired by Apple. As a result, mobile device manufacturers began the grand shift toward more advanced mapping and location analysis, embedding maps into many applications and enriching them with user data.

In terms of visualization and animation, Nokia HERE has in some ways missed a profound opportunity to capitalize on markets outside of the automotive sector. The Windows mobile device products have spawned new initiatives in the design, architecture, engineering and science areas. Connected applications are now beginning to emerge that include augmented reality built around maps, industrial asset management, building information modeling (BIM) and sensor applications tied to locations. The iOS platform is seeing similar applications being developed. Meanwhile, Nokia HERE appears stuck on positioning its technology to place a vehicle on a road (again, this is why it is so under-valued).

Engineering, Science and Industrial Mapping
While Nokia HERE has successfully navigated the consumer application space for mapping to its logical endpoint, Google and many others have risen in capability to do the same. That Nokia sees this endpoint as the time to assess the mapping division makes sense, if the perspective is consumers.

There are significant forces emerging around the globe in the infrastructure and design space that tie engineering, science and industrial applications to foundations built on spatial data. In the United Kingdom, for example, BIM underlies the entire building and construction industry through legislation. How things are planned, designed, constructed and maintained depends on location data and mapping. Where assets are, how they are moved, where they are maintained and so on – all depend on mapping and visualization. Nokia HERE and its accompanying technologies have drifted around this space to a large extent, which is unusual for a company now seeking to become more involved in cloud and network services that actually serve BIM well. So – is Nokia about to divest itself of a technology and capability that forms, potentially, part of its own Alcatel-Lucent strengths?

There is a digital city shift underway toward intelligent buildings that includes energy management, utility distribution and performance, and the location of spaces and places. Transportation systems, including new rail and aviation, and the movement of people through spaces are about to become more sensor-driven and digitally connected. These connections are likely to be more oriented toward digital mapping and, in particular, more 3D design. Suddenly all kinds of data will connect and tie into mapping, from robotics to engineering to sciences such as geotechnical analysis and emergency services and response.

Is Nokia HERE Worth $3-5 billion?
The quick answer is yes. In fact, at that price it is a bargain.

While automotive manufacturers are undoubtedly eyeing HERE, it is obvious why. But Microsoft itself would benefit from owning HERE and could likely raise the cash to make it happen easily. Don’t forget that Facebook also depends on HERE mapping, and it too would see a bright future along the lines of basic mapping while also supporting its own augmented reality efforts. Apple could finally resolve its own mapping needs through ownership of the Nokia HERE division. At the same time, it would open up many new opportunities for the mobility devices, like watches and sensors, that the company is working on.

A key take-home point, if the division is sold, is that the buyer must have the mapping knowledge to ensure the quality of the data is maintained; otherwise it will quickly lose value and even jeopardize products if it is not maintained correctly. There is significant potential for this service provided it expands on a number of fronts into more knowledge-based expertise – in other words, maturing. To date, Nokia seems reluctant to take the wider view needed to drive the innovation others are already embarking upon in the early stages.

Given the original purchase price of this technology, the advancements and investments that have already taken place, the current market share and the ‘potential’ described above, it would seem the value of this division is far greater – perhaps as much as $15-20 billion to the right buyer.

This is the real stuff – as compared to some of the values paid for stocks with uncertain futures in recent times.