
Automating Procedural Modeling of Buildings from Point Cloud Data

Description:

TECHNOLOGY AREA(S): Info Systems, Sensors, Electronics, Battlespace 

OBJECTIVE: Create scalable automated 3D building models from attributed point cloud data that render in a web browser to simulate an unknown environment. 

DESCRIPTION: Many defense customers require lightweight 3D models that can render in a browser or on mobile devices to accurately depict an unfamiliar area. While consumers in industrialized countries take Google Maps and Street View for granted for spatial decision making and navigation, these applications rarely exist in operational areas. Lidar sensors and computer vision techniques for photogrammetrically deriving 3D data now provide the most realistic operational picture of an unknown area. While each approach creates thousands of accurate (x, y, z) points for an object, these massive point cloud datasets currently benefit only those desktop users with the appropriate software to render them. This research will help bridge that gap.

“One of our biggest hurdles is the data size of point clouds going to tiny little FOBs or ships. In reality all our [soldiers] want is a 3D model that looks as realistic as possible for them to orient themselves and plan…” — Wes Roberts, Image Scientist, SOCOM NST

This project will reduce the time required to generate building models from raw lidar and photogrammetrically derived point cloud data by automating a manual process through rule-based procedural modeling. Procedural modeling refers to a computer graphics technique in which attributes are converted into pre-rendered artistic primitives, such as walls, roofs, or any other feature, at the micro (per-building) level. The approach can also leverage scene-level macro attribution: for example, all building footprints in a residential area, combined with per-building height measurements, can serve as the rule inputs for creating residential building models with accurate heights. By using a rule-based approach, we can scale the modeling of many buildings at the same time, greatly reducing the amount of manual modeling. 
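As a minimal sketch of the rule-based idea described above, the following Python snippet extrudes building footprints to per-building heights, applying one rule across a whole scene. The function and field names are illustrative assumptions, not part of any existing pipeline or standard.

```python
# Illustrative sketch (hypothetical names): one procedural rule that turns
# a 2D footprint plus a per-building height attribute into wall and roof
# primitives, applied uniformly to every building in a scene.

def extrude_footprint(footprint, height):
    """Extrude a 2D footprint polygon into walls and a flat roof.

    footprint: list of (x, y) vertices, counter-clockwise.
    height:    per-building height attribute (metres).
    Returns a list of faces, each a list of (x, y, z) points.
    """
    n = len(footprint)
    faces = []
    # One wall quad per footprint edge.
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        faces.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    # Flat roof polygon at the attributed height.
    faces.append([(x, y, height) for x, y in footprint])
    return faces

# Macro attribution: the same rule scales across all buildings in a scene.
scene = [
    {"footprint": [(0, 0), (10, 0), (10, 8), (0, 8)], "height": 6.5},
    {"footprint": [(20, 0), (26, 0), (26, 6), (20, 6)], "height": 9.0},
]
models = [extrude_footprint(b["footprint"], b["height"]) for b in scene]
```

A real pipeline would substitute richer primitives (roof types, textures) for the flat extrusion, but the scaling property is the same: one rule, many buildings.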
Initial research (OTA W900KK-18-9-0002 and IARPA’s CORE3D program) demonstrates that deep learning, machine learning, and artificial intelligence can automate the detection of core building attributes from photogrammetrically derived and lidar point clouds. While these advances are critical, the next step is to convert those outputs into models. This project will add value to these prior and ongoing efforts by focusing on the final steps in the pipeline, model generation, and will help transition the technology. The approach was endorsed by Joe Hosack, Chief, ISR S&T Branch, USSOUTHCOM, for their user needs and will benefit many more in the defense community. 

PHASE I: This phase will determine the best approach for rule-based 3D building modeling and the most appropriate data model for storing, transmitting, and rendering the models.

Task 1: Investigate available procedural modeling techniques, the state of the art, and any gaps in converting attribution inputs into 3D models. This task also includes the preliminary work of evaluating any existing attribution file formats and exploring any improvements required.

Task 2: Determine whether existing data models such as CityGML and transmission formats such as X3D, 3D Tiles, and glTF will fulfill the requirements of procedural modeling from point cloud attribution files.

Task 3: Determine how/if the Open Geospatial Consortium’s 3D Portrayal Service could be leveraged to standardize the query/delivery of data. Explore available platforms for rendering the models in a web browser and identify any gaps.

All data models, platforms, and formats must adhere to open standards for consideration. 
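To make Task 1 concrete, a machine-readable attribution record of the kind under evaluation might, as one illustrative assumption, be a per-building JSON object. Every field name below is hypothetical, not drawn from an existing standard:

```json
{
  "building_id": "b-0047",
  "footprint": [[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]],
  "height_m": 6.5,
  "roof_type": "gabled",
  "wall_texture": "brick",
  "source": "lidar",
  "point_density_pts_m2": 25
}
```

Task 2 would then ask whether such attribution maps cleanly onto CityGML semantics for storage and onto X3D, 3D Tiles, or glTF for transmission and rendering.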

PHASE II: Phase II will leverage the findings of Phase I to implement the best approach. Using an existing or new machine-readable attribution file format, we will feed the procedural modeling pipeline to create hundreds of building models at the same time. The rules for each model must convert per-building attribute information, such as footprint, roof type, wall texture, windows, or doors, into a 3D environment in a browser. The pipeline must be adaptable to point clouds of varying resolution, which determines the ultimate level of detail in the model. Eventually, trees, roads, and other natural and man-made objects should be included in the attribution and procedural modeling pipeline to create a full and accurate picture of the real world. 
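The per-building rule conversion described above can be sketched as attribute-driven rule dispatch: each record's attributes select and parameterize the modeling rule applied to it. The attribute names and rule functions here are illustrative assumptions, not a fixed schema.

```python
# Illustrative sketch (hypothetical schema): the roof_type attribute picks
# which procedural rule generates roof geometry for each building record.

def flat_roof(fp, h):
    """A single horizontal polygon at the building height."""
    return [[(x, y, h) for x, y in fp]]

def shed_roof(fp, h, rise=1.5):
    """A roof plane sloping from h to h + rise across the footprint."""
    ys = [y for _, y in fp]
    y_lo = min(ys)
    span = (max(ys) - y_lo) or 1.0  # avoid division by zero
    return [[(x, y, h + rise * (y - y_lo) / span) for x, y in fp]]

# Rule table: attribute value -> procedural rule.
ROOF_RULES = {"flat": flat_roof, "shed": shed_roof}

def model_building(record):
    """Dispatch on the roof_type attribute, defaulting to a flat roof."""
    rule = ROOF_RULES[record.get("roof_type", "flat")]
    return rule(record["footprint"], record["height"])

roof = model_building(
    {"footprint": [(0, 0), (4, 0), (4, 3), (0, 3)],
     "height": 5.0, "roof_type": "shed"})
```

Extending the pipeline to trees, roads, and other objects would follow the same pattern: new attribute values mapped to new rules in the table, without changing the dispatch machinery.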

PHASE III: Beyond military use, this research has important applications in civilian building information modeling (BIM), disaster response (emergency responders), and automated mapping. 

REFERENCES: 

1: C. Becker et al, "Classification of Aerial Photogrammetric 3D Point Clouds," arXiv Preprint arXiv:1705.08374, 2017.

2: Q. Zhou and U. Neumann, "Fast and extensible building modeling from airborne LiDAR data," in Proceedings of the 16th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2008.

KEYWORDS: 3D Models, Procedural Modeling, Remote Sensing, Point Clouds, Building Classification, WebGL 

CONTACT(S): 

Christopher Clasen 

(571) 557-9364 

Christopher.C.Clasen@nga.mil 
