Company Portfolio Data
KITWARE INC
UEI: DK6LPWMS5LP5
Number of Employees: 166
HUBZone Owned: No
Woman Owned: No
Socially and Economically Disadvantaged: No
SBIR/STTR Involvement
Year of first award: 1998
Phase I Awards: 143
Phase II Awards: 91
Conversion Rate: 63.64%
Phase I Dollars: $26,458,676
Phase II Dollars: $104,801,292
Total Awarded: $131,259,968
Awards

Accelerating Design of ML and AI Experiments in Scientific Simulation
Amount: $149,274 Topic: S17
Artificial intelligence (AI) and machine learning (ML) have demonstrated incredible success across industry and scientific fields. This impact is felt across scientific simulation domains, where AI and ML techniques are being used to explore complex patterns, improve the accuracy of physics solvers, and accelerate time to insight. However, “black box” integration of AI and ML tools into simulations has yet to demonstrate significant impact, leading practitioners to develop their own implementations and integrations to satisfy their workflow needs. In this proposal, we seek to reduce the barriers to adoption of AI and ML tools for mature scientific simulation codes. In Phase I, we will focus on demonstrating an in situ ML toolbox for coupling mature simulation codes to AI and ML tools. We will leverage our team’s expertise, applying our toolbox to two approaches to computational fluid dynamics (CFD) relevant to NASA missions, large eddy simulation and Reynolds-averaged Navier-Stokes, each of which has unique requirements and methodologies for incorporating AI and ML for improved model accuracy. By exploring these two regimes of CFD, we seek to highlight the flexibility of our approach to empowering mature simulation codes with cutting-edge AI and ML tools, aiming toward other physics domains of interest to NASA in future work. Phase I funding will support this research and development effort, laying the groundwork for improved access to new AI and ML tools and improved infrastructure for simulation users to experiment with these cutting-edge tools.
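As a rough illustration of the in situ coupling pattern this abstract describes, the sketch below shows an ML closure hook called from a solver loop each timestep. All names here (TurbulenceModel, ml_closure_callback, the state fields) are hypothetical stand-ins, not the actual toolbox API.

```python
# Hypothetical sketch of an in situ ML coupling hook; names are
# illustrative, not the proposed toolbox's real API.
import numpy as np

class TurbulenceModel:
    """Stand-in for a trained ML closure model (e.g., a RANS correction)."""
    def predict(self, features: np.ndarray) -> np.ndarray:
        # A real model would be a trained network; here, a placeholder.
        return np.zeros(features.shape[0])

def ml_closure_callback(state: dict, model: TurbulenceModel) -> None:
    """Called by the solver each timestep, in situ (no file I/O)."""
    # Assemble per-cell input features from solver fields.
    features = np.column_stack([state["strain_rate"], state["wall_distance"]])
    # Predict a correction term and write it back into the solver's fields.
    state["eddy_viscosity_correction"] = model.predict(features)

# Illustrative driver loop standing in for a mature CFD code.
model = TurbulenceModel()
state = {"strain_rate": np.random.rand(1000),
         "wall_distance": np.random.rand(1000)}
for step in range(10):
    ml_closure_callback(state, model)  # in situ coupling point
    # ... solver advances the flow using the corrected fields ...
```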
Tagged as:
SBIR
Phase I
2024
NASA

SBIR 136 - KITWARE: Viime Extract: An AI-powered Knowledge Graph Extraction System for Metabolomics
Amount: $299,999 Topic: 136
Viime Extract is a web application for generating and curating knowledge graph data, focused on metabolomics and infectious disease use cases. Research papers and tabular data are fed into the system and analyzed with large language models (LLMs). The resultant JSON-LD data is matched against known ontologies (e.g., KEGG) and existing graph databases (e.g., Reactome). Extracted knowledge is also stored in a central graph database and presented to the user in an intuitive interactive visualization for validation and correction. In addition to providing a novel, integrated, easy-to-use workflow, Viime Extract incorporates AI prompt and AI model selection tools to enable customization and optimization of automated results over time.
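A minimal sketch of the extract-match-curate flow described above, assuming a generic LLM client and a toy ontology table: call_llm is a placeholder for whatever model the user selects, and KNOWN_ONTOLOGY stands in for real KEGG/Reactome lookups. This is not the Viime Extract API.

```python
# Sketch of LLM-based JSON-LD extraction plus ontology matching.
# call_llm and KNOWN_ONTOLOGY are illustrative placeholders.
import json

def call_llm(prompt: str) -> str:
    """Placeholder: a real system would call a configurable LLM here."""
    return json.dumps({"@graph": [{"@id": "_:m1", "name": "pyruvate",
                                   "@type": "Metabolite"}]})

KNOWN_ONTOLOGY = {"pyruvate": "kegg:C00022"}  # stand-in for KEGG queries

def extract_knowledge(paper_text: str) -> list[dict]:
    raw = call_llm(f"Extract metabolites as JSON-LD:\n{paper_text}")
    nodes = json.loads(raw)["@graph"]
    for node in nodes:
        # Match extracted entities against known ontology identifiers;
        # unmatched nodes would be queued for human curation in the UI.
        node["ontology_id"] = KNOWN_ONTOLOGY.get(node["name"].lower())
    return nodes

print(extract_knowledge("Glycolysis yields pyruvate..."))
```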
Tagged as:
SBIR
Phase I
2024
HHS
NIH

MARINE MINED: FEW-SHOT LEARNING FOR UNDERSEA MINE COUNTERMEASURES (MCM)
Amount: $139,999 Topic: N241-025
We present Marine Mined, a novel system for automated target recognition (ATR) that will support Naval mine countermeasures (MCM) operations through advanced few-shot learning techniques. Our multifaceted approach involves the intelligent creation of a sonar-centric foundation model using abundant unlabeled data available from multiple domains, drawing inspiration from human-like ATR capabilities in which insights from one domain are easily leveraged to aid the interpretation of another. Additionally, we will leverage insights from acoustic physics to guide our network design and training schemes by integrating alternative acoustic-centric data representations and metadata into the network, data that analysts often use to interpret sonar imagery but that current methods discard. Further, we will utilize the AirSAS system at the University of New Hampshire (UNH) to generate sonar data that addresses the data-starved nature of the problem. Finally, we introduce a continual learning scheme that retains existing target performance during the addition of new classes, eliminating the need for entire-network retraining. Demonstrated success in DARPA, ONR, Task Force Ocean, and NATO projects validates our approach to few-shot learning, which can be deployed in the Government’s GATR framework. The sonar-specific foundation models developed will be freely available, subject to government permission.
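One common way to realize the continual few-shot behavior the abstract claims is a frozen foundation-model encoder plus one prototype per class, so adding a class never retrains the network. The sketch below assumes that pattern; embed() is a stand-in for the sonar foundation model, and the data is synthetic.

```python
# Prototype-based few-shot classification over a frozen embedding;
# illustrative of the general technique, not Marine Mined's design.
import numpy as np

def embed(chips: np.ndarray) -> np.ndarray:
    """Placeholder for a frozen sonar foundation-model encoder."""
    rng = np.random.default_rng(0)           # fixed seed: consistent projection
    proj = rng.standard_normal((chips.shape[1], 16))
    return chips @ proj

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}  # class name -> mean embedding

    def add_class(self, name: str, support_chips: np.ndarray) -> None:
        # Few-shot: a handful of labeled examples defines the class.
        self.prototypes[name] = embed(support_chips).mean(axis=0)

    def classify(self, chip: np.ndarray) -> str:
        z = embed(chip[None])[0]
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(z - self.prototypes[c]))

clf = PrototypeClassifier()
clf.add_class("mine_like", np.random.rand(5, 64))  # 5 support examples
clf.add_class("clutter", np.random.rand(5, 64))    # new class, no retraining
print(clf.classify(np.random.rand(64)))
```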
Tagged as:
SBIR
Phase I
2024
DOD
NAVY

Generative Unbiased 3D Semantic Segmentation
Amount: $100,000 Topic: OSD233-001
3D models are commonly generated from both multiview satellite and FMV sources using photogrammetric methods. However, such models lack the semantic labels (e.g., segmentation of buildings, roads, vegetation, vehicles, etc.) needed for further analytics. Prior work relies largely on discriminative models to classify 3D surface points using local context, but discriminative models have difficulty learning complex relationships between objects that generalize to new domains. Recent generative models, such as Stable Diffusion, trained on a combination of imagery and language, have shown great power in modeling and generating large-scale imagery in a consistent, meaningful way. It has also been shown that generative models used in semantic segmentation of images can generalize better to new domains than discriminative models. Kitware proposes to leverage these recent findings to provide semantic segmentation of 3D surfaces via fusion of generated 2D segmentation maps. We will leverage the fact that these 3D models are derived from 2D imagery to adapt and fine-tune existing image generative models, like Stable Diffusion, to generate semantically meaningful segmentation in 2D before fusing into consistent 3D surface labels. The result will be 3D segmentation and detection results that generalize far better to new environments.
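The 2D-to-3D fusion step can be pictured as projecting each surface point into every calibrated view and taking a majority vote over the per-view labels. The sketch below assumes pinhole camera matrices and dense 2D label maps; it is a toy version of the idea, not the proposed pipeline.

```python
# Fuse per-view 2D segmentation into 3D point labels by projection + voting.
# Camera models and label maps are toy stand-ins.
import numpy as np

def project(points: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project Nx3 world points with a 3x4 camera matrix to pixel coords."""
    h = np.hstack([points, np.ones((len(points), 1))]) @ P.T
    return (h[:, :2] / h[:, 2:3]).astype(int)

def fuse_labels(points, cameras, label_maps, n_classes):
    votes = np.zeros((len(points), n_classes), dtype=int)
    for P, labels in zip(cameras, label_maps):
        px = project(points, P)
        h, w = labels.shape
        # Only count views in which the point actually lands in the image.
        visible = (px[:, 0] >= 0) & (px[:, 0] < w) & \
                  (px[:, 1] >= 0) & (px[:, 1] < h)
        cls = labels[px[visible, 1], px[visible, 0]]
        votes[np.flatnonzero(visible), cls] += 1
    return votes.argmax(axis=1)  # majority-vote label per 3D point
```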
Tagged as:
SBIR
Phase I
2024
DOD
NGA

ARC-TAK: A TAK-ML Capability for Automated Recognition and Cataloging of Threats Via Pattern of Life Analysis to Aid in Force Protection
Amount: $1,747,890 Topic: AF233-D023
Kitware, in partnership with Raytheon BBN, Lockheed Martin, Sherpa 6, and Technergetics, is proud to propose this Direct to Phase II SBIR effort for developing machine learning models for TAK-ML client and TAK-ML server enabled systems. This work builds on a rich history of developing and deploying models for customers in the DoD/IC, as well as direct development of TAK-ML, StreamlinedML, MISTK, and MDM tools such as Watchtower. Our approach, ARC-TAK, will augment security forces with the ability to catalog persons of interest, their activities, associations, and related incidents across the suite of TAK security feeds. The system will be capable of providing useful person tagging, cataloging, and alerting on an isolated ATAK device in both connected and denied environments. When full network connectivity allows for compute infrastructure, such as a TAK-ML server, we will run full-population cataloging across all surveillance camera streams and sUAS video in order to make inferences about the threat state of an entire urban population under observation, informing the end-user devices of larger patterns, associations, and activities of the persons of interest.
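One way to picture the connected-versus-denied behavior described here is a dispatch that always catalogs locally and only defers population-scale analysis when a server is reachable. The sketch below is purely illustrative of that split; the data structures and heuristics are assumptions, not the ARC-TAK design.

```python
# Illustrative connectivity-aware dispatch: on-device cataloging always,
# server-side population analysis only when reachable. Not ARC-TAK's API.
from dataclasses import dataclass, field

@dataclass
class Sighting:
    person_id: str
    activity: str
    associates: list[str] = field(default_factory=list)

def process_sighting(s: Sighting, server_reachable: bool, catalog: dict):
    # Always catalog locally so the device works in denied environments.
    catalog.setdefault(s.person_id, []).append(s)
    if server_reachable:
        # With connectivity, defer pattern-of-life analysis to a server
        # that sees all camera and sUAS feeds (placeholder).
        pass  # e.g., queue_for_server(s)
    elif len(catalog[s.person_id]) > 3:
        # Offline heuristic: alert on repeated sightings of a tagged person.
        print(f"ALERT: repeated sightings of {s.person_id}")

catalog: dict = {}
for _ in range(4):
    process_sighting(Sighting("poi-17", "loitering"), False, catalog)
```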
Tagged as:
SBIR
Phase II
2024
DOD
USAF

Mixtape Middleware for Interactive XAI with Tree-based AI Performance Evaluation
Amount: $1,149,986 Topic: A22B-T016
Kitware, in partnership with Penn State University, is proud to propose this Phase II STTR effort to create explainable AI (XAI) middleware that can support the interactive explanation and visualization of AI decision-making systems. Our approach, MIXTAPE, will create modular and extensible software interfaces and implementations of different XAI tools for evaluating the performance of AIs in Multi-Domain Operations. For explanations, we will extend the tree-based visualizations we developed in Phase I to show perceived current states and anticipated future states of units at critical decision points. We will also develop novel explanations using neurosymbolic AI, decomposing complex reasoning strategies into more interpretable states representing agent sub-goals. To create a structured process that guides humans on how to use the provided explanations, we will extend our work on After-Action Review for AI. Our proposed tool will be validated by a set of user studies, with assessment along the axes of performance, usability, and appropriateness of trust. Additional forms of explanation and geospatial visualization can also be incorporated given the flexible nature of our framework. We will test our approach on the ARL Simple Yeho environment, but it can apply to different AI agents, wargaming scenarios, and AI testbeds.
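The "modular and extensible software interfaces" claim suggests a plugin pattern: each XAI tool implements one interface and registers itself, so new explanation types slot in without changing callers. The sketch below assumes that pattern with hypothetical names (Explainer, REGISTRY, TreeStateExplainer); it is not the MIXTAPE API.

```python
# Sketch of a modular explainer interface; names are illustrative.
from abc import ABC, abstractmethod

class Explainer(ABC):
    @abstractmethod
    def explain(self, agent_state: dict) -> dict:
        """Return a renderable explanation for a decision point."""

REGISTRY: dict[str, Explainer] = {}

def register(name: str, explainer: Explainer) -> None:
    REGISTRY[name] = explainer

class TreeStateExplainer(Explainer):
    def explain(self, agent_state: dict) -> dict:
        # A tree of perceived current states and anticipated future states,
        # mirroring the tree-based visualizations described above.
        return {"type": "tree", "root": agent_state.get("unit", "?"),
                "children": agent_state.get("anticipated", [])}

register("tree", TreeStateExplainer())
print(REGISTRY["tree"].explain({"unit": "A-1", "anticipated": ["advance"]}))
```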
Tagged as:
STTR
Phase II
2024
DOD
ARMY

AI/ML for In-Situ Additive Manufacturing Defect Detection
Amount: $999,999 Topic: N222-117
Additive manufacturing (AM) increases the speed and flexibility of producing complex parts at scale. The ability for U.S. manufacturers to 3D-print advanced components in-house reduces reliance on traditional supply chains and bolsters national security readiness. However, AM-produced parts can be subject to various defects, such as lack of fusion, gas entrapment, powder agglomeration, internal cracks, and thermal stress, which can alter their mechanical properties. Post-build nondestructive inspection methodologies can be used, but they greatly increase cost and production time. The ideal solution is monitoring part quality in real time, as the part is being built, using in situ sensors. Kitware, along with Princeton University, proposes to bring the latest advances in deep neural network artificial intelligence and signal fusion to optimize and extend 3D metal additive manufacturing systems for the Navy's unique needs. Our system is a platform-independent, interactive, in-process quality assurance system that combines data collection, inspection, feedback, and critical analysis. Optimizing defect detection accuracy will improve confidence in part performance and reduce part-rejection false-alarm rates. Our proposed method builds on an existing proof of concept for in situ defect detection and extends our capabilities to cover a wider range of builds, printers, locations, and sensors.
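A minimal sketch of per-layer, in-process screening that fuses two sensor streams: here the fusion model is a stub heuristic where the trained deep network would sit, and the sensor names and threshold are assumptions for illustration.

```python
# In situ per-layer defect screening sketch; fuse_and_score stands in
# for a trained sensor-fusion network.
import numpy as np

def fuse_and_score(thermal: np.ndarray, acoustic: np.ndarray) -> np.ndarray:
    """Stand-in for a trained fusion network: per-region defect scores."""
    # Example heuristic: hot spots coinciding with acoustic anomalies.
    return (thermal / thermal.max()) * (acoustic / acoustic.max())

def monitor_layer(layer_id: int, thermal, acoustic, threshold=0.8):
    scores = fuse_and_score(thermal, acoustic)
    flagged = np.argwhere(scores > threshold)
    for y, x in flagged:
        # Real-time feedback: flag the region while the part is printing,
        # before post-build inspection would ever see it.
        print(f"layer {layer_id}: possible defect near ({y}, {x})")
    return flagged

monitor_layer(42, np.random.rand(32, 32), np.random.rand(32, 32))
```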
Tagged as:
SBIR
Phase II
2024
DOD
NAVY

Multivariate Volume Visualization and Machine-Guided Exploration in Tomviz
Amount: $199,933 Topic: C57-09a
Detailed analysis of nanoscale imaging data produced at DOE's modern X-ray facilities plays an important role in guiding future experiments and enabling scientific discoveries. However, the increasing complexity and volume of nanoscale imaging data have made this analysis more difficult and, in some situations, infeasible. Multivariate volumetric data, which contains more than one channel per voxel, has become increasingly prevalent in important industries such as battery technology and manufacturing. Due to its multidimensional nature, however, multivariate data is complex and usually requires substantial human effort to analyze. Oftentimes, each channel is visualized separately; this does not reveal the complex yet frequently important interactions between the channels. The recently published RadVolViz tool has already accomplished much in the way of multivariate volume visualization, but the impact of these techniques can be amplified by making them accessible in more tools and applications. New automated analysis tools may provide additional opportunities to facilitate multivariate data exploration and expedite the time to discovery, and new techniques that reduce the dimensional complexity of multimodal datasets could greatly simplify their visualization and streamline their analysis. In our proposed Phase I project, we will build prototypes that improve the accessibility and analysis techniques for multivariate volume visualization, which will evolve into full commercial capabilities in Phase II. We will enhance the accessibility of existing multivariate volume visualization techniques by integrating them into the widely used platforms VTK and ParaView. We will accelerate data exploration by designing new AI-guided tools that automatically analyze complex multivariate volumetric datasets, provide useful information, and generate informative visualization scenes. We will develop new techniques to simplify multimodal volume visualization for important trend comparisons. The work proposed here addresses deficiencies in current platforms, positioning us to capture a substantial share of this market; in particular, we are targeting firms in the battery, manufacturing, and engineering sectors that can benefit from the underlying technology to improve productivity.
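To make the dimensionality-reduction idea concrete, the sketch below collapses a C-channel voxel volume to three components suitable for direct RGB volume rendering. PCA is an assumption standing in for whatever reduction the Phase I work settles on; the output could then be handed to a standard VTK volume mapper.

```python
# Reduce a multivariate (multi-channel) volume to 3 components for RGB
# volume rendering. PCA is illustrative, not the proposed technique.
import numpy as np
from sklearn.decomposition import PCA

def reduce_to_rgb(volume: np.ndarray) -> np.ndarray:
    """volume: (Z, Y, X, C) multivariate data -> (Z, Y, X, 3) in [0, 1]."""
    z, y, x, c = volume.shape
    flat = volume.reshape(-1, c)            # one row of C channels per voxel
    rgb = PCA(n_components=3).fit_transform(flat)
    rgb -= rgb.min(axis=0)
    rgb /= rgb.max(axis=0)                  # normalize each component
    return rgb.reshape(z, y, x, 3)

rgb_volume = reduce_to_rgb(np.random.rand(16, 16, 16, 7))  # 7 channels in
```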
Tagged as:
SBIR
Phase I
2024
DOE

Composable Digital Twins for Science Network Infrastructures using Parallel Discrete Event Simulation
Amount: $200,000 Topic: C57-03a
Research wide area network (R-WAN) infrastructures, such as the DOE ESnet, are used to move large amounts of data between experimental facilities and data centers, which imposes challenges on the network infrastructure and requires proactive strategies for resource allocation and provisioning. Parallel discrete event simulators (PDES), such as ROSS and ns-3, can be used to study network performance and evaluate new network configurations. However, these tools are too slow for network researchers to use in a rapid research and development environment, as even a millisecond of network traffic can take hours to simulate in high fidelity. We propose to develop a digital twin framework built on top of a PDES network simulator that will be a safe and cost-effective tool for network researchers as well as network providers and maintainers. We will design the framework to be easy to use without requiring knowledge of PDES. To speed up the network simulator, we will follow a hybrid modeling approach that combines high-fidelity PDES models with coarser-grained machine learning (ML) models to fast-forward through parts of the simulation. Our digital twin framework will be built on top of the ROSS simulator. The framework comprises 1) a Component Model Library that contains ML models of network components such as routers, and 2) an Orchestrator that composes PDES models with ML models from the library based on the network configuration provided by the user. For Phase I, we will focus on training models for the components of the ESnet 400G testbed, but the library can be extended as necessary to model other networks. We will also create a visualization dashboard that can be used to analyze the simulated network performance, as well as the performance of the underlying network simulator, which will be useful in ensuring the digital twin meets performance requirements. Finally, we will evaluate our digital twin with an application middleware experiment that we can also perform on the physical ESnet testbed. After Phase II, we plan to expand beyond research network providers such as the DOE to cloud service providers. Cloud service providers support applications, such as streaming analytics services, that require high throughput and low latency. These applications are used to make decisions from real-time information, so increased latency can have financial consequences for the users of cloud services. As an established open-source technical computing software company, Kitware, Inc. is deeply familiar with the needs of commercial scientific software and how to fulfill those needs within a profitable and sustainable business model. In addition, the team from Kitware, along with our collaborators at Rensselaer Polytechnic Institute and Argonne National Laboratory, has extensive collective experience in network simulation and PDES, making us uniquely suited to develop and commercialize the digital twin infrastructure.
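The hybrid fast-forwarding idea can be sketched as a loop that runs the discrete event simulator in high fidelity but hands steady phases to an ML surrogate that jumps many steps at once. Both step functions below are placeholders, not ROSS or ns-3 APIs, and the switching heuristic is an assumption.

```python
# Hybrid PDES + ML-surrogate loop sketch; step/jump functions and the
# switching rule are illustrative placeholders.
import random

def pdes_step(state: dict) -> dict:
    """Placeholder for one high-fidelity discrete event window."""
    state["t"] += 1
    state["queue"] = max(0, state["queue"] + random.randint(-2, 2))
    return state

def surrogate_jump(state: dict, horizon: int) -> dict:
    """Placeholder ML model predicting the state `horizon` steps ahead."""
    state["t"] += horizon
    return state

state = {"t": 0, "queue": 5}
while state["t"] < 1000:
    if state["queue"] < 3:
        # Low activity: let the learned model skip ahead cheaply.
        state = surrogate_jump(state, horizon=50)
    else:
        # Bursty traffic: fall back to exact discrete event simulation.
        state = pdes_step(state)
print(state)
```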
Tagged as:
SBIR
Phase I
2024
DOE

Obstacle TeleSculptor: Mapping Battlespace Obstacles in 3D
Amount: $1,999,207 Topic: A19-118
There are many challenges in safely maneuvering combat vehicles through a battlespace. Path planning requires detailed knowledge of the terrain and any physical barriers as well as the location of dangers, such as landmines or enemy positions. Small UAVs are now convenient for battlespace reconnaissance using visible and thermal cameras, but the raw imagery needs to be quickly converted to 3D environment maps showing physical obstacles and threats. Kitware proposes Obstacle TeleSculptor to address this need. Obstacle TeleSculptor extends our existing open source TeleSculptor application for 3D scene modeling from UAV-collected imagery. In addition to reconstructing the environment, Obstacle TeleSculptor will map observations from multiple views and multiple sensors onto the 3D surface to enable detection and classification of threats such as landmines or enemy vehicles that should be avoided. The system will use deep networks to learn features that optimize these detection results using all available observations. It will also be open and configurable to allow reuse of the accumulated features for other applications. The combined map of 3D terrain and detected obstacles from timely UAV imagery will provide the critical intelligence needed to make informed decisions about battlespace navigation.
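Mapping detections from multiple views and sensors onto a 3D surface is often done by accumulating evidence per surface cell; the sketch below uses log-odds fusion so repeated weak detections from different views reinforce. This is an illustration of the general multi-view fusion technique under assumed inputs, not the TeleSculptor implementation.

```python
# Fuse per-view threat detection probabilities onto 3D surface cells
# via log-odds accumulation. Inputs and structure are illustrative.
import numpy as np

def logit(p: np.ndarray) -> np.ndarray:
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def fuse_views(n_cells: int, observations) -> np.ndarray:
    """observations: iterable of (cell_indices, detection_probs) per view."""
    log_odds = np.zeros(n_cells)
    for cells, probs in observations:
        # Each view contributes independent evidence only to the surface
        # cells it actually sees (visible/projected cells).
        log_odds[cells] += logit(probs)
    return 1 / (1 + np.exp(-log_odds))  # fused per-cell threat probability

obs = [(np.array([3, 7]), np.array([0.6, 0.9])),   # visible-band view
       (np.array([7, 8]), np.array([0.7, 0.2]))]   # thermal view, new angle
print(fuse_views(10, obs))
```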
Tagged as:
SBIR
Phase II
2024
DOD
ARMY