Department of Defense
November 20, 2013
Defense Health Program
SBIR / 2014
January 22, 2014
NOTE: The Solicitations and topics listed on this site are copies from the various SBIR agency solicitations and are not necessarily the latest and most up-to-date. For this reason, you should use the agency link listed below which will take you directly to the appropriate agency server where you can read the official version of this solicitation and download the appropriate forms and rules.
The official link for this solicitation is: http://www.acq.osd.mil/osbp/sbir/solicitations/index.shtml
OBJECTIVE: Seek methodologies and emerging technologies to reduce the burden on the military's tactical networks derived from the transmission of digital medical imagery.

DESCRIPTION: Recent military conflicts and humanitarian relief operations have placed new demands on healthcare providers in terms of providing medical diagnoses for injuries from remote locations as part of an ever-expanding telemedicine program supporting our Operational Medicine missions. Additionally, as new digital modalities are added to the force structure supporting medical Roles of Care 1-3, the medical impact on the tactical network architecture and throughput has called into question its ability to support a modern deployed healthcare system that is heavily reliant on digital imagery and telemedicine technologies. While the military's communication communities and associated acquisition programs struggle to keep up with demand for bandwidth support to the Operational Force, the military medical communities must place equal emphasis on reducing their throughput requirements in order to ensure a robust and successful telemedicine program. Imaging technologies such as CT, US, and XRAY can enable clinical decision making without unnecessary medical evacuation to higher Roles of Care, where a broader range of in-person expertise will be available. Telemedicine and teleconsultation face several challenges in the Operational Medicine environment. The first challenge lies in the finite limitations of the tactical networks supporting Roles of Care 1-3 and the competition for those networks by numerous bandwidth-hungry military communities. Digital medical image file sizes can be relatively large in their current state of commercial availability and place burdens on the tactical networks that have the potential of putting other military communities at risk of mission failure.
This fact reduces the warfighter's prioritization of medical data transmissions and jeopardizes the medical community's ability to guarantee a robust telemedicine capability. Since the tactical networks are a shared resource amongst all Operational military communities, any effort to reduce the medical burden on those networks will subsequently increase the likelihood of a successful telemedicine program. Therefore, the major goal of this SBIR is to research ways to reduce the medical burden on the military's tactical networks through the variety of means outlined earlier. The second challenge is that the military's tactical networks are a compilation of various technologies, ranging from data-sharing tactical radios to satellite transmissions, depending on where the medical unit is arrayed on the battlefield. Beyond the varying network modalities across the Operational space, networks can be further divided between high-side (secure) and low-side (unsecure) transmissions. All of these factors impede a robust telemedicine capability and must be accounted for in this research proposal. Proposed methods, network policies, and technological solutions must be able to work with multiple vendors of existing image management products. Any new algorithms for improved compression of digital images should improve upon the current state of the art for lossy and lossless technologies, which, depending upon the type of image, is currently about 30:1 for some image file formats. New compression algorithms must still ensure diagnostic image quality. The government is particularly interested in new entropic compression algorithms. Additionally, solutions should provide for secure and/or anonymized data transfer of medical images.
Lastly, solutions should support best medical practices for military Operational Medicine across a variety of missions including full-spectrum, sustainment, low intensity conflict (LIC), and humanitarian relief. Applicants should be knowledgeable about existing Digital Imaging and Communications in Medicine (DICOM) standards (http://medical.nema.org/) and related efforts to improve image management processes and exchange, such as those promoted by the Health Information Management Systems Society (HIMSS) Integrated Health Environment (IHE) Profiles (http://www.ihe.net/). Likewise, the American College of Radiology is engaged in clinical research (http://www.acr.org/Research/ClinicalResearch). Applicants should also consider past work on image management use cases set forth by the Healthcare IT Standards Panel (HITSP) work initiated by the Department of Health and Human Services (DHHS) Office of the National Coordinator for Healthcare Information Technology (C-41, IS-107, CAP-00, CAP-129, TP-49, TP-89, TN-905, RDSS-59, among others). Additionally, the awardees should consider the applicability of the NwHIN (http://www.healthit.gov/policy-researchers-implementers/nationwide-health-information-network-nwhin) Standards and Interoperability Framework (http://www.siframework.org/implementation.html), NwHIN CONNECT (http://www.connectopensource.org/), and DIRECT (http://wiki.directproject.org/) in designing the architecture. These constructs are now being operationalized through a new public/private consortium known as Healtheway, Inc. (http://healthewayinc.org/). The government realizes that this research may invent novel concepts and technologies not yet considered in existing standards. Given this potentiality, U.S. Army TATRC will make a best effort, with the concurrence of the offeror, to introduce the new technology to existing standards groups for evaluation and possible adoption as a new standard.
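As a rough illustration of the entropy coding mentioned above (a minimal sketch, not part of the solicitation): the Shannon entropy of an image's byte histogram lower-bounds the bits per symbol any zeroth-order lossless entropy coder can emit, which in turn bounds the attainable lossless compression ratio. The function names here are invented for the example.

```python
import math
from collections import Counter

def shannon_entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte histogram, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def max_lossless_ratio(data: bytes) -> float:
    """Upper bound on the compression ratio achievable by a zeroth-order
    entropy coder: 8 raw bits per byte divided by the entropy."""
    h = shannon_entropy_bits_per_byte(data)
    return 8.0 / h if h > 0 else float("inf")

# A highly repetitive "image" (mostly background) is far more compressible
# than noise-like data, which is why achievable ratios vary by image type.
flat = bytes([0] * 900 + [255] * 100)
print(round(max_lossless_ratio(flat), 1))   # well above 10:1 for this data
```

This bound explains why a single fixed ratio such as 30:1 cannot hold losslessly for all modalities; high-entropy images (e.g., noisy ultrasound) sit much closer to 1:1 unless lossy techniques are accepted.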
PHASE I: Phase I proposals should develop a conceptual framework of new methods or technologies aimed at reducing the medical impact on military tactical networks. New methods or technologies must enable the remote diagnosis of visual, CT, US, and XRAY images. Other clinical imaging modalities are desirable, but not necessary at this time. New methods or technologies are expected to be compatible with existing vendor image hardware solutions, which typically require use of DICOM standards. Proposed solutions must address the following key technical features: (1) functional data compression technology, ideally greater than 30:1 for all modalities and image types, while retaining diagnostic quality; (2) a strategy for secure and/or anonymized data transfer of medical images; and (3) a software routine and strategy by which telemedicine/teleconsultation can be performed using portable devices.

PHASE II: Phase II efforts would be directed at taking the conceptual framework developed in Phase I and developing a prototype technology solution that combines all of the desired features from Phase I. Phase II applicants must demonstrate that the technology can offer high levels of sensitivity and specificity for militarily relevant injuries as it pertains to (1) data compression and (2) telemedicine and teleconsultation on portable devices. Offerors must also conduct robust testing to demonstrate interoperability with hardware imaging solutions from multiple vendors, using either a real physical environment or virtual simulation. Offerors submitting Phase II proposals must demonstrate how they will obtain FDA approvals ahead of commercialization of the technology, and should be cognizant of the new FDA regulations on mobile medical devices. It is anticipated that human data will be used for validation, and members of the radiological community will participate as part of the Phase II team.
PHASE III DUAL USE APPLICATIONS: Medicine in austere environments occurs beyond the battlefield. New methods and technologies would benefit goodwill and humanitarian disaster-relief medical efforts that require state-of-the-art telemedicine capabilities. Equally important, such technologies could enhance the capabilities of first responders, who would be able to interact with physicians or other medical experts at the scene of the incident, before arrival at the point of care.
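One component of the "secure and/or anonymized data transfer" requirement above can be sketched as tag-level de-identification. The snippet below is a minimal illustration over a plain dict, not a full DICOM implementation; the tag list is a small subset of the direct identifiers enumerated in the DICOM PS3.15 de-identification profile, and the function names are invented for the example.

```python
# Minimal de-identification sketch over DICOM-style attributes held in a
# dict. A real implementation would use a DICOM toolkit and the complete
# PS3.15 confidentiality profile; this identifier list is abbreviated.
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
            "PatientAddress", "OtherPatientIDs"}

def anonymize(record: dict) -> dict:
    """Return a copy with direct identifiers blanked, leaving
    clinically relevant attributes (modality, body part) intact."""
    return {tag: ("ANONYMIZED" if tag in PHI_TAGS else value)
            for tag, value in record.items()}

study = {"PatientName": "DOE^JOHN", "PatientID": "12345",
         "Modality": "CT", "BodyPartExamined": "HEAD"}
clean = anonymize(study)
print(clean["PatientName"], clean["Modality"])  # ANONYMIZED CT
```

Note that, as the second topic below discusses at length, tag-level scrubbing alone does not guarantee de-identification of free-text narrative fields.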
OBJECTIVE: As a first objective, conduct basic and applied research surrounding new technologies to computer-generate completely synthetic, complex, medical text narratives for subsequent use in clinical informatics research and healthcare information technology feasibility studies. As a second objective, conduct basic and applied research surrounding new technologies to computer-generate completely synthetic medical images for subsequent use in clinical informatics research and healthcare information technology feasibility studies. If the research is successful, computer-generated synthetic medical text and images could then be made available to the government and/or other private researchers through commercial or open source licensing agreements.

DESCRIPTION: What is the problem? Medical research studies, particularly those involving medical informatics or health care information technology evaluations, are hampered by the lack of available, timely, high quality structured, unstructured, and imaging data. Similar research studies conducted by academia and private industry labs face the same challenge. Acquiring such data from Military Health System Automated Systems of Record, Commercial Electronic Health Record (EHR), or Clinical/Business Intelligence systems is a lengthy process. It typically requires: (1) obtaining permission from research subjects, (2) submission of detailed Human Subjects Research Protection protocols, (3) first- and second-level Institutional Review Board (IRB) approval, (4) obtaining Data Use Agreements, (5) Privacy Board approval, and (6) hiring experts to pull and de-identify or anonymize electronic health records data. This process can typically take up to a year to complete. Even if real data became available in a timely fashion, there is always the chance that it would not be completely de-identified or anonymized.
This is particularly true in the case of complex patient histories containing medical text narratives that may mention familial relationships or other unique facts that could be pieced together to identify the patient. De-identification and/or anonymization is typically conducted through computer algorithms that are not 100 percent trustworthy. Even in the case of additional human review after computer-assisted de-identification or anonymization, complete de-identification and/or anonymization cannot be guaranteed. Human review is typically limited by the resources available to read large amounts of patient histories, and is generally not available for research data sets involving thousands of records. These constraints pose additional challenges and risks involving protecting patient privacy as required by HIPAA and Human Subjects Review. Data breaches can lead to patient medical and financial identity theft, public relations concerns, and subsequent class-action lawsuits. Obtaining imaging data is also difficult. As one researcher indicates, "The collection of medical image data for research can be an expensive time consuming task. Positron emission tomography (PET), x-ray computed tomography (CT), and magnetic resonance imaging (MRI) systems can easily cost over a million dollars. They may require dedicated staff, maintenance contracts, and access to expensive supporting equipment such as a cyclotron. In addition, collection of data for large studies may take months. The process is complicated by equipment schedules, organization of volunteers/subjects, use of potentially harmful electromagnetic radiation, radiopharmaceuticals, and contrast agents, as well as patient privacy rights. These difficulties limit the availability of clinical data, especially for smaller academic research programs.
Creating software models of the human anatomy and imaging systems, and modeling the medical physics of the imaging acquisition process can provide a means to generate realistic synthetic data sets. In many cases synthetic data sets can be used, reducing the time and cost of collecting real images, and making data sets available to institutions without clinical imaging systems." In the envisioned ideal state, medical research studies could be expedited if synthetic medical text narrative data and images could be computer-generated. Having synthetic data would forgo the need for IRB and DUA approvals because the data would be completely made up and have absolutely no ties to any real patient records. Furthermore, there would be no chance of HIPAA violations, patient medical or financial identity theft, public relations concerns, or class-action lawsuits. Why is it hard? Automatically generating synthetic health data is not trivial. Healthcare data is complex, and synthetically generating its different types may require different technologies arising from artificial intelligence work.
For example, semi-structured or free-text clinical narratives can contain a clinician's free-text or semi-structured assessment of the patient's history, probable diagnosis, and recommended procedures: radiology text reports, pathology text reports, operation text reports, discharge narrative summaries, and medical boards and disability profiles. Such unstructured and semi-structured data could contain: complex demographic information regarding the patient and those with whom he/she interacts; multiple diagnoses or problems and their progression over time; patient and family proteomic and genomic history; patient exposures to major trauma and related procedures; patient environmental or toxin exposures; patient nutrition, exercise, and sleep data; and key social and life events over time that impact health status. Imaging data is also complex and may include radiographs, MRI studies, CT scans, cardiology, nuclear medicine, and ultrasound studies, to name a few. In order to use such synthetically generated data in subsequent clinical research studies or healthcare IT proof-of-concept evaluations, it must be valid, reliable, complete, timely, and clinically relevant over the life of a patient. Determining whether the synthetically generated data is of sufficient quality for use in subsequent clinical research requires extensive human testing and review. In lieu of obtaining real data for validation of the synthetic data, the synthetically generated computer data would be validated solely by human subject matter expertise and knowledge of the real data. The research use case for the data would dictate what quality of synthetic data would be acceptable. For example, basic healthcare IT system functional testing and feasibility studies may be validated with lesser quality synthetic data than would be required for validating population health studies, which would require very high quality synthetic data.
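As a toy illustration of the narrative-generation problem described above, a synthetic radiology-style note can be assembled from structured fields by templating. This is far simpler than the AI techniques the topic envisions, and every vocabulary item and name below is invented for the example rather than drawn from any real data.

```python
import random

# Invented, illustrative vocabularies -- not drawn from any real records.
MODALITIES = ["chest radiograph", "CT of the head", "abdominal ultrasound"]
FINDINGS = ["no acute abnormality", "a small pleural effusion",
            "mild cardiomegaly"]

def synthetic_report(rng: random.Random) -> str:
    """Generate one template-based synthetic narrative sentence."""
    modality = rng.choice(MODALITIES)
    finding = rng.choice(FINDINGS)
    return (f"EXAM: {modality.capitalize()}. "
            f"IMPRESSION: The study demonstrates {finding}.")

rng = random.Random(0)        # seeded so runs are reproducible
print(synthetic_report(rng))
```

The gap the topic describes is exactly the distance between such templates and clinically plausible, longitudinally consistent narratives: templates carry no patient history, no disease progression, and no discourse structure.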
There would be additional challenges to ingesting such synthetic data into healthcare information technology development and testing environments, as government and commercial EHRs use different underlying logical and physical data models. Ideally, synthetically generated data should be in a format capable of ingestion into an electronic health record, and should consider the various standard component formats (C.XX) specified by the Healthcare Information Technology Standards Panel (HITSP), the Continuity of Care Record (CCR) and Clinical Document Architecture (CDA) standards, and even newer efforts such as HL7 FHIR or the OMG Clinical Information Modeling Initiative backed by Mayo Clinic and Intermountain Healthcare. DICOM standards would apply to imaging studies. Research in this field is somewhat hampered by the limited number of qualified artificial intelligence researchers available, although there is promising research underway with respect to generating synthetic medical data. How is it solved today? There are very few open-source or commercial synthetic medical data sets available today for use in clinical research. Available synthetic data sets are limited to very specific purposes. Most importantly, there are no known large, general purpose, complete sets of synthetic medical narrative text and imaging data that can be incorporated into EHRs for use in clinical informatics studies or healthcare information technology feasibility studies, although some research has started in this regard. NIH did release an initial Observational Medical Dataset Simulator (OSIM1) in 2009, which was used to generate datasets with millions of hypothetical patients with drug exposure, background conditions, and known adverse events for the purpose of benchmarking methods performance. Continued research has resulted in the development of a second-generation simulated dataset procedure, known as OSIM2.
OSIM2 represents an alternative design to accommodate additional complexities observed in real-world data, including advanced modeling of the correlations between drugs and conditions. OSIM2 allows for more direct comparisons between simulated data and real observational databases, and should enable greater methods evaluation by allowing assessment of how methods accommodate these complex interrelationships. Anna L. Buczak, Steven Babin, and Linda Moniz report in BioMed Central a novel methodology for "generating complete synthetic EMRs both for an outbreak illness of interest (tularemia) and for background records. The method developed has three major steps: 1) synthetic patient identity and basic information generation; 2) identification of care patterns that the synthetic patients would receive based on the information present in real EMR data for similar health problems; 3) adaptation of these care patterns to the synthetic patient population." The research did generate EMRs, including visit records, clinical activity, laboratory orders/results, and radiology orders/results, but it was limited to only 203 synthetic tularemia outbreak patients. Lombardo and Moniz report on a similar method for generating synthetic data to study disease outbreaks. The Partners Healthcare i2b2 challenge has made available several public data sets for natural language processing studies. The U.S. Army Medical Research and Materiel Command, Telemedicine and Advanced Technology Research Center (TATRC), through its contracted partners, is generating some structured electronic health record data such as patient demographics; lab, pharmacy, and radiology orders and results; and disease and procedure codes.
This research effort is also auto-generating very simple patient histories consisting of several lines of text noting that a patient may have presented for a clinical encounter or admission, or has a history of a particular disease or procedure, but these short narratives are insufficient for natural language processing studies and other clinical informatics studies. The government is aware of ongoing efforts to synthetically generate business intelligence stories and sports stories. Other research efforts are underway in the Netherlands to create compelling, entertaining, and believable stories by focusing on plot creation, discourse generation, and spoken language presentation. There are numerous web sites emerging which can generate children's stories. The underlying technologies might be applied to generating free or semi-structured medical text. The exact technologies employed are unknown and may be proprietary, but most likely center around the use of Latent Semantic Analysis to generate synthetic text narrative data. Some research is underway to apply computer graphics technologies to create entertainment movies and even virtual worlds, which might now be applied to generating synthetic medical images. Some research involving synthetic medical image generation is underway by academic institutions and companies in the Rochester, NY area, which is a major worldwide center of imaging excellence. The exact techniques employed by these efforts are unknown, although some research may be building upon the generation of synthetic data involving earth geographic sensor data. Proposed Solution Summary: The need for synthetic data for use in clinical research and healthcare IT feasibility studies is well documented. This SBIR topic is intended to build upon the aforementioned research efforts to generate complex, story-like, unstructured or semi-structured electronic healthcare data for use in medical research.
In addition, the research will develop toolsets to generate synthetic digital images (XRAY, CT, MRI, Ultrasound, Pathology, and Dermatology) using computer-generated imagery. The government is interested in evaluating innovative proposals outlining various approaches to generating such synthetic imaging and complex text narratives, which either build upon existing approaches or are entirely new approaches. Such synthetic data must be generated in a way that does not rely on any form of past real, de-identified, or anonymized patient medical data, and should be derived "ab initio". The research should also include novel methods to compare the validity of the synthetic data to real data, for use in clinical informatics research and health information technology feasibility studies. Boundaries to Consider: The quality of the synthetic data to be generated will be dictated by the current clinical informatics research and healthcare information technology feasibility studies underway, some of which are known, and some of which will be determined by higher authority closer to award. The government will negotiate this aspect with the vendor. Synthetically generated data should be in a format capable of ingestion into an electronic health record, and should consider the various standard component formats (C.XX) specified by the HHS Office of the National Coordinator for Healthcare Information Technology, the past work of the Healthcare Information Technology Standards Panel (HITSP), the Continuity of Care Record (CCR) and Clinical Document Architecture (CDA) standards, and the associated HL7 FHIR and OMG Clinical Information Modeling Initiatives. DICOM standards apply to imaging studies.
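One simple instance of comparing the validity of synthetic data to real data is a distributional check on a numeric field such as age or a lab value. The two-sample Kolmogorov-Smirnov statistic below is one standard such measure; this is a minimal hand-rolled sketch with invented sample values, and production work would use a statistics library and many more checks than a single univariate comparison.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs (0 = identical shapes, 1 = disjoint ranges)."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_xs, x):
        # Fraction of values in sorted_xs that are <= x.
        return bisect.bisect_right(sorted_xs, x) / len(sorted_xs)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))

real      = [34, 45, 29, 61, 50, 38, 47, 55]   # e.g. ages in a real cohort
synthetic = [33, 46, 30, 60, 52, 37, 48, 54]   # a well-matched synthetic set
print(ks_statistic(real, synthetic))           # 0.125 -- close agreement
```

A small statistic indicates the synthetic field's distribution tracks the real one; the harder open problem the topic names is validating joint and longitudinal structure, not single marginals.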
Once generated based on certain characteristics, the applied research should aim to find ways to effectively ingest such data into existing healthcare information technology evaluation platforms, such as the TATRC Early Stage Platform, for subsequent use in healthcare IT prototype and risk reduction activities, under an appropriate licensing agreement. As a matter of background, the TATRC Early Stage Platform provides a virtualized development and test environment containing current DOD electronic health record components (AHLTA, CHCS, and Essentris in the future), which can then be used to support third-party application development and other government-funded clinical informatics research projects awarded to other government research recipients.

PHASE I: In Phase I, the offeror will outline the various technical approaches which exist for developing high quality synthetic medical images and complex narrative texts, which can then be used in clinical informatics research and health information technology studies. During Phase I, the vendor will work with the government to understand the various clinical informatics and health IT research underway for which the synthetic data is to be used. This will guide further decisions regarding the quality of the data to be generated. Phase I will be largely centered on whether it is even technically feasible to generate the complex narrative medical text necessary for use in clinical informatics research, given the current state of artificial intelligence research. At the end of Phase I, the government will need to get a sense of whether it should even proceed with Phase II. Therefore, it is important that the offeror provide a final report that presents the relative advantages, disadvantages, and tradeoffs of each technical approach for generating synthetic data, how each approach builds upon past knowledge concerning synthetic generation of data, and how it addresses current gaps.
The offeror may propose how it would improve upon existing approaches, or develop entirely new approaches, to generate synthetic medical images and complex narrative medical text. It is also important to note that Phase I will extend beyond this tradeoff analysis. The offeror will also work with the government to develop use cases and configuration parameters surrounding the generation of synthetic data, which can demonstrate the feasibility of commercializing such technology and/or applying it in military medical research settings. As part of Phase I, the offeror will develop evaluation criteria to judge the quality of the imaging and narrative free or semi-structured text data for use in research, based on the use cases envisioned. Research would also center on gaining an understanding of the complexities of generating such data for potential ingestion into the U.S. Army Telemedicine and Advanced Technology Research Center Early Stage Platform, or another similar government lab platform, which will include the DOD electronic health record (currently AHLTA, CHCS, and Essentris), as well as commercial and open source electronic health records that the government may be considering for acquisition.

PHASE II: The Phase II effort is dedicated to creating a new toolset, or configuring an existing toolset, to generate approximately 10,000 medical images and complex medical narrative texts that can be ingested into the TATRC Early Stage Platform (ESP) or otherwise made available to TATRC-funded research partners. The vendor should propose whether it will make these data sets available under an open source or commercial licensing arrangement, and at what cost. Phase II will include a qualitative and quantitative comparative analysis of the quality of the synthetic data set as judged by government subject matter experts.
During Phase II, TATRC will work with the awardees to introduce their technology to particular functional sponsors who might make use of the data under the proposed open source or commercial license. This would provide a potential Phase III technology transition route into a military medical acquisition office, perhaps using a Military Health System or DOD-wide enterprise licensing agreement, but it is not guaranteed that this would occur. For those vendors considering releasing the toolsets or synthetically generated data sets to the open source community, TATRC will introduce the awardees to OSEHRA.

PHASE III DUAL USE APPLICATIONS: Phase III efforts would be aimed at bringing the development of synthetic medical images and narrative medical texts to a state where they represent the equivalent of real data and could be used reliably to conduct medical informatics research. At the end of Phase III, the SBIR recipient may be able to continue to license the tool sets and generated synthetic data to JPC-1, TATRC, or other military medical customers and TATRC-specified partners for a particular time period. Outside of the military medical world, the offeror might be able to license the data to commercial vendors of electronic health records, health data warehouses, or health information exchanges for use in furthering development of these technologies. As one example, the Veterans Administration and/or HHS might also be interested in the use of such synthetic medical data sets in their research programs. Offerors should note that SBIR Phase III refers to work that derives from, extends, or logically concludes effort(s) performed under prior SBIR funding agreements, but is funded by sources other than the SBIR Program. Phase III work is typically oriented towards technology transition to Acquisition Programs of Record and/or commercialization of SBIR research or technology.
In Phase III, the small business is expected to obtain funding from non-SBIR government sources and/or the private sector to develop or transition the prototype into a viable product or service for sale in the military or private sector markets.
OBJECTIVE: Design, develop, and deploy a mobile application which provides sleep hygiene training, feedback, and cueing to improve sleep quantity and quality.

DESCRIPTION: The U.S. Army Surgeon General has defined the Performance Triad as a component to improve the readiness and resiliency of U.S. Army personnel (Bermudez, 2013; USAPHC Public Affairs Office, 2012). The components of the Triad are Activity, Nutrition, and Sleep (ANS). A service member's health status can affect his/her function and quality of life, and is critical to force readiness. Wellness tracking and feedback may provide a means to incentivize and encourage health-promoting activities in Soldiers, civilians, and families. Mobile platforms (e.g., smartphones, tablets) and their software apps are in widespread use across military and civilian populations; health and wellness applications are abundant. The availability and acceptance of these technologies presents an opportunity for the development of a mobile application that gives service members access to a personalized tool for promoting lifestyle changes that can lead to improved health among Army team members. The objective of this SBIR is to solicit concepts for the design, development, and deployment of a mobile application to investigate the efficacy of feedback and awareness in improving sleep quality. Challenges that need to be addressed are: 1) acquiring accurate sleep metrics unobtrusively over longer periods of time (weeks to months); 2) analyzing those metrics to identify the best behavioral targets for sleep hygiene improvement in a given individual; and 3) implementing an effective methodology (e.g., gamification, social/peer engagement, reinforcement) for inducing positive behavioral change.
The app should track relevant data and provide personalized feedback to address the following sleep hygiene training rules: 1) Stick to a Consistent Wake-Up and Bedtime Every Day of the Week; 2) Use the Bedroom Only for Sleep and Sex; 3) Resolve Daily Dilemmas Outside of the Bedroom; 4) Establish a Bedtime Routine; 5) Establish an Aerobic Exercise Routine and Stick To It; 6) Create a Quiet and Comfortable Sleep Environment; 7) Don't Be a Clock Watcher; 8) Don't Consume Caffeine Within 4 Hours of Bedtime; 9) Don't Use Alcohol as a Sleep Aid; 10) Don't Take Naps During the Day (If You Have Trouble Sleeping at Night); 11) Don't Smoke Cigarettes Immediately Before Bedtime; and 12) Get Out of Bed and Go to Another Room If Sleep Does Not Come in 30 Minutes (Caldwell & Caldwell, 2003; Dunbar, 2013). How sleep metrics and these rules are leveraged into positive improvement in sleep hygiene should be a significant focus of the proposed project.

PHASE I: Conceptualize, design, and build a solution for a mobile application which allows for tracking of a person's sleep habits and data, as well as cues and feedback to encourage behavior modification to improve sleep quality. Required Phase I deliverables will include: a research design for validating the app; a prototype with limited testing demonstrating proof of concept; research plans for validation testing; and a commercialization strategy, including regulatory plans if necessary. The solution should be developed on a platform that is compliant with regulations around health care data security and encryption, and the proposal should have a plan to address any FDA issues that the proposed solution could engender (USDHHS, 2013). Literature and market review should be done as part of the proposal background information and not as a task to be executed during the Phase I period.
Applications should clearly describe a specific proposed solution; conceptual development of the solution should be performed during the proposal writing process and not as part of Phase I tasks. Although it is anticipated that in vitro testing and consultation with subject matter experts will occur, no formal human use testing should be proposed or executed during this 6-month Phase I period, due to the requirement for second-level DoD (Department of Defense) review, which generally adds more time beyond the 6-month Phase I period. PHASE II: Complete a fully functional, deployable prototype of the components demonstrated in Phase I. Assess the proposed solution in controlled clinical field trials. The trial should be designed to assess the benefit of sleep hygiene training and the application on sleep quality. Applicants are encouraged to seek samples with high military relevance or translation potential. PHASE III DUAL USE APPLICATIONS: Phase III efforts should be focused towards technology transition, preferably commercialization of SBIR research and development. This should include assisting the military in transitioning the technology to widespread deployment and use, as well as plans to secure funding from non-STTR/SBIR government sources and/or the private sector to develop or transition the prototype into a viable product for sale in the public and/or private sector markets.
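The rule-based tracking and feedback this topic describes could, for illustration, be reduced to simple checks over logged sleep data. The sketch below is a hypothetical example, not part of the solicitation: the field names, the 30-minute wake-time tolerance, and the choice of rules 1 and 8 are all assumptions.

```python
from datetime import datetime, timedelta
from statistics import pstdev

def wake_time_spread_minutes(wake_times):
    """Spread (population std dev) of wake times, in minutes past midnight."""
    minutes = [t.hour * 60 + t.minute for t in wake_times]
    return pstdev(minutes)

def check_sleep_hygiene(log):
    """Return feedback cues for rules 1 (consistent wake-up) and 8 (caffeine)."""
    cues = []
    # Assumed tolerance: flag if wake-up times vary by more than 30 minutes.
    if wake_time_spread_minutes(log["wake_times"]) > 30:
        cues.append("Rule 1: wake-up time varies by more than 30 minutes")
    for caffeine, bedtime in zip(log["caffeine_times"], log["bedtimes"]):
        if bedtime - caffeine < timedelta(hours=4):
            cues.append("Rule 8: caffeine within 4 hours of bedtime")
    return cues

# Hypothetical three-day log: inconsistent wake-ups and late caffeine.
log = {
    "wake_times": [
        datetime(2014, 1, 1, 6, 0),
        datetime(2014, 1, 2, 7, 30),
        datetime(2014, 1, 3, 6, 15),
    ],
    "bedtimes": [datetime(2014, 1, 1, 22, 0)],
    "caffeine_times": [datetime(2014, 1, 1, 20, 0)],
}
print(check_sleep_hygiene(log))
```

A deployed app would of course derive such metrics from unobtrusive sensing rather than manual logs, per challenge 1) above; the point here is only how logged metrics map onto rule-specific cues.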
OBJECTIVE: Develop a commercial, off-the-shelf test for daily assessment of an individual's biochemical indicators of weight loss or gain potential before the weight change is observable (as measured on a scale in pounds). DESCRIPTION: Problem: Obesity in soldiers impacts the operational readiness of personnel, increases health care costs for treating obesity in active duty and retired soldiers, increases manpower costs to recruit and train personnel to replace soldiers who are discharged because of weight management issues, and increases the emotional and mental stress to soldiers struggling to manage their weight. According to a study performed by the Armed Forces Health Surveillance Center, between 1998 and 2010 the number of soldiers deemed overweight more than tripled. In 2010, 86,183 troops, or 5.3 percent of the force, received at least one clinical diagnosis of obesity. As such, the military is now re-examining its training programs and is driving commanders to discharge soldiers determined unfit to fight due to obesity. Fifteen times more troops were discharged from the US Army this year due to obesity than five years prior. With scores of recruits unfit to serve due to the extra pounds, the country's military leaders have deemed it a national security concern. "A healthy and fit force is essential to national security," Cmdr. Leslie Hull-Ryde, a Pentagon spokeswoman, told the Post. "Our service members must be physically prepared to deploy on a moment's notice anywhere on the globe to extremely austere and demanding conditions." Figure 1 (not shown): a graph of the number of service members diagnosed as overweight from 1998 to 2008; the line is stable (no slope) at ~20,000 members until 2002, when it trends upward at a 45-degree angle (a slope of 15,000 additional members every 2 years). Source: USA Today using DoD data.
For example, the Army kicked out 1,625 active-duty personnel for being out of shape during the first 10 months of the year, nearly 16 times more than in 2007, the peak of wartime deployment cycles, the Washington Post reported. An article written by the Associated Press describing the overweight soldiers in our military (titled "Are U.S. Troops Too Fat to Fight?") discusses the overweight trends of the active duty and reserve military, but also the recruits who are too heavy to enter into the military. The military community has always been a cross section of society -- good or bad. These days, as our country's waistlines increase, it is only a matter of time before the military shares the traits of obesity and associated illnesses such as diabetes, heart disease, and some forms of cancer, to name a few. According to a 2007 article in the American Journal of Health Promotion, the Department of Defense spends more than $1.1 billion annually for medical care associated with excess weight and obesity. The increase in health care costs does not end when active duty does, either. The Veterans Affairs health system is increasingly strained by vets piling on pounds and developing weight-related diseases like diabetes. They tend to be sicker than the general population. More than 70 percent are overweight and 33 percent are obese, said Richard Harvey, a health psychologist at the VA Center for Health Promotion. Pain is the biggest reason they give for not exercising, and 31 percent say a disability prevents it, he said. About 20 percent of veterans have diabetes, compared to 7 to 8 percent of the general population. Solution: Measuring weight on a scale is the standard method for determining whether an individual has gained weight. However, scales are often un-calibrated and only register significant, observable weight gains of several pounds.
In addition, weight gain may be inaccurately attributed to water gain, muscle mass increase, or a menstrual cycle (in women) -- factors that fluctuate daily and make trending of weight changes difficult. Lastly, an individual's weight change is best trended when measured and compared at the same time and under the same conditions each day, which is difficult in military settings. Overall, the best this standard method could ever accomplish is to indicate a significant weight change after it has already occurred. A better method would provide a soldier an indication that they may be gaining weight before they actually gain weight, while they can still take action to prevent the weight gain. One potential solution is to provide soldiers with the status of fat-storing metabolic processes occurring inside their bodies, which would allow for the correction of poor eating and exercise habits, or the reinforcement of effective ones, before weight gain is measurable. One method would be the use of a simple, reliable, non-invasive and inexpensive test that, when applied, would indicate whether an individual's metabolism is storing fat, burning fat, or adequately balancing energy intake to output. When this test is complemented by nutritional education and used by the soldier before eating, it would either discourage them from consuming potentially fattening items or reinforce good eating habits, depending on the results of the test. In addition, utilizing a test indicator between meals would encourage an individual to exercise more if a negative indication is realized, as well as strengthen willpower to not snack. Operational Application: this new capability would be utilized by members to make better, more informed intervention decisions. The first detectable indication of a military member's weight gain is when superiors notice that the individual's uniforms appear tight.
In those situations, the supervisor is alerted and will schedule counseling with the member. Part of the counseling will include a recommendation to see a nutritional expert (either at a clinic, gym, or via referral to civilian care). In addition, the individual may self-report to a nutritional expert. Also, an individual failing the waist measurement or a body mass index standard will be referred to a nutritional expert as part of the weight management program. The nutritional expert will provide them with the usual instruction and information, but will also be able to provide the member with a tool to check their body's current metabolic processes to identify whether their weight gain metabolic processes are switched on or off. This simple saliva test would be accomplished before the individual has a meal to help reinforce good dietary behaviors or to prompt dietary adjustments before they eat. This feedback is provided immediately, allowing the member to make better decisions before additional weight gain occurs and is detected (via measurement on a scale). Active duty members, dependents and retirees who are diagnosed as obese would be prescribed this test, to be accomplished by the individual as part of the treatment. Members on the weight management program may be required to record the time, date and results of daily measurements to accompany recordings of caloric intake to enable trending. This capability will be commercialized and marketed to the public, with the DoD holding the IP rights, allowing anyone to purchase the test over the counter. The test will be designed to be administered inconspicuously to allow individuals to test themselves in public, discreetly and privately. Scientific Approach: A biomarker in the saliva could be adapted and commercialized to provide an effective indicator for assessing fat-storage metabolism processes.
Saliva is being researched as the diagnostic fluid of the future, as it offers a quick, uncomplicated, and non-invasive method for sample collection while still providing usable information on metabolic processes. Salivary biomarkers have been successfully integrated into developmental and behaviorally-oriented research. One such biomarker is adiponectin, a hormone secreted by fat cells that could be utilized as a biomarker for potential weight gain (mentioned here as an example). Adiponectin (Ad) is an adipocyte-derived hormone that plays an essential role in regulating insulin sensitivity, inflammation, and atherogenesis. In adults, adiponectin levels in the blood are inversely correlated with body fat percentage and are significantly lower in obesity. In other words, the more obese the patient, the lower the adiponectin levels in the blood. Levels of adiponectin may help to estimate the risk of obesity-related diseases and conditions such as arteriosclerosis, coronary arteritis, type II diabetes, peripheral artery disease, insulin resistance, and cardiovascular, heart and coronary diseases. Weight reduction significantly increases the circulating levels of adiponectin. Standard adiponectin testing is performed on a blood sample normally drawn from a vein in the patient's arm. Adiponectin Saliva Test: Levels of some hormones in saliva change in a fashion similar to that in plasma in response to a disease or physiological condition. Since saliva is an easy-to-obtain biological fluid, measurements of salivary hormonal changes are preferred in diagnoses and treatments. Therefore, it is of interest to examine the nature of salivary Ad. While there have been two publications in the literature reporting the presence of Ad in human saliva, the nature of salivary Ad has not been characterized for possible use as a biomarker. The presence of Ad inhibitor(s) in saliva that co-eluted with the dimeric form of Ad in an assay may lead to underestimation of Ad in saliva.
To use Ad reliably, a method must be developed to separate adiponectin from inhibitors of adiponectin-anti-adiponectin binding before assay. Other biomarkers in saliva may be more effective and accurate indicators for assessing fat-storage metabolism processes and will need to be researched and developed. Figures 2 & 3 (not shown): graphics of an individual using a possible configuration of the commercial product (configured to look like a toothpick) that has a biomarker colorimetric detection coating embedded in the end as a biomarker test for weight-gaining metabolic processes. In figure 2 the color green appears, indicating the absence of biomarker(s) in an individual's saliva and that fat-storage metabolism processes are not occurring, allowing for behavioral reinforcement; in figure 3 the color red appears, indicating the presence of biomarker(s) in an individual's saliva and that fat-storage metabolism processes are occurring, allowing for behavioral modification. PHASE I: Phase I work will involve research to identify potential biomarker(s) in saliva that will allow for the design of a colorimetric test for determining the presence or absence of the weight change metabolic biomarker(s). If adiponectin (Ad) in human saliva is selected as one of the biomarkers, the nature of salivary Ad will need to be characterized. The colorimetric, saliva-based test to be developed must be non-toxic and disposable. The conclusion of the Phase I project is the successful proof-of-concept demonstration of the biomarker in a lab setting using human saliva/simulant with optimal concentrations of biomarker(s). Appropriate controls will be used, and the target accuracy is 75% to 90% accurate identification of weight gain biomarker(s) presence/absence. PHASE II: Phase II work will optimize and validate the biomarker saliva test developed in the Phase I project.
Manufacturing processes will be refined and optimized in preparation for mass production and commercialization. The detection levels of the presence/absence colorimetric test will be optimized to minimize the number of false negative and false positive test results for typical adult saliva. If necessary, the FDA approval pathway should be outlined, and IRB oversight may be needed. Validation may include comparisons of detected levels of selected biomarkers in saliva with concentrations obtained from other methods (i.e., blood samples). Phase II technical proposals should include a detailed explanation of how the small business will obtain a monetary return on investment within two years of completion of Phase II (e.g., through sales, licensing agreements, venture capital, non-SBIR grants). The conclusion of the Phase II project is the successful production of a biomarker saliva test system and testing using human saliva. Appropriate controls will be used, and the target accuracy is 90% to 95% accurate identification of weight gain biomarker(s) presence/absence. Potential commercial and clinical partners for Phase III and beyond should be identified. PHASE III: If successful, Phase II work will result in the development of a commercial saliva-based test for weight change metabolic biomarker(s). During Phase III, transition of the manufacturing processes to a large-scale manufacturer may be needed, as well as the development of optimized packaging, marketing and advertising to maximize commercial applicability. Additional experiments may be performed as necessary to prepare for FDA review. A plan for protection of intellectual property should be created and executed. A detailed market analysis will be conducted, an initial clinical application for the system will be selected, and a commercial partner selected. This new test will be available to military personnel and veterans who are diagnosed as obese and/or struggling with being overweight.
Commercial applications include dieticians and health professionals around the world who could prescribe or recommend this monitoring tool for treating obesity and diseases impacted by being overweight.
OBJECTIVE: Define and develop existing, validated, pre-clinical biomarkers of organ-specific injury that correlate with diverse types of injury, including but not limited to systemic toxicity. Define and resolve issues involved with the use of a diverse set of biomarkers with a single multiplexed methodology. Define and resolve issues related to the isolation and use of diverse biological samples, including but not limited to plasma and urine. Develop a prototype multiplex biomarker assay and algorithm specific and sensitive to diverse and common types of organ injury, including but not limited to kidney, liver, heart, and lung. DESCRIPTION: Biomarkers are biometric measurements that provide critical quantitative information about the biological condition of the individual being tested. The literature is rich with pre-clinical toxicity biomarkers, particularly hepato- and nephrotoxicity biomarkers, many of which are perhaps even useful for organ injury independent of systemic toxicity, such as ischemia. Of particular value are early, predictive, noninvasive biomarkers that have potential clinical transferability. Efforts between regulatory agencies and the pharmaceutical industry are underway for the coordinated discovery, qualification, verification and validation of early predictive toxicity biomarkers. Early predictive safety biomarkers are those that are detectable and quantifiable prior to the onset of irreversible tissue injury and which are associated with a mechanism of action relevant to a specific type of potential hepatic (Daniel J Antoine, 2013) or renal injury (Vishal S. Vaidya, 2008). Potential drug toxicity biomarkers are typically endogenous macromolecules present in diverse types of biological fluids, with varying immunoreactivity, which can present bioanalytical challenges. Further, a panel of biomarkers representative of diverse types of organ injury, utilized in a single multiplex bioassay, presents bioanalytical challenges.
For example, assessment of liver injury may involve early markers of metabolic and excretory function. Proteins, small molecules, lipids, and nucleic acids could all indicate the onset of early liver injury. In some cases, plasma proteins are best to indicate the presence of liver injury that arises from drug-induced liver injury, whereas in liver cholestasis a combination of protein and nucleic acid biomarkers may be more appropriate. In a similar manner, proximal tubule injury in acute kidney injury may be best indicated by a protein in urine rather than plasma. Thus, while a multitude of pre-clinical organ-specific candidate biomarkers are evident in the literature, a wide array of methodologies and analytical methods would be required to assess all of them in order to screen for early, predictive indicators of organ injury. A single multiplex bioassay, sensitive and specific to a multitude of highly prioritized organ-specific biomarkers, is required. The bioassay must be able to analyze a diverse set of macromolecules, originating from a diverse set of biological specimens, in a single detection platform. PHASE I: Identify and define the number and physical properties of existing pre-clinical candidate biomarkers of highly prioritized organ-specific injury, particularly relevant to, but not limited to, systemic toxicity of the kidney, liver, heart, and lung. Identify and define all technological barriers related to development of a diverse set of biomarkers on a single multiplex bioassay with a single detection platform. Conceptualize and design an innovative solution to overcome the technical challenges of measuring diverse types of macromolecules on a single multiplex bioassay with a single detection system. PHASE II: Using results from Phase I, fabricate and validate a prototype bioassay with the following requirements: 1. The bioassay is multiplex; 2. The biomarkers originate from diverse biospecimens; 3.
The biomarkers are representative of multiple types of early organ injury; 4. The bioassay requires a single detection system; 5. Reasonable time to results (Threshold: 4 h; Objective: 1 h); 6. The biomarker assay must have significant commercialization potential. Key deliverables: 1. A technology that can predict the onset of organ injury early. 2. A technology that is capable of detecting and analyzing diverse types of macromolecules (protein, nucleic acid, small molecules such as lipids or hormones). 3. A technology that can handle and process specimens from diverse types of biospecimens. 4. Ease of use; must not require extensive laboratory experience or training. 5. The process will include all steps from initial treatment of the sample to readout of the result. PHASE III DUAL USE APPLICATIONS: Commercialization and transition pathway identified for the maturing, refined product. It should identify one or more Phase III military applications or acquisition programs and likely path(s) for transition from research to operational capability. It should identify EITHER (a) one or more potential commercial applications OR (b) one or more commercial technology(ies) that could potentially be inserted into defense systems as a result of this SBIR project.
OBJECTIVE: The objective of this topic is to develop and demonstrate a wearable, finger-mounted ultrasound transducer and ultrasound imaging platform that uses wireless connectivity for image display and operator interface functions on common, commercially available handheld platforms. Medics in isolated environments are now conducting FAST exams in the field to determine internal injuries before casualties are transported out of the area, and current wired handheld ultrasound transducers/probes are too large and bulky to be used in the combat environment. Medics need a finger-mounted probe that slides under body armor to examine casualties, with the capability to transmit ultrasound images wirelessly from the wearable finger probe to a SMART device connected to a secure communication network that can further transmit these images to a Medical Officer in the rear area. This research will incrementally advance the state of the art for point-of-injury care and on attended casualty evacuation vehicles, such that the final demonstration shows proof-of-concept feasibility for medical information exchange and telementoring from any location on the battlefield. DESCRIPTION: This topic is designed to focus on and address the current gap in ultrasound capability available to isolated combat environments. The FDA approved the first portable ultrasound-Smartphone device in January 2011. Medics are working with small combat teams in extremely isolated environments where casualty evacuations are not readily available. Finger-mounted ultrasound transducers have demonstrated ease of use over traditional hand-held transducers and offer the opportunity to simplify the use of ultrasound and allow wider adoption in military and point-of-care settings. However, when coupled to traditional stationary ultrasound imaging platforms, operator freedom is impeded. A further barrier to wide adoption is the lack of training to interpret images.
New portable electronic platforms offer portable display and connectivity features, which offer compelling advantages for ultrasound imaging and archival in the field. These "rear echelon" supporting elements could receive imagery in near real-time over a secure military radio network from remotely located Medics, and Medical Officers can offer helpful guidance or help interpret image acquisition in real time. A wearable probe and imaging platform with wireless connectivity (e.g., Ultra Wideband, Secure Bluetooth, Ultraviolet communications, etc.) for image display, image archiving and user interface functions would simplify the acquisition of ultrasound images and free the operator, enabling greater utility especially for: a) field combat/portable use and b) procedure-driven point-of-care use, including intra-operative use, vascular access procedures, biopsies, etc. By leveraging commonly available third-party display and data entry platforms in conjunction with the wearable finger probe (i.e., SMART devices, tablets, etc.), a new level of portability, access and ease of use can be introduced to ultrasound imaging. After Phase III development, the final production model of the secure wireless capability must be ruggedized for shock, dust, sand, and water resistance to enable reliable, uninterrupted operation in combat vehicles on the move, to include operation and storage at extreme temperatures, and the developer's kit must be able to install this capability on the devices. Size and weight are important factors. Quantitative values for acceptable operational and storage temperatures and power requirements should be planned to comply with applicable MIL-SPECs (available online). PHASE I: Research solutions for the technical challenges identified above for a capability that incorporates a feasible solution for a system, with specifications for the ultrasound finger probe and wearable imaging platform with wireless connectivity.
Develop a finger probe and demonstrate wireless connectivity (e.g., Ultra Wideband, Secure Bluetooth, Ultraviolet communication, etc.) for imaging and archiving using a commercially available imaging platform. Flesh out the commercialization plans developed in the Phase I proposal, for elaboration or modification in the Phase II proposal. Explore commercialization potential with civilian emergency medical service systems development and manufacturing companies. Seek partnerships within government and private industry for transition and commercialization of the production version of the product. PHASE II: From the Phase I design, develop and demonstrate a functional prototype of the finger probe and wearable imaging platform with wireless connectivity for data display and user interface functions. Conduct regulatory and safety testing to support clinical use. Design and implement a field trial to validate the superiority of the finger probe/wearable platform versus a conventional probe/imaging platform in addressing field use of ultrasound. In addition to demonstrating secure wireless connectivity to mobile SMART phones, the prototype device should demonstrate and integrate communications into the military tactical radio communications networks. Demonstrate the system with soldier medical attendants in a relevant environment, such as a U.S. Army TRADOC Battle Lab. Flesh out the commercialization plans contained in the Phase II proposal for elaboration or modification in Phase III. Firm up collaborative relationships and establish agreements with military and civilian end users to conduct proof-of-concept evaluations in Phase III. Begin to execute the transition to Phase III commercialization in accordance with the Phase II commercialization plan. PHASE III: Refine and execute the commercialization plan included in the Phase II proposal.
Phase III will commercialize the finger probe and wearable imaging platform for end-user sale in both the military and private sector markets for commercially available devices. This effort includes, but is not limited to, obtaining FDA and other regulatory clearances, manufacturing, clinical studies, and product enhancements to support other clinical applications. Execute a proof-of-concept evaluation in a suitable operational environment. Present the prototype project, as a candidate for fielding, to applicable Army, Navy/Marine Corps, Air Force, Coast Guard, and Department of Defense Program Managers for Combat Casualty Care systems, along with government and civilian program managers for emergency, remote, and wilderness medicine within state and civilian health care organizations, the Departments of Justice, Homeland Security, and the Interior, and the Veterans Administration. Execute further commercialization and manufacturing through collaborative relationships with partners identified in Phase II.
OBJECTIVE: Develop and test a single head-mounted device capable of measuring vestibular function, including assessment of vestibular-ocular, vestibular-auricular, vestibular-perceptual and vestibular-spinal reflexes. DESCRIPTION: Dizziness and vertigo are common in nearly all reported studies of mTBI and contribute disproportionately to disability (Terrio et al., 2009). The 2009 in-theater Iraq study by Balaban and Hoffer found vestibular pathology in over 90% of the observed cases of acute mTBI and over 80% of the chronic mTBI cases. Incidence rates vary depending on the injury cause, site, criteria and whether the clinician's primary expertise is neurological or otolaryngological. There is a need to develop simple, easy-to-use, portable screening devices to assess military personnel in theater to determine whether the patient needs to be evacuated for higher levels of care. In the United States, mTBI accounts for approximately 90% of the new cases of medically diagnosed head injuries each year and is associated with headache, dizziness, vertigo, disequilibrium, or disorientation, often in the absence of abnormal brain imaging results (Gottshall et al., 2003). Recent advances in several technology areas, including MEMS accelerometers and miniature high-resolution cameras, make feasible the development of a head-mounted assessment device that will permit objective measures of vestibular function. Traditional stimuli for vestibular reflex responses required acceleration of the head or body. Increased understanding of vestibular reflexes has led to new tests of vestibular function such as the cervical vestibular evoked myogenic potential (cVEMP) and the ocular vestibular evoked myogenic potential (oVEMP). The stimuli for these reflexes are loud clicks (cVEMP) or a tap on the forehead (oVEMP), which preferentially test the saccule or utricle function of the vestibular otolith organs.
The combination of technology and understanding of vestibular function will permit a single, well-designed, head-mounted device to perform the assessment of several clinically important tests including: Vestibular Ocular Reflex (VOR); head thrust test of otolith function; electronystagmography; Dynamic Visual Acuity Test (DVAT); dynamic body balance; and Subjective Visual Vertical (SVV). This project seeks the design and development of a device that will objectively assess as many aspects of static and dynamic vestibular function as is currently possible with available miniaturized technologies. This device will benefit military medics assessing vestibular dysfunction following concussive events as well as civilian clinicians diagnosing patients with balance disorders. PHASE I: Integrate a combination of vestibular tests to assist clinicians in the assessment of vestibular function. At a minimum the tests will include SVV, VOR, DVAT, head thrust, ocular counter-roll and balance measures. Identify the best technologies to provide vestibular stimuli for the selected combination of tests as well as the optimal sensors to measure the reflex and perceptual responses. Develop and demonstrate a prototype capable of both providing stimuli and measuring responses. PHASE II: Refine the prototype developed in Phase I to demonstrate and clinically validate its capabilities in health care settings. Using feedback from operators testing in the clinical environment, provide technician-friendly interfaces for data collection and analysis. PHASE III DUAL USE APPLICATIONS: In conjunction with health care professionals, transition the new device into clinical use in both the military and civilian sectors.
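As a concrete illustration of one of the measures named in this topic, the vestibular ocular reflex is commonly summarized by its gain: the ratio of compensatory eye velocity to head velocity, which is near 1 in healthy subjects. The sketch below is a hypothetical example, not a specification from the solicitation, of estimating that gain from paired head and eye angular-velocity samples such as a head-mounted MEMS sensor and eye-tracking camera might provide; the sample values are invented.

```python
def vor_gain(head_velocity, eye_velocity):
    """Estimate VOR gain as the (negated) least-squares slope of eye
    velocity against head velocity; the eye moves opposite the head."""
    n = len(head_velocity)
    mean_h = sum(head_velocity) / n
    mean_e = sum(eye_velocity) / n
    cov = sum((h - mean_h) * (e - mean_e)
              for h, e in zip(head_velocity, eye_velocity))
    var = sum((h - mean_h) ** 2 for h in head_velocity)
    return -cov / var

# Invented angular-velocity samples in deg/s: eye responses are
# near-compensatory (opposite sign, slightly smaller magnitude).
head = [10.0, 25.0, -15.0, 40.0, -30.0]
eye = [-9.0, -23.0, 14.0, -38.0, 28.0]
print(round(vor_gain(head, eye), 2))
```

A clinically useful device would compute this over many samples per head thrust and flag markedly reduced gain; the point of the sketch is only how raw sensor streams reduce to an objective, quantitative measure.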
OBJECTIVE: Develop a mobile, web-based application that assists/guides patients with hearing loss and tinnitus through aural rehabilitation therapy (improving signal identification and speech-in-noise function) and provides tinnitus management. The program will identify best-practice applications for servicemen struggling to habituate to the effects of hearing loss and tinnitus. Possible solutions are to incorporate components of cognitive-behavioral therapy (CBT), tinnitus masking (TM), tinnitus retraining therapy (TRT), and neuromodulation (NM), along with introducing aural rehabilitation therapy (ART) (J. A. Henry, Schechter, et al., 2006a, 2006b). The tool will identify users with ear-level devices (hearing aids, noise generators, cranial nerve stimulators, and combination instruments) and accommodate and improve effective use of such devices. The tool's applications will be compatible with networks and telemedicine data flows within the DOD/VA community and will protect information security by preventing the exchange or transmission of personally identifiable information. DESCRIPTION: Due to excessive noise exposure and noise-induced trauma, servicemen are often plagued with the effects of hearing loss and/or tinnitus. Since 2005, the VA Annual Benefits Report has revealed that tinnitus is the most common individual service-connected disability in veterans, and hearing loss is the second most prevalent (AVBR 2011). As of the most current report (AVBR 2011), there were 840,864 veterans who had been awarded a service connection for their tinnitus, and 148,345 were added in 2012. Hearing loss accounted for 701,760 service-connected members in 2011, and an additional 90,427 were documented in 2012. Hearing loss is routinely rehabilitated with hearing aids, but aural rehabilitation to accommodate to the hearing prosthesis and optimize function is not standard practice.
There is currently no standard treatment for tinnitus, nor is aural rehabilitation therapy available to these patients. Further, the Agency for Healthcare Research and Quality published a comprehensive analysis of the comparative effectiveness of current treatments for tinnitus and found that, of 9,725 journal citations, only 52 were worthy of comparative review, and the strength of evidence of effective treatment for tinnitus was low in all cases (AHRQ CER Review 122). Additionally, military members have limited access to care in remote settings or while deployed in theater, where many of these injuries are sustained through noise trauma. There is a growing consensus that the use of sound therapy in either TRT or the VA's Progressive Tinnitus Management program (PTM-VA), alone or combined with other novel and potentially synergistic emerging technologies such as ART and NM, may be the optimal approach. However, neither of the currently utilized systems (TRT, PTM-VA) is specific to military or former military populations. This system proposes a military-specific aural rehabilitation therapy (ART), with military-unique content supporting auditory processing, to be augmented with the use of appropriate synergistic devices. Following the identification of the most appropriate device/strategy or combination thereof, the web-based system will utilize military noise (tank, gunfire, ship noise, background commands, etc.) along with branch-specific military verbal commands to help the hearing-impaired or tinnitus-suffering patient develop auditory processing strategies targeted to improve speech-in-noise listening abilities, together with sound identification exercises. PHASE I: Identify/develop synergistic components of the best tinnitus management strategies and combine them into a web-based program that includes military-specific aural rehabilitation therapy as outlined.
Phase I includes identification and recording of the most common military background noises and pairing them with military-unique verbal commands. The program should be developed with aspects of speech-in-noise testing and listening exercises, possibly in a gaming scenario. The noise and commands should be military-specific, realistic, and relevant. The signal-to-noise ratio should change as the patient's auditory processing improves. PHASE II: Phase II will focus on finalizing the program and validating its effectiveness. Commonly accepted, already standardized speech-in-noise tests conducted before and after program use will determine the effectiveness of the ART system. Effective navigation of the program and attainment of program objectives can serve as secondary measures of effectiveness. PHASE III: Phase III will focus on integrating the mobile device into DOD/VA medical treatment facilities, both hardened and expeditionary. The application should be available for wide dissemination for clinical use through web accessibility.
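The requirement that the signal-to-noise ratio track the patient's progress is commonly met in psychoacoustics with an adaptive staircase procedure. A minimal sketch of a 1-up/2-down rule; the step size, floor, and ceiling values are illustrative assumptions, not taken from the solicitation:

```python
def update_snr(snr_db, correct_streak, correct,
               step_db=2.0, floor_db=-10.0, ceiling_db=20.0):
    """One trial of a 1-up/2-down staircase: lower the SNR (harder)
    after two consecutive correct responses, raise it (easier)
    after any error. All parameter values are illustrative."""
    if correct:
        correct_streak += 1
        if correct_streak == 2:
            snr_db -= step_db      # listener is improving: make it harder
            correct_streak = 0
    else:
        snr_db += step_db          # listener missed: make it easier
        correct_streak = 0
    snr_db = max(floor_db, min(ceiling_db, snr_db))
    return snr_db, correct_streak

# Two correct answers in a row lower the SNR from 10 dB to 8 dB;
# a subsequent error raises it back to 10 dB.
snr, streak = 10.0, 0
for answer in (True, True, False):
    snr, streak = update_snr(snr, streak, answer)
print(snr)  # prints 10.0
```

A 1-up/2-down rule converges near the 70.7%-correct point of the listener's performance curve, which keeps the exercise challenging without being discouraging; other up/down ratios shift that target.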
OBJECTIVE: The objective of this effort is to develop a new, innovative technology, which may include the use of novel biomaterials, nanotopologies, cellular/tissue-based strategies, or biologics, to reconstruct and regenerate vascular tissue in the extremities after traumatic injury. DESCRIPTION: Blood vessel trauma leading to hemorrhage or ischemia is a significant cause of morbidity and mortality in wounded warriors. Reconstruction is now applied in approximately half of these cases, a rate that has risen fivefold in modern combat. Selective ligation and interposition bypass grafting with an autologous vein remain the standard treatments for segmental blood vessel loss. However, the complexity and extent of soft tissue trauma involving multiple extremities often leave vascular surgeons with few sites from which to harvest autologous grafts. Hence there is a need for alternative methods of vascular reconstruction and revascularization. As the therapeutic field continues to advance, it is likely that nanotechnology, cellular, mechanical, biologic, or pharmacological components will increasingly be incorporated into therapeutic strategies to promote or direct revascularization and repair segmental vascular defects. PHASE I: Conceptualize and design an innovative solution for addressing vascular injury that will enable the reconstruction, and promote the regeneration, of vascular tissue in the extremities where autologous graft harvest is not an option. Such strategies may include the use of biomaterials, nanotopologies, or cellular, tissue, pharmacological, or biological components. It is likely that the most successful therapeutic approaches will incorporate two or more of the described components. The solution should be able to address vascular defects of various lengths, diameters, and complexities. The solution can be a permanent or temporary repair strategy that may address immediate limb salvage or long-term regeneration.
The required Phase I deliverables will include: 1) a research design for the proposed therapeutic strategy and 2) a preliminary prototype with limited testing that demonstrates in vitro proof-of-concept evidence of the durability and efficacy of the technology (to be executed in Phase I). Other supportive data from in vivo proof-of-feasibility studies demonstrating revascularization that leads to functional improvement and attenuates tissue loss may also be provided during this Phase I effort. PHASE II: The researcher shall design, develop, test, finalize, and validate the practical implementation of the prototype therapeutic that implements the Phase I methodology to reconstruct or regenerate vascular tissue over this 2-year effort. The researcher shall also describe in detail the transition plan for the Phase III effort. PHASE III: Plans for commercialization/technology transition and the regulatory pathway should be executed here and lead to FDA clearance/approval. They include: 1) identifying a relevant patient population for clinical testing to evaluate safety and efficacy and 2) GMP manufacturing of sufficient materials for evaluation. The small business should also provide a strategy to secure additional funding from non-SBIR government sources and/or the private sector to support these efforts. Military application: The desired therapy will allow military practitioners to reconstruct and revascularize traumatized extremities when autologous grafts are not available. Commercial application: Healthcare professionals worldwide could utilize this product as a therapy meant to improve the standard of care presently available to patients suffering from vascular trauma.
OBJECTIVE: To develop a rehabilitation and assistive technology that enhances and/or restores upper limb motor function lost to traumatic combat injuries. Develop a portable, easy-to-use, hand-worn assistive device that is applicable in daily life and outdoor activities. The device should have biomimetic motion and structural similarity to the biological hand. The device should also be safe to use, relatively lightweight, affordable, scalable, and have low power consumption and wireless capability for data transfer. DESCRIPTION: Musculoskeletal injury (MSI) is the leading cause of health problems for the military. It can be caused by traumatic combat injuries and by physically straining risk factors such as military training, repeated combat deployment, carrying heavy loads, standing for extended periods of time, walking long distances, and participating in sports. Vascular limb injuries are now among the main MSI injuries that cause severe bleeding, ischemia, amputation, and death. Improved torso protection for soldiers has resulted in new injury patterns: almost 46% of all combat vascular injuries affect the lower limbs, and almost 25% affect the upper limbs. Nineteen percent of soldiers returning from combat after completing deployment required an orthopedic surgical consultation, and 4% required surgery. These statistics highlight the great need for improved orthotic rehabilitation and assistive devices to help restore limb function. The device is intended to be used as both an assistive and a rehabilitation mechanism. It should be able to transition easily between assistive device and rehabilitation tool if the patient's injury allows for recovery. As an assistive device, it is worn by patients for extended periods of time to accomplish day-to-day activities where the use of two hands is required. It is also important that the device be lightweight and safe to wear in close contact with human skin.
The intent of the rehabilitation device is to enhance recovery until adequate muscle function is restored. The design should be scalable to fit different hand sizes and have wireless capability so that an occupational therapist or other clinician can monitor the patient's recovery progress. The device can then be set aside once a point of recovery is reached and limb function is restored. Challenges associated with upper limb impairments include muscular weakness, pain, sensory loss, and decreased grip, pinch, strength, finger tactile sensation, and cognitive function. The hand presents a particularly challenging area in this research endeavor because of its complexity: many degrees of freedom and large numbers of tactile sensors in a relatively small area. Devices presently available on the market are heavy, bulky, noisy, and not aesthetically pleasing. Although current treatments have been shown to be successful in assisting and returning patients' motor function to a pseudo-normal level, they demand substantial time and resources from therapists. These limitations create an increased need and a large market demand for improved hand-worn devices. The last several years have brought actuator technology breakthroughs that open new opportunities to advance research on assistive and rehabilitation devices. Future research efforts should address major topic areas such as actuator types, power transmission, degrees of freedom (DOF), intention sensing, and control methodologies that behave like and exploit the biological hand's skeletal system. A device that can assist and rehabilitate patients without great demands on time, labor, and resources would be in high demand in the orthotics industry. Requirements for an improved hand-worn device: Safety - The device comes in contact with the wearer; any malfunction can seriously harm the user.
Design - Mechanical designs should consider the possibility of unpredicted, erroneous operation of the device controller while the device is actively actuated.
Size - The design should be scalable to fit different hand sizes and be easily reproduced.
Lightweight - The technology is intended to assist with muscle weakness and function loss, so it should not present additional barriers to keeping the device on for extended periods of time.
Multiuse - The device should be intended for a wide range of patients with hand weakness, whether they need permanent assistance or short-term muscle rehabilitation.
Biomimetic - The device should resemble human biological muscle function and structure as closely as possible.
Low power consumption - The actuated device should be energy efficient and wearable for extended periods of time without frequent plugging in or recharging.
Wireless technologies - The device should have wireless capability to transmit limb-recovery data to computing devices and physicians for real-time results.
Commercialization - The final device should be available at an affordable price for the general public.
PHASE I: This phase is a feasibility study that should demonstrate or determine the scientific, technical, and commercial merit and feasibility of a selected concept. Identify and define shortcomings of the current state of technology. Establish performance goals. Design/develop an innovative concept along with limited testing of materials. Design a working proof of concept and key technological components. Determine the technical feasibility of the designed approach. PHASE II: Finalize the design from Phase I and complete component design, fabrication, and laboratory characterization experiments. Phase II should produce prototype hardware and construct and demonstrate the operation of the prototype against the performance goals established in Phase I. The device should be rigorously tested and further developed to demonstrate capability.
Provide a detailed plan for hardware/software integration and fabrication procedures. Show a clear path to commercialization. PHASE III DUAL USE APPLICATIONS: The final product of this phase is a hand-worn rehabilitation and assistive device for musculoskeletal traumatic injuries sustained during or after military deployment. Phase III should demonstrate complete readiness of the technology for use and fabrication. It should demonstrate a path toward commercialization at a price affordable to the general public. It should also develop a detailed procedure for use, maintenance, and recalibration of the device.
OBJECTIVE: The objective of this effort is to develop a new tool or technology that can optimize training outcomes for myoelectric prostheses. DESCRIPTION: Myoelectric prostheses monitor electrical signals generated by a patient's muscle contractions and use those signals to control prosthetic joint movements. Successful use of myoelectric prostheses depends on providing patients with high-quality, individualized pre-prosthetic training, which becomes more important with increasingly complex injuries and/or use of more advanced prostheses. A number of different training paradigms have been studied since the advent of myoelectric prostheses [2-5]; however, few of those paradigms have transitioned into commercially available training materials. Additionally, the limited number of commercially available myoelectric training hardware and software packages that do exist are often simplistic, manufacturer-specific, expensive, and not motivating. While evidence is available supporting that patients can learn with a myoelectric training system, little to no literature exists showing that training actually improves patients' clinical outcomes. Therefore, a need exists for a myoelectric prosthesis training tool or technology that can improve patient clinical outcomes, motivate patients to train, and be used in the clinic and remotely by the patient. The metrics we would use for this effort are not how the program works (that would be something the developer would establish), but how it impacts upper extremity prosthesis use. The primary metric would be compliance: does the amputee continue to use the advanced prosthesis, or do they simply stop using it and put it in the back of the closet? If amputees cannot control the advanced prosthesis because they did not receive adequate training to fully prepare them to utilize it, they will stop using it.
A second metric would be what daily or work tasks an amputee is able to perform or accomplish with the myoelectrically controlled prosthesis that they could not with a simple hook prosthesis. Myoelectric controls drive the advanced prosthesis with muscles that a non-amputee would not normally use to move the fingers, hand, wrist, etc. This requires tremendous effort to retrain both the muscles and the person's thought process on how to use the prosthesis to accomplish tasks as simple as picking up a glass. More intricate work, such as buttoning a shirt, would require more intense training because the task involves coordinated finger work rather than simply opening and closing a hand. The third metric would be quality-of-life measurements; and a fourth metric would be social reintegration - that is, did the amputee, as a result of compliance and increased ability to control and work the advanced prosthesis (because of the advanced training), successfully return to a job or to the level of function they desired. Additionally, the training tool should be able to adapt to conventional and state-of-the-art control schemes, accommodate different amputation levels, and apply to patients with and without targeted muscle reinnervation (TMR). PHASE I: Conceptualize and design an innovative, manufacturer-agnostic training solution for myoelectric prostheses leading to a commercialized product that can quantifiably improve patient outcomes. Such strategies may include hardware, software, or a combination of the two. The solution should be affordable, portable, and reliable, such that training may begin in a clinical environment under the supervision of a clinician and continue at home or in other remote locations.
The required Phase I deliverables will include: 1) a research design for the proposed therapeutic strategy and 2) a preliminary prototype with limited testing to demonstrate proof-of-concept evidence of the durability and efficacy of the technology (to be executed in Phase I). PHASE II: The researcher shall design, develop, test, finalize, and validate the practical implementation of the prototype therapeutic that implements the Phase I methodology to optimize myoelectric prosthesis training outcomes over this Phase II effort. This effort should include human subject trials comparing the new training device with the current standard of care for myoelectric prosthesis training. The researcher shall also describe in detail the transition plan for the Phase III effort. PHASE III: Plans for commercialization/technology transition and the regulatory pathway should be executed here and lead to FDA clearance/approval. They include: 1) identifying a relevant patient population for clinical testing to evaluate safety and efficacy and 2) GMP manufacturing of sufficient materials for evaluation. The small business should also provide a strategy to secure additional funding from non-SBIR government sources and/or the private sector to support these efforts. Military application: The desired training tool will allow military practitioners to prepare amputees to control and retain the use of their myoelectric prostheses. Commercial application: Healthcare professionals worldwide could utilize this product to improve the standard of care presently available to patients with amputations.
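As background to the control principle described in this topic (muscle contraction signals driving prosthetic joint movement), a common processing chain is to rectify the raw EMG, smooth it into an activity envelope, and threshold the envelope into discrete commands. A minimal sketch; the window size, threshold values, and command names are illustrative assumptions, not taken from the solicitation:

```python
from collections import deque

def emg_envelope(samples, window=50):
    """Full-wave rectify the raw EMG samples and smooth them with a
    moving average, yielding an activity envelope per sample.
    The window size is an illustrative assumption."""
    buf = deque(maxlen=window)
    envelope = []
    for s in samples:
        buf.append(abs(s))           # rectification
        envelope.append(sum(buf) / len(buf))  # moving average
    return envelope

def grip_command(envelope_value, close_threshold=0.6, open_threshold=0.2):
    """Map a normalized activity level to a simple open/close/hold
    command; thresholds and hysteresis band are illustrative."""
    if envelope_value > close_threshold:
        return "close"
    if envelope_value < open_threshold:
        return "open"
    return "hold"
```

A training tool of the kind sought here would present this envelope (or the resulting commands) back to the patient in real time, so the patient learns to produce distinct, repeatable contraction levels before using the actual prosthesis.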