Voice Controlled Office Assistant Using Abductive Networks ($19,968 Phase I option proposed)
Small Business Information
508 Dale Avenue, Charlottesville, VA, 22903
Keith C. Drake
Abstract
AbTech will survey available hardware and software to determine which capabilities should be included in an automated office assistant software suite. A language parsing and encoding system in ANN (a leading semantic understanding software package) will be used to parse and encode the ASCII output of a voice recognition system such as DragonDictate. The ASCII output from the voice recognition system will go to the ANN language encoding facility, which will parse each sentence and assign deep meaning to the words as a function of context. The encoded strings corresponding to the spoken words will then go to information processing modules developed with AIM, which will have been trained on the variable constructs permissible in English so that the encoded word-meaning strings can be used to select an appropriate response to the user. The NASA CLIPS expert system tool will be used to create expert systems capable of activating the command sequences needed for the various packages to accomplish the result the user desires, or of activating and displaying the function the user requests.

ANN will determine the universe of context by parsing the sentence and asking any required clarifying questions, and will then encode the words. The codes would be used to train AIM networks to select the correct response. Training sets would be prepared for each category included in the automated office suite of capabilities. After training, these or equivalent statements would cause the AIM models to select the appropriate application package activation sequence. Because of the robustness of the AIM network models, statements that were not part of the training set but were similar would also produce the desired response.
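The pipeline described above (recognized text, context-sensitive encoding, trained model selecting an activation sequence) can be sketched roughly as follows. This is a minimal illustration only: ANN's encoding facility and AIM's abductive networks are proprietary tools not shown here, so `encode_utterance`, `TRAINING`, and `COMMAND_TABLE` are hypothetical stand-ins, and a simple overlap match substitutes for a trained network.

```python
# Illustrative sketch of the proposed flow: recognized utterance ->
# encoded meaning string -> model selects an application activation
# sequence. All names here are hypothetical stand-ins for the ANN
# encoding facility and AIM network models described in the abstract.

COMMAND_TABLE = {
    "open calendar": ["launch scheduler", "show today"],
    "dictate letter": ["launch word processor", "new document"],
}

def encode_utterance(text):
    # Stand-in for ANN's context-sensitive encoding: here, just a
    # normalized set of lowercase content words.
    stop = {"the", "a", "my", "please"}
    return frozenset(w for w in text.lower().split() if w not in stop)

# "Training set": encoded example phrasings labeled with command
# categories, one per capability included in the office suite.
TRAINING = {
    encode_utterance("open my calendar"): "open calendar",
    encode_utterance("show the calendar"): "open calendar",
    encode_utterance("dictate a letter"): "dictate letter",
    encode_utterance("write a letter"): "dictate letter",
}

def select_command(text):
    # Stand-in for a trained AIM network: pick the training example
    # whose encoding overlaps most with the input. Even this crude
    # matcher tolerates phrasings absent from the training set, which
    # is the robustness property the abstract claims for AIM models.
    code = encode_utterance(text)
    best = max(TRAINING, key=lambda k: len(code & k))
    return COMMAND_TABLE[TRAINING[best]]

print(select_command("please open the calendar"))
```

"please open the calendar" was never a training phrase, yet its encoding overlaps the "open my calendar" example, so the scheduler activation sequence is selected.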
* Information listed above is as of the time of submission.