In this project, we developed an Android app called GuideMe. It connects two groups of users: visually impaired people (users) and volunteer helpers (helpers). Users need help in a variety of situations, for example when walking on an unfamiliar street or searching for something in a room, where a helper could provide guidance. A user can use the app to find and choose a helper from several available ones, then start a video call with him or her. As the call goes on, the helper can observe the user's visual surroundings and give suggestions, serving as the user's eyes.
With today's smartphones, taking photos is easy. At the same time, this increases the demand to manage photos across devices, a need that free cloud services can help address. Therefore, there is a need to manage photos between device storage and cloud storage. In this project, an Android application is developed that allows users to sync their photos using a free cloud service, Google Drive. Both sync-up and sync-down features are provided in the app.
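The sync-up and sync-down decision can be sketched as a comparison of the two photo sets. This is a minimal illustration, not the app's actual implementation: the function names and the plain string lists are assumptions, and the real app would obtain the cloud-side listing through the Google Drive API.

```python
# Hypothetical sketch: decide which photos to upload (sync up) and which
# to download (sync down), given filename listings from both sides.
def plan_sync(local_photos, cloud_photos):
    """Return (sync_up, sync_down) lists of filenames."""
    local, cloud = set(local_photos), set(cloud_photos)
    sync_up = sorted(local - cloud)    # on the device but not in the cloud
    sync_down = sorted(cloud - local)  # in the cloud but not on the device
    return sync_up, sync_down

up, down = plan_sync(["a.jpg", "b.jpg"], ["b.jpg", "c.jpg"])
# up contains "a.jpg" (upload), down contains "c.jpg" (download)
```

A real implementation would also compare timestamps or checksums to handle edited photos, not just filenames.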
YouDescribe is a project founded by the Smith-Kettlewell Eye Research Institute's (SKERI) Video Description Research and Development Center (VDRDC), with the aim of enabling sighted volunteers to record audio descriptions of online videos for the benefit of users with low vision. It helps such users understand and experience the visual content of online videos through audio descriptions played back alongside the video. VDRDC's current platform includes a server, a database, and a web interface to facilitate this.
Radio-Frequency Identification (RFID) systems have been widely used in various fields such as manufacturing, transportation, and health care. In these fields, real-time information collection is usually desirable or even a must, yet it is highly challenging with current RFID systems, because the hardware of RFID tags is too simple to support sophisticated operations. Most existing information collection protocols are developed for single-reader RFID systems, where the whole area is covered by one reader.
We present an end-to-end system for open-domain non-factoid question answering that consists of three components. (1) The query formulation module transforms the verbose, often ungrammatical and noisy question into a Boolean query of a few keywords. The generated query is then run through a commercial search engine to obtain matching documents from the Web. (2) The candidate answer generation module extracts potential answers from the retrieved documents. (3) The answer selection module identifies the best answer based on various criteria.
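The three-stage pipeline above can be sketched as follows. This is an illustrative toy, not the system described: the stopword list, the sentence-level candidate extraction, and the word-overlap selection criterion are all simplifying assumptions, and the search-engine call is omitted (retrieved documents are passed in directly).

```python
# Assumed toy stopword list for the sketch.
STOPWORDS = {"how", "do", "i", "the", "a", "an", "what", "is", "my", "to"}

def formulate_query(question):
    """(1) Reduce a verbose question to a Boolean query of a few keywords."""
    words = [w.strip("?") for w in question.lower().split()]
    return " AND ".join(w for w in words if w and w not in STOPWORDS)

def generate_candidates(documents):
    """(2) Extract potential answers; here, each sentence is a candidate."""
    return [s.strip() for doc in documents for s in doc.split(".") if s.strip()]

def select_answer(question, candidates):
    """(3) Pick the candidate sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(candidates, key=lambda c: len(q_words & set(c.lower().split())))
```

In the described system, stage (1) would feed a commercial search engine, and stage (3) would rank candidates on richer criteria than simple word overlap.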