- Advanced Biometric Techniques
- Algorithms and Software for Problems in Radiosurgery, Radiation Therapy, and Other Medical Applications
- Anopheles Mosquito Comparative Genomics
- Automatic Emotion Detection
- CompuCell: Computational Methods for Simulation of Biological Development
- DARTS - Design and Analysis of Real-Time Systems
- Data Intensive Abstractions for High End Biometric Applications
- Debugging Grids with Machine Learning Techniques
- Designing Ultra-Dense Computers with QCAs
- Documenting Endangered Languages
- Dynamic Data-Driven Applications Simulation (WIPER)
- Environmental Simulation (NOM)
- ExPERTS - Energy/Power Efficient, Real-Time system Scheduling
- From Computational Discovery to Privacy Preservation in Social, Product, and Health Networks
- Graph Grammars for Semantics
- Intelligent Edge Devices
- Judicious Resource Management
- Languages and Systems for Data Intensive Scientific Computing
- Morph: Morphable Computer Architectures for Highly Energy-Aware Systems
- Neural Networks for Machine Translation
- PIM: Processing in Memory
- Protomol: Computational Methods for Simulation of Proteins
- SPIRIT: Spontaneous Information and Resource Sharing
- Scalable Bioinformatics
- Secure and Reliable Computation Outsourcing
- Sensor Networks
- Software Engineering of Scientific Software
- Study of the Open Source Software Phenomenon
- TeamTrak: Collaborative Mobile Computing
- Tracking the Wandering Mind
- Unsupervised Multilingual Language Learning
This project is investigating various biometric sources (face, iris, hand, fingerprint, and gait) and sensors (2D, 3D, infra-red, ...) with the goal of developing more accurate biometric techniques. Our research group is supporting the government programs on the Gait Challenge Problem, the Face Recognition Grand Challenge, and the Iris Challenge Evaluation.
Faculty: Bowyer, Flynn
This project is on the design, analysis, implementation, and experimentation of new algorithms and software for solving geometric optimization problems arising in radiosurgery, radiation therapy, and other related medical applications. A key step in radiotherapy and radiosurgery is to develop a treatment plan that defines the best radiation beam arrangements and time settings to destroy the target tumor without harming the surrounding healthy tissues. At the core of the planning process is a set of substantially challenging geometric optimization problems. We have been investigating a number of such geometric optimization problems, such as beam selection, beam shaping, surgical navigation and routing, sphere packing, shape approximation, leaf sequencing, field covering and partitioning, image segmentation, and beam source path planning. This is a joint project with the Department of Radiation Oncology, University of Maryland School of Medicine. Our goal is to incorporate our new algorithms and software into clinical radiation treatment planning systems for treating cancer patients.
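One of the subproblems named above, beam selection, can be cast as a set-cover-style optimization: choose a small set of beams whose combined coverage reaches every tumor voxel. The following is a minimal greedy sketch of that formulation; the beam names, voxel ids, and data structures are invented for illustration and are not the project's actual algorithms.

```python
# Hypothetical illustration: beam selection as greedy set cover.
# Each candidate beam is assumed to cover a known set of tumor voxels;
# we repeatedly pick the beam covering the most still-uncovered voxels.

def greedy_beam_selection(beams, tumor_voxels):
    """beams: dict name -> set of voxel ids; tumor_voxels: set of voxel ids."""
    uncovered = set(tumor_voxels)
    chosen = []
    while uncovered:
        # pick the beam that covers the most uncovered voxels
        best = max(beams, key=lambda b: len(beams[b] & uncovered))
        if not beams[best] & uncovered:
            break  # remaining voxels cannot be covered by any beam
        chosen.append(best)
        uncovered -= beams[best]
    return chosen, uncovered

beams = {"b1": {1, 2, 3}, "b2": {3, 4}, "b3": {4, 5, 6}}
chosen, missed = greedy_beam_selection(beams, {1, 2, 3, 4, 5, 6})
```

The greedy heuristic is a classic approximation for set cover; the project's real planning problems add geometric structure (beam shapes, dose constraints) that this toy version omits.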
In September of 2008, the National Human Genome Research Institute (NHGRI) and the National Institute of Allergy and Infectious Diseases (NIAID) of the U.S. National Institutes of Health approved funding for the sequencing of the genomes and transcriptomes of thirteen Anopheles species, as described in the white paper. Since the initial white paper was approved, two additional species have been added to the project. This project, coordinated by ND biologist Nora Besansky, was inspired by very ambitious goals: improved understanding of vectorial capacity, and the application of that understanding toward reducing malaria disease burden. The Notre Dame Bioinformatics Lab, in collaboration with Prof. Besansky, will help uncover the evolutionary genomics of the An. gambiae species group and the larger set of species in relation to malaria.
The goal of this project is to develop computer systems that automatically sense when a user is bored, confused, frustrated, etc., by monitoring facial features, speech contours, body movements, interaction patterns, and physiological responses. The emotion detection systems will be integrated into computer interfaces in an attempt to provide more effective, user-friendly, and naturalistic interactions. Several projects focusing on either adapting existing emotion detection systems or investigating new modalities for emotion detection are available.
We are creating a model that includes how genetics at the subcellular level interacts with biophysics at the cellular level to orchestrate the development of organisms. This model is implemented in a software package called CompuCell. Users define the model to simulate using BioLogo, a domain specific language that generates a simulation package for the desired model. We also work with biologists, physicists, and mathematicians in the development and validation of simulations of chicken limb development as part of a National Science Foundation Biocomplexity project.
Real-time embedded systems can be found in many applications, such as communication devices, transportation machines, entertainment appliances, and medical instruments. This research targets two important problems in real-time system design: performance analysis and scheduling algorithm design. Our current focus is on dealing with the uncertainty and flexibility present in many real-time control applications.
Research in biometrics depends upon the effective management and processing of many terabytes of digital data. Because these workloads are so data intensive, they are very challenging to scale up to large clusters and grids. To address this, we are designing a data repository and web-enabled tools that simplify browsing and processing large data sets. Our work has produced some of the largest data analysis results to date in the field, reducing the execution time of some problems from years into days.
Faculty: Flynn, Thain
Debugging large computing grids is notoriously hard. What can an end user do when a workload of millions of jobs experiences thousands of failures? We propose that data mining techniques are an effective way of explaining what happens to large workloads in grids. We are building and deploying tools that explain failures in computing grids of thousands of processors.
Faculty: Chawla, Thain
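The flavor of this mining can be shown with a one-level stand-in for a decision tree: score each (property, value) pair of a job by its failure rate and report the strongest association. All job records and property names below are invented for illustration.

```python
# Hypothetical sketch of "explaining" grid job failures: find the
# (property, value) pair with the highest observed failure rate,
# a single-split caricature of the decision-tree mining described above.
from collections import defaultdict

def explain_failures(jobs):
    """jobs: list of (properties_dict, failed_bool)."""
    counts = defaultdict(lambda: [0, 0])  # (prop, val) -> [failures, total]
    for props, failed in jobs:
        for key, val in props.items():
            counts[(key, val)][1] += 1
            if failed:
                counts[(key, val)][0] += 1
    # return the property/value pair with the highest failure rate
    return max(counts, key=lambda k: counts[k][0] / counts[k][1])

jobs = [
    ({"host": "n1", "exe": "blast"}, False),
    ({"host": "n2", "exe": "blast"}, True),
    ({"host": "n2", "exe": "sim"}, True),
    ({"host": "n1", "exe": "sim"}, False),
]
culprit = explain_failures(jobs)
```

Here every job on host `n2` failed, so the tool would point an administrator at that machine first.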
Problem: most projections of CMOS technology foresee an ultimate limit of about 0.05-micron feature sizes in roughly ten years. The QCA solution: use a new technology, quantum-dot cellular automata (QCA), to build real computers orders of magnitude denser than the limits of CMOS from molecular-scale devices in which information is moved by Coulombic interactions rather than current flow.
Technologies for large-scale data collection and automatic transcription and word alignment in endangered and unwritten languages. Sponsored by the National Science Foundation.
This project is developing an integrated Wireless Phone Based Emergency Response System (WIPER) that is capable of real-time monitoring of normal social and geographical communication and activity patterns of millions of wireless phone users, recognizing unusual human agglomerations, potential emergencies and traffic jams. WIPER will select from these massive data streams high-resolution information in the physical vicinity of a communication or traffic anomaly, and dynamically inject it into an agent-based simulation system to classify and predict the unfolding of the emergency in real time. The agent-based simulation system will dynamically steer local data collection in the vicinity of the anomaly. Multiple distributed data collection, monitoring, analysis, simulation and decision support modules will be integrated using a Service Oriented Architecture (SOA) to generate traffic forecasts and emergency alerts for engineering, public safety and emergency response personnel.
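The anomaly-recognition step can be caricatured by a simple statistical rule: flag a cell when its current call volume deviates far from its recent history. The window, threshold, and data below are assumptions for illustration, not WIPER's actual detection model.

```python
# Illustrative only: flag an "unusual agglomeration" when call volume
# near one tower deviates strongly from its running statistics
# (a plain z-score rule; thresholds are invented for this sketch).
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """history: past per-interval call counts for one cell tower."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

normal_load = [100, 104, 98, 101, 97, 103, 99]
```

A sudden spike (say, 500 calls in one interval) would trip the detector and could then be handed to the agent-based simulation for classification.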
This project consists of an interdisciplinary team of environmental (biology, chemistry, geology) and IT scientists that is developing a stochastic model for the time-dependent evolution of NOM in the environment. The scientific objectives are to produce both a new methodology and a specific program for predicting the properties of NOM over time as it evolves from precursor molecules to eventual mineralization. The methodology being developed is a mechanistic, stochastic simulation of NOM transformations, including biological and non-biological reactions, as well as adsorption, aggregation and physical transport. It employs recent advances in agent-based simulation, web-based deployment of scientific applications, a collaboratory for sharing simulations and data, and scalable web-based database management systems to improve the reliability of the stochastic simulations and to facilitate analysis of the resulting large datasets using datamining techniques.
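A mechanistic, stochastic simulation of molecular transformations is often built on Gillespie-style kinetics; the minimal sketch below simulates a single invented first-order reaction A -> B and is only an illustration of the simulation style, not the NOM model itself.

```python
# A minimal Gillespie-style stochastic simulation step. The reaction
# A -> B and its rate constant are invented for illustration.
import random

def gillespie_decay(n_a, k, t_end, rng):
    """Simulate first-order decay A -> B with rate constant k until t_end."""
    t, n_b = 0.0, 0
    while n_a > 0:
        propensity = k * n_a
        t += rng.expovariate(propensity)  # exponential wait to next event
        if t > t_end:
            break
        n_a -= 1
        n_b += 1
    return n_a, n_b

rng = random.Random(42)
remaining, produced = gillespie_decay(100, 0.5, 4.0, rng)
```

The real NOM simulator tracks many reaction channels plus adsorption, aggregation, and transport, but each stochastic step has this same propensity-then-sample shape.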
A collaborative research project aimed at developing scheduling algorithms that minimize the energy/power consumption of real-time embedded systems. Techniques being considered include Dynamic Voltage Scaling (DVS), Dynamic Frequency Scaling (DFS), and Sleep Mode Control (SMC).
Faculty: Chen, Hu
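The intuition behind DVS can be shown with a back-of-the-envelope model: dynamic power scales roughly as C * V^2 * f, and if supply voltage is assumed to scale linearly with frequency, running a job slower (while still meeting its deadline) cuts the energy for a fixed cycle count quadratically. The constants below are placeholders, not measured values.

```python
# Back-of-the-envelope DVS illustration. Assumes P = C * V^2 * f and
# V = v_per_hz * f, so energy for a fixed workload scales as f^2.

def dynamic_energy(cycles, freq, c=1.0, v_per_hz=1.0):
    """Energy to execute `cycles` at `freq` under the linear-V assumption."""
    voltage = v_per_hz * freq
    power = c * voltage ** 2 * freq      # dynamic power
    runtime = cycles / freq              # slower clock -> longer runtime
    return power * runtime               # = c * v_per_hz^2 * cycles * freq^2

full_speed = dynamic_energy(cycles=1e9, freq=1.0)
half_speed = dynamic_energy(cycles=1e9, freq=0.5)  # deadline permitting
```

Under these assumptions, halving the frequency quarters the energy, which is why exploiting slack in real-time schedules pays off.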
The fast emergence of interaction networks brings a rich source of information not only about the nodes in such networks, but also about the relationships between them. This notion of collective intelligence opens up a tremendous new source of data which can vastly improve decision making in many domains, be it advertising, the effectiveness of offering related products, or the ability to diagnose medical conditions. Such networks, however, carry enough information about their participants to compromise privacy: a node can expose not only its own information, but information about its neighborhood as well. The goal of this project is to extract the rich information available in product, social, and medical networks while preserving the privacy of their users. We allow users to control what information about them can be released, and we secure the data through cryptographic means.
Faculty: Blanton, Chawla
Theory and implementation of grammar formalisms for describing graphs for natural language semantics. Based on the 2014 JHU Workshop on Meaning Representations in Language and Speech Processing.
In this research, we design and implement prototypes of intelligent network edge devices such as routers, modems, or wireless base stations. The goal is to build sophisticated network and system management stations which exploit their location at the edge of a network and their ability to communicate directly with their end devices to provide efficient and centralized resource and network management functionalities.
Faculty: Chawla, Poellabauer, Striegel
This project studies the complex relationships between resources and the effect resource adaptation has on application performance. This work is driven by the insight that careless (non-cooperative) adaptation of multiple resources or multiple communicating devices can lead to sub-optimal savings in resource utilization or degraded application performance. These effects are often difficult to capture with theoretical models alone, thereby requiring extensive experimental studies.
Many problems in science and engineering can only be solved by harnessing large collections of computers called clusters, clouds, or grids. Unfortunately, these systems are very challenging to use, particularly for data-intensive applications. To address this, our lab is designing new languages and systems that allow end users to easily specify and execute workloads that run on hundreds of processors.
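The kind of specification such a system accepts can be sketched as a set of tasks with declared dependencies, executed in topological order. This toy runs everything locally in one process; a real engine would dispatch each task to a cluster or grid. Task names and functions are invented for illustration.

```python
# Hypothetical workload sketch: tasks declare their inputs, and a
# scheduler runs them in dependency order (graphlib needs Python 3.9+).
from graphlib import TopologicalSorter

def run_workflow(tasks):
    """tasks: dict name -> (list of dependency names, callable)."""
    graph = {name: deps for name, (deps, _) in tasks.items()}
    results = {}
    for name in TopologicalSorter(graph).static_order():
        deps, fn = tasks[name]
        results[name] = fn(*[results[d] for d in deps])
    return results

tasks = {
    "split": ([], lambda: [1, 2, 3, 4]),
    "left":  (["split"], lambda xs: sum(xs[:2])),
    "right": (["split"], lambda xs: sum(xs[2:])),
    "join":  (["left", "right"], lambda a, b: a + b),
}
results = run_workflow(tasks)
```

Because `left` and `right` share no dependency on each other, a distributed scheduler would be free to run them on different processors simultaneously.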
Models and algorithms for translation and language modeling using neural networks.
With the current trend of rapidly increasing CPU speeds and ballooning RAM capacities, the bottleneck between the processor and main memory is becoming more and more costly. PIM is an attempt to solve this problem by combining processor and memory macros on a single chip. The benefits of such an architectural shift include very high bandwidth and multi-processor scaling capabilities. These possibilities and more are being enthusiastically explored by our group.
Faculty: Brockman, Kogge
We are developing multiscale methods for simulation of proteins and other biological molecules. These methods are useful for understanding important post-human-genome biological questions, such as the folding pathways of proteins or the relationship between structure and function. Our goal is to provide algorithms that scale with system size and simulation length. We also provide efficient and extendible implementations of these algorithms using a high performance object-oriented framework called ProtoMol. This project is funded by the National Science Foundation.
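At the core of any molecular dynamics code is a symplectic time integrator; the sketch below shows a generic velocity-Verlet step on a toy harmonic oscillator. It illustrates the standard numerics only, not ProtoMol's actual multiscale algorithms.

```python
# Generic velocity-Verlet integration (standard in molecular dynamics),
# demonstrated on a 1-D harmonic oscillator with k = m = 1.

def velocity_verlet(x, v, force, mass, dt, steps):
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / mass) * dt * dt   # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / mass * dt          # velocity half-steps
        f = f_new
    return x, v

# F = -k x with k = 1; integrate roughly one full period (2*pi)
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, 1.0, 0.001, 6283)
```

Verlet-family integrators are favored in MD because they conserve energy well over long runs, which matters when simulation length must scale with system size.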
The objective of this project is to overcome the limitations of mobile wireless devices by allowing them to spontaneously request access to resources (such as storage, CPU cycles, or network bandwidth) and information (e.g., obtained from sensors) residing on wireless peers in their proximity. The project addresses the resource-efficient and reliable discovery and access of such resources and information.
We are creating a series of cloud-enabled bioinformatics tools through Biocompute, which is a web-based tool that leverages grid-computing resources for solving large bioinformatics problems. Biocompute divides large bioinformatics jobs into hundreds of smaller jobs that are sent to either a large collection of personal computers or an external cloud. This drastically reduces the time required to complete desired tasks. Further, each Biocompute user has a personal workspace where he or she can store and share files and results with others. Users can create custom databases to run jobs on or they can use public databases. Biocompute is maintained by the Cooperative Computing Lab and is supported by the Bioinformatics Core Facility at the University of Notre Dame.
Faculty: Emrich, Thain
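The split step can be sketched as partitioning the records of a FASTA file into roughly equal chunks, each of which could become an independent job. The record format is standard FASTA; the round-robin policy and example sequences are assumptions for illustration, not Biocompute's actual implementation.

```python
# Illustrative split step: divide FASTA records round-robin into
# n_chunks pieces that could each be submitted as a separate job.

def split_fasta(text, n_chunks):
    records = [">" + r for r in text.split(">") if r.strip()]
    chunks = [[] for _ in range(n_chunks)]
    for i, record in enumerate(records):
        chunks[i % n_chunks].append(record)  # round-robin assignment
    return ["".join(c) for c in chunks]

fasta = ">seq1\nACGT\n>seq2\nGGCC\n>seq3\nTTAA\n"
chunks = split_fasta(fasta, 2)
```

Each chunk can then be searched against the same database in parallel, and the per-chunk results concatenated at the end.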
This project allows resource-constrained devices (limited in battery life or computational capability) to use external computing resources such as supercomputers or grids to carry out their extensive computational tasks in a secure and reliable way. Secure computation means that the powerful helper machines learn nothing about the data being processed while helping a weak device compute its task; reliable computation means that the device can verify that the computation was performed correctly without recomputing the task itself. We develop cryptographic techniques to securely outsource different types of computation, such as biometric and DNA comparisons. We also exploit the fact that helper servers do not obtain access to the data they are processing to let the device detect deviations from the prescribed computation more effectively.
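The reliable-computation idea can be illustrated with a classic textbook tool, Freivalds' randomized check: a weak client verifies a claimed matrix product C = A*B with cheap matrix-vector products instead of an expensive full recomputation. This is a generic illustration of verification-without-recomputation, not this project's actual protocols.

```python
# Freivalds' check: verify a claimed product C = A*B in O(n^2) per trial.
# A wrong answer is caught with probability >= 1 - 2**(-trials).
import random

def freivalds_check(a, b, c, trials=20, rng=random.Random(0)):
    n = len(a)
    for _ in range(trials):
        r = [rng.randrange(2) for _ in range(n)]          # random 0/1 vector
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False  # server's answer is definitely wrong
    return True  # correct with high probability

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # the true product A*B
bad = [[19, 22], [43, 51]]    # one corrupted entry
```

The client does O(n^2) work per trial versus O(n^3) to recompute the product, which is exactly the asymmetry a weak device outsourcing to a strong server needs.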
The objective of this project is to develop techniques for the flexible and efficient collaboration among sensors in a wireless sensor network, including techniques for energy-efficient real-time routing of sensor data or resource-efficient in-network data aggregation and fusion.
Faculty: Chawla, Poellabauer
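In-network aggregation can be sketched as each node forwarding only a partial (sum, count) pair up a routing tree instead of raw readings, so the sink reconstructs the network-wide average from a handful of messages. The topology and readings below are invented for illustration.

```python
# Toy in-network aggregation over a routing tree: every node combines
# its own reading with its children's partial aggregates before
# forwarding, minimizing radio traffic toward the sink.

def aggregate(tree, readings, node):
    """tree: parent -> list of children; returns (sum, count) at `node`."""
    total, count = readings[node], 1
    for child in tree.get(node, []):
        s, c = aggregate(tree, readings, child)
        total += s
        count += c
    return total, count

tree = {"sink": ["a", "b"], "a": ["c"]}
readings = {"sink": 0, "a": 20, "b": 30, "c": 10}
total, count = aggregate(tree, readings, "sink")
average = total / count
```

Each link carries one fixed-size pair regardless of subtree size, which is the energy win over shipping every raw reading to the sink.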
We develop tools that simplify the design and implementation of high-performance software, utilizing object-oriented and generic programming in languages such as C++ and Eiffel. Examples are tools that automatically generate tests from semi-formal specifications that are part of the code itself; here we use Eiffel and design by contract. We are also building self-adaptive programs that can choose the best algorithms and parameters for a particular problem at run time.
This research project seeks to understand the free/open source software (F/OSS) phenomenon and to predict the pattern of growth exhibited by F/OSS projects over time. The F/OSS community is a genuine behavioral and technical puzzle, one with significant, far-reaching impact on the world's economy. The F/OSS community has developed a substantial amount of the infrastructure of the Internet, and has several outstanding technical achievements, including the most popular web server (Apache), the most popular scripting language (Perl), and an operating system that successfully competes with Windows (Linux). These programs were written, developed, and debugged largely by part-time contributors, who in most cases were not paid for their work, and without the benefit of any traditional project management techniques. We are developing a conceptual model to explain the motivations and key work processes underlying this extraordinary phenomenon. Our preliminary analyses indicate that the F/OSS community can be usefully modeled as a social network, one that has the characteristics of a self-organizing, emergent system. Drawing on the social psychological theories of motivation, self-managing teams, and communication, we are creating a model of the social and task characteristics that predict the emergent properties of the system.
Mobile teams such as first responders and military units require the ability to communicate in sensor-rich environments using short-range links without the benefit of a communications infrastructure. We are constructing a mobile handheld system that allows for such teams to navigate and collaborate by forming ad-hoc wireless networks. Users carry handheld devices that report their location and situation to nearby peers, allowing participants as well as commanders a wide-area view of a situation. We are exploring the interaction of technology and sociology in these unreliable but data-rich environments.
Faculty: Chawla, Poellabauer, Thain
Mind wandering and zoning out are common phenomena that people experience while performing effortful cognitive activities. For example, it is estimated that people unintentionally mind wander approximately 1/3 of the time when reading text. Can we automatically identify the instant when a person starts to mind wander? To investigate this question, we are developing systems that diagnose when a person is mind wandering during reading by analyzing the dynamic integration of textual features with eye gaze patterns. A number of short and long-term projects that focus on eye tracking and other methods to detect mind wandering are available.
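The gaze-based detection idea can be caricatured with a toy scoring rule: long fixations and off-text gaze both suggest disengagement from the words on the page. The features, weights, thresholds, and data below are all invented for illustration and are not the project's validated model.

```python
# Purely hypothetical sketch: score a reading window for possible
# mind wandering from mean fixation duration and off-text gaze rate.

def mind_wandering_score(fixations):
    """fixations: list of (duration_ms, on_text_bool) for one window."""
    mean_dur = sum(d for d, _ in fixations) / len(fixations)
    off_text = sum(1 for _, on in fixations if not on) / len(fixations)
    # long fixations and off-text gaze both raise the score (weights invented)
    return 0.5 * min(mean_dur / 1000, 1.0) + 0.5 * off_text

focused = [(200, True), (180, True), (220, True), (210, True)]
zoned = [(900, False), (850, True), (950, False), (800, False)]
```

A deployed detector would integrate such gaze features with textual features over time rather than rely on a fixed hand-tuned rule.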
Models and algorithms for translation, word alignment, and bilingual lexicon induction from parallel and non-parallel texts. Sponsored by DARPA LORELEI and a Google Faculty Research award.
VectorBase is a Bioinformatics Resource Center designed to store and augment genomes of all arthropod vectors of human pathogens that represent either potential agents of bioterrorism or that transmit important emerging infectious diseases. VectorBase currently contains the genomes and associated information for three mosquito species, the tick, and the body louse. Twenty other organisms will soon be included, with hundreds more in the pipeline over the next 5 years of funding. VectorBase is a well-known bioinformatics resource in the arthropod research community and also interacts closely with a number of other bioinformatics efforts, most notably with EBI, GMOD, GO, RefSeq, UCSC, and the other NIAID-supported BRCs. Further, VectorBase will be responsible for providing bioinformatics support for four included "Driving Biological Projects," which will generate cutting-edge experimental data to which both graduate students and staff will contribute.
Faculty: Collins, Emrich, Madey