This may sound insincere or even hypocritical, but I really do have a dream of using the great power of computer science and data to improve humanity's wellbeing. Our bodies are just so fragile and weak. They refuse to survive traffic accidents, deteriorate after 12 hours of continuous work, and crash under long-term unhealthy diets. It took me only one year to develop myopia, two years to reach a diagnostic confirmation of IBS, and perhaps forever to accept that the life of one of my beloved family members has been permanently disrupted by breast cancer. Maybe this is a naive thought, but I believe we should aid and care for our weak mortal bodies with our strongest power: computer science.

Visual Correction Display

In this research, we worked on improving the hardware and software aspects of Professor Barsky's approach to building a corrective visual display for people with visual aberrations (myopia, hyperopia, and so on). Instead of asking patients to wear corrective lenses, our algorithm "distorts" the images and text rendered on electronic screens so that people with these conditions can see them sharply.
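To make the idea concrete, here is a minimal sketch of one way such prefiltering can work, not the project's actual implementation: if the viewer's defocus blur is approximated by a Gaussian point spread function (PSF), the displayed image can be Wiener-deconvolved so that the eye's own blur re-sharpens it. The Gaussian PSF, its width `sigma`, and the regularization constant `k` are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian approximation of the eye's defocus blur kernel."""
    h, w = shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def prefilter(image, sigma=3.0, k=0.01):
    """Wiener-deconvolve a grayscale image in [0, 1] so that it appears
    sharp after being blurred again by the viewer's (Gaussian) PSF."""
    psf = np.fft.ifftshift(gaussian_psf(image.shape, sigma))
    H = np.fft.fft2(psf)
    # Regularized inverse filter: the constant k keeps frequencies the
    # PSF nearly destroys from being amplified into ringing artifacts.
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    corrected = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(corrected, 0.0, 1.0)
```

A known limitation visible in this sketch: the deconvolved image can exceed the display's dynamic range, and clipping it back (the `np.clip` above) costs contrast.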

As a freshman undergraduate, I was very fortunate to be selected into this team of master's students. Over this year of research, I worked with the parallelization team on parallelizing the VCD algorithms to improve their performance. I also developed an Android app by porting the native C++ codebase to Java, which allows the algorithms to be benchmarked on lower-end devices.
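Purely as a hypothetical illustration of the parallelization pattern (the actual work was in the C++ codebase; `prefilter()` below refers to the sketch above), the per-pixel work can be split into image strips and distributed over a process pool:

```python
from multiprocessing import Pool

import numpy as np

def _filter_strip(args):
    strip, sigma = args
    return prefilter(strip, sigma)  # prefilter() from the sketch above

def prefilter_parallel(image, sigma=3.0, n_strips=4):
    """Filter horizontal strips of the image on separate processes.
    NOTE: a real version pads each strip with overlap and crops it back,
    since FFT-filtering each strip in isolation leaves visible seams."""
    strips = np.array_split(image, n_strips, axis=0)
    with Pool() as pool:
        out = pool.map(_filter_strip, [(s, sigma) for s in strips])
    return np.vstack(out)
```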

Auditory Attention Detection

In this research, we quantified a major limitation in the effectiveness of existing auditory attention decoding algorithms using a data science approach in MATLAB. This research was done at the Institute for Infocomm Research, affiliated with the Agency for Science, Technology and Research (A*STAR), Singapore.

Auditory attention decoding has an important application in cochlear implants. People who must rely on cochlear implants have difficulty differentiating one voice from another in an environment of competing speakers, because the implants cannot do what most people do with ease: pick out and attend to a single voice. The approach we investigated seeks to decode which voice a subject is trying to attend to by collecting and analyzing their electroencephalogram (EEG) data, brain signals that can be recorded non-invasively. If this succeeds, the functionality could be built into cochlear implants to provide better assistance.
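The classifiers we actually trained are described in the abstract below, but a common baseline from the auditory attention decoding literature gives the flavor of the final decision: a linear decoder, trained beforehand, reconstructs an estimate of the attended speech envelope from the EEG, and the decision picks whichever competing speaker's envelope correlates better with it. The decoder shape here is simplified for illustration.

```python
import numpy as np

def decide_attended(eeg, env_a, env_b, decoder):
    """eeg: (channels, samples); env_a/env_b: (samples,) speech envelopes;
    decoder: (channels,) weights learned on training data.
    Real decoders regress over time-lagged EEG, not one weight per channel."""
    reconstructed = decoder @ eeg  # estimate of the attended envelope
    r_a = np.corrcoef(reconstructed, env_a)[0, 1]
    r_b = np.corrcoef(reconstructed, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```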

Here is the abstract of our paper.

Abstract: Rationale. Brain signals measured using Electroencephalogram (EEG) vary with the speech being attended to. This paper investigates the feasibility of using EEG-based Brain-Computer Interface (BCI) to detect attended speech. Approach. EEG data recorded from subjects listening to two different sets of speeches were collected: “ideal” speeches involved different-gender fiction narration without background noises and shifts in volume, speed and tonality, and “realistic” speeches from lecture recordings with background noise and occasional music. The EEG data collected were then pre-processed by temporal filtering into low frequency bands (2-15 Hz) and spatial filtering with Common Average Reference. The speech data presented to the subjects were also pre-processed by extracting Hilbert envelope from each sub-band. These data were divided into 30s trials and used to train intra-subject classifiers to detect the attended speech using EEG data. Results. Leave-one-out cross-validation on intra-subject accuracy yielded an average of about 65% for “ideal” speeches and about 50% for “realistic” speeches. The results from ideal speeches showed promise that EEG-based BCI can be used to detect attended speech, which provided a starting point for research in other auditory representation methods and also future potential applications in cochlear implants development. However, the lackluster results from realistic speeches showed that more research in developing robust methods to detect attended speech is required. (Original title: Auditory Attention Detection in More Realistic “Cocktail Party” Scenarios Using Electroencephalogram Signals)
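For readers curious what the preprocessing described in the abstract looks like in practice, here is a rough sketch with NumPy/SciPy. The sampling rate `FS`, the filter order, and the single-band envelope are my assumptions for illustration; the paper extracted Hilbert envelopes per sub-band.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # Hz; assumed EEG/envelope sampling rate

def preprocess_eeg(eeg):
    """eeg: (channels, samples). Band-pass 2-15 Hz, then re-reference
    each sample to the Common Average Reference (mean over channels)."""
    b, a = butter(4, [2, 15], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, eeg, axis=1)
    return filtered - filtered.mean(axis=0, keepdims=True)

def speech_envelope(audio):
    """Hilbert envelope of a (samples,) speech waveform (single band
    here; the paper did this per sub-band)."""
    return np.abs(hilbert(audio))

def to_trials(x, trial_s=30):
    """Cut (..., samples) into non-overlapping 30-second trials."""
    step = FS * trial_s
    n_trials = x.shape[-1] // step
    return [x[..., i * step:(i + 1) * step] for i in range(n_trials)]
```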