REU 2020

Program Overview:

The 2020 REU program is funded by the NSF and brings together undergraduate students from different universities to collaborate on research projects. At the end of the ten-week program, each project group prepares a poster, a research paper, and a final presentation.

Program duration: 5/26/2020 (Tuesday) - 7/31/2020 (Friday)

Faculty mentors:

Dr. Prasad Calyam is an Associate Professor in the Electrical Engineering and Computer Science Department. He also serves as the Director of MU’s Cyber Education, Research and Infrastructure Center. He is an expert in computer networking, cloud computing, cyber security, multimedia applications, and network measurement. He has published over 140 peer-reviewed publications in reputed conferences and journals, and has led the development of several open-source software packages. He has graduated 4 PhD students, supervised 5 Postdocs, and is currently supervising 4 PhD students. He has recruited, funded, and mentored over 50 graduate students and 40 undergraduate researchers. He is a Senior Member of IEEE. His research sponsors include: NSF, DOE, ARL, VMware, Cisco, Dell, Verizon, IBM, Huawei, Internet2, and others.

Dr. Yi Shang is a Professor and the Director of Graduate Studies in the Electrical Engineering and Computer Science Department. He has extensive research experience on wireless sensor networks, mobile computing, and artificial intelligence, and has published over 190 journal and conference papers and received 6 patents. He has graduated 9 Ph.D. students and over 60 M.S. students and supervised over 50 undergraduate researchers. He is currently supervising 9 Ph.D., 5 M.S., and 3 undergraduate students. His research has been supported by NSF, NIH, US Army, DARPA, Microsoft Research, Raytheon and Missouri Dept. of Conservation. He helped coordinate previous REU programs as a faculty mentor and Program Director.

Dr. Marjorie Skubic is a Professor and the Director of the MU Center for Eldercare and Rehabilitation Technology (CERT), with projects funded by the NSF, NIH, NLM, AHRQ, and U.S. Administration on Aging. Several CERT projects have focused on monitoring older adults through a network of sensors placed in the home, including PIR motion sensors, a bed sensor that captures pulse, respiration, and sleep restlessness, and gait analysis systems using Kinect depth cameras, webcams, and radar. She has published over 250 papers, currently supervises 10 graduate students, has graduated 12 Ph.D. students and 28 M.S. students, and has worked with over 150 undergraduate researchers. Many have continued on with graduate studies.

Dr. Kannappan Palaniappan is a Professor in the Electrical Engineering and Computer Science Department. He is the Director of Computational Imaging & VisAnalysis (CIVA), with projects funded by NSF, Army Research Lab, Air Force Research Laboratory, NASA, and NIH. He served as a National Academies Jefferson Science Fellow and received the NASA Public Service Medal. His research covers computer vision, high performance computing, data science, and biomedical image analysis. He has published over 250 peer-reviewed publications. He is currently supervising 8 Ph.D. students and 2 Research Scientists, and has mentored over 15 Postdocs, 11 PhD students, 27 MS students, and over 30 undergraduate students.

Evaluator, Dr. Jane Howland is a Teaching Professor and the Learning Technologies Program Director for the School of Information Science & Learning Technologies. She is currently working with 6 Ph.D. students and advises all Master's and Educational Specialist students in the Learning Technologies’ Online Education emphasis area. In her role as Learning Technologies Program Director, she is responsible for data collection, analysis, and program evaluation.

Graduate Student Coordinator:

Vaibhav Akashe - University of Missouri, Columbia

Vaibhav Akashe received his Bachelor of Engineering degree in Information Technology from Savitribai Phule Pune University, India in 2016. He is currently pursuing his MS degree in Computer Science at the University of Missouri-Columbia. His current research interests include Cloud Computing, Cloud Security for AR/VR, Virtual Reality, Cyber Security, and Machine Learning.

Student Researchers:

Alec Lee James - University of California, Berkeley

Alexander Riddle - University of Missouri-Columbia

Angel Herrera Flores - University of Puget Sound

David Falana - Rutgers University

Helen Chen - University of Maryland, College Park

John Kenneth Lewis - Florida Southern College

Maxwell Chappell - Truman State University

Michael Erwin Fisher - Columbia College

Robert Ignatowicz - Stony Brook University

Sarah Elizabeth Emerson - Samford University

Work schedule
  1. MTuThF: 8 hours, 9 am-12 pm and 1-6 pm, virtually.
  2. You are required to sign the time sheet posted virtually every day.
  3. Use the Microsoft Teams channel to interact virtually with the REU Site group.
Weekly program meeting: every Tuesday, 10 am-12 pm or 2-4 pm, via Zoom. All mentors are welcome to attend.

Weekly research journal of each team due: 5 pm every Friday; report daily research activities.


Program schedule:

Orientation day:
  • 10:00 am - 10:30 am: Orientation & Welcome
  • 10:30 am - 11:00 am: Student Introductions
  • 11:00 am - 12:30 pm: REU Project Presentations by REU mentors
  • 3:00 pm - 4:00 pm: Project Team Assignments
  • 4:00 pm - 5:00 pm: Pre-survey, paperwork, self-study

Research training sessions:
  • 10:00 am - 11:30 am: Why & how to do research, by Dr. Calyam
  • 2:00 pm - 4:00 pm: How to read and write research papers; how to prepare and give a technical presentation; computer science ethics; IRB training, by Dr. Shang
  • 10:00 am - 11:30 am: Software-Defined Networking, by Dr. Calyam
  • 1:00 pm - 2:30 pm: Computational Imaging & VisAnalysis, by Dr. Palaniappan
  • 10:00 am - 11:30 am: Eldercare and Rehabilitation Technology for Better Health, by Dr. Skubic

Weekly meetings, tech talks, and events:
  • 10:00 am - 12:00 pm: Weekly program meeting - project assignment, led by Dr. Calyam. All graduate mentors are encouraged to attend.
  • 2:00 pm - 3:00 pm: Tech Talk by Dr. Calyam
  • 1:00 pm - 3:00 pm: MU Data Center Virtual Tour by Bill McIntosh and Tim Middelkoop
  • 10:00 am - 12:00 pm: Weekly program meeting - team project presentations (project goals, milestones, OKRs, initial literature survey results), led by Dr. Palaniappan
  • 2:00 pm - 3:00 pm: Tech Talk by Dr. Palaniappan
  • 8:30 am - 9:20 am: Virtual Visit to Columbia Public Schools EEE summer school, grades 6-8, coordinated by Ms. Heidi Barnhouse
  • 10:00 am - 12:00 pm: Weekly program meeting, led by Dr. Shang
  • 2:00 pm - 3:00 pm: Tech Talk, led by Dr. Shang
  • 1:00 pm - 3:00 pm: IQT Labs Virtual Tour
  • 10:00 am - 12:00 pm: Weekly program meeting, led by Dr. Skubic
  • 2:00 pm - 3:00 pm: Tech Talk, led by Dr. Skubic
  • 10:00 am - 12:00 pm: Weekly program meeting - midterm presentations (midterm report in IEEE LaTeX style due), led by Dr. Calyam
  • 10:00 am - 12:00 pm: Weekly program meeting, led by Dr. Palaniappan
  • 10:00 am - 12:00 pm: Weekly program meeting, led by Dr. Skubic
  • 10:00 am - 12:00 pm: Weekly program meeting (abstract submission for Poster Forum), led by Dr. Shang
  • 10:00 am - 12:30 pm: Final presentations (final report in IEEE LaTeX style due), led by Dr. Calyam
  • 10:00 am - 3:30 pm: Poster Forum (MU Summer Undergraduate Research and Creative Achievements Forum)
  • 10:00 am - 12:00 pm: Post-survey, wrap-up

NOTE: We encourage students to consider submitting their final project reports as papers (full papers, or short papers) to venues such as: the annual National Workshops for REU Research in Networking and Systems (REUNS), IEEE CCNC, IEEE ICNC, IEEE NCA and others with July/August 2020 deadlines.

Computer networks basics

Research advice

Software-Defined Networking related materials

Computer Science Ethics

IRB training (to receive IRB certification):

  • Responsible Conduct of Research courses
  • Human Subject Research courses

LaTeX resources

Prior REU project examples as best practices to follow:

  • Attack-Defense and Performance Adaptations for Social Virtual Reality Learning Environments
    Advisors: Dr. Prasad Calyam / Dr. Khaza Hoque
    Graduate Student Mentors: Samaikya Valluripally, Vaibhav Akashe
    Research Students: David Falana, Michael Fisher
    Abstract: A social virtual reality learning environment (VRLE) allows one to be virtually present in an immersive manner and increases accessibility to remote learning. VRLE applications in critical domains (e.g., military training, medicine, education) demand continuous streamlining of data delivery along with immersiveness for the users. Failure to maintain robustness and high performance in such socio-technical systems can severely disrupt users with physical disabilities (e.g., by inducing cybersickness) and application functionality (e.g., content delivery issues). In this paper, we present a novel adaptive framework that jointly tunes performance and robustness factors using a `DevSecOps' paradigm for a social VRLE application. Using a social VRLE application case study, viz., vSocial, we characterize the robustness factors {Security, Privacy and Safety (SPS)} and performance factors {Quality of Application, Quality of Service, Quality of Experience (3Q)} that affect user experience and cybersickness. To this end, we develop an anomaly-monitoring tool to collect anomaly data and classify it into 3Q and SPS factors in our proposed framework. Next, we utilize a novel decision module that relies on dynamic decision making for suitable adaptation using quantifiable metrics, i.e., suitability metric, cost, cybersickness, and response time. To facilitate an iterative adaptive control loop mechanism in our proposed framework, we use a priority-based queuing model to determine the state of the VRLE application, reduce waiting delays, and incorporate adaptations for the most severe SPS/3Q anomaly events before they can compromise user safety (i.e., induce more cybersickness). Based on our experimental results in an AWS testbed setup, we list best practices to implement for a range of simulated SPS/3Q anomaly events in realistic social VRLE applications.
    Our results also detail the benefits of our proposed adaptive control loop based framework through a trade-off analysis of our priority queuing model against state-of-the-art approaches, in terms of performance overhead and usability metrics (response time, cybersickness). Lastly, we show the effectiveness of our framework for several SPS/3Q scenarios and illustrate the impact of the incorporated adaptations on cost, resource usage, and cybersickness metrics. Based on our results, we demonstrate how our solution decides on mitigation strategies dynamically, thereby enabling a more secure and safer operational social VRLE.
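The priority-queuing idea in the abstract above can be illustrated with a minimal sketch. The event types, severity ranks, and class names below are hypothetical; the actual framework classifies anomalies into SPS/3Q factors with its own monitoring tool and decision module.

```python
import heapq

# Hypothetical severity ranking for SPS/3Q anomaly events (higher = more severe).
SEVERITY = {"safety": 3, "security": 2, "privacy": 2, "qos": 1, "qoe": 1, "qoa": 1}

class AnomalyQueue:
    """Priority queue that serves the most severe anomaly event first;
    ties are broken by arrival order (FIFO)."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # arrival order, for stable FIFO tie-breaking

    def push(self, event_type, detail):
        # heapq is a min-heap, so negate severity to pop the most severe first.
        heapq.heappush(self._heap,
                       (-SEVERITY[event_type], self._counter, event_type, detail))
        self._counter += 1

    def pop(self):
        _, _, event_type, detail = heapq.heappop(self._heap)
        return event_type, detail

q = AnomalyQueue()
q.push("qos", "frame-rate drop")
q.push("safety", "cybersickness threshold exceeded")
q.push("privacy", "unauthorized avatar tracking")
print(q.pop()[0])  # the safety event is adapted first
```

Serving the most severe event first is what lets the control loop apply adaptations before the worst anomalies disrupt user safety.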

  • Development of Multi-Drone Coordinated Path Planning Methods for Aerial Image Collections for Wetland Monitoring
    Advisor: Dr. Yi Shang
    Graduate Student Mentors: Robert Tang, Yang Zhang
    External Mentors: Joel Sartwell, Andy Raedeke
    Research Students: Alec Lee James, Angel Herrera Flores
    Abstract: Drone technology has proven helpful in automating various monotonous tasks, ranging from search and rescue to crop monitoring. In our case, we wish to extend a dynamic-height single-drone algorithm for area coverage path planning to multiple drones. We propose two algorithms that plan both paths and height management for a team of quadrotor drones tasked with spotting and counting birds in various distributions within an area enclosed by an arbitrary polygon. We propose a solution for the case in which bird locations are known to follow a certain set of density distributions: we split the area into high-density and low-density sub-regions to be traversed differently from one another. Our cooperative approaches aim to reduce the time it takes to cover all the birds while increasing counting accuracy, compared to a single-drone approach and a naive multi-drone approach that does not adapt to density. We measure utility through counting accuracy and the time and energy spent.
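As a rough illustration of density-adaptive coverage (not the authors' algorithm; the rectangular region bounds and row spacings below are made up), a boustrophedon sweep with finer row spacing over the high-density sub-region might be sketched as:

```python
def boustrophedon(x_min, x_max, y_min, y_max, spacing):
    """Back-and-forth sweep waypoints over a rectangle;
    row spacing controls coverage density."""
    waypoints, y, left_to_right = [], y_min, True
    while y <= y_max:
        row = [(x_min, y), (x_max, y)]
        waypoints.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right  # alternate sweep direction each row
        y += spacing
    return waypoints

# Hypothetical split of a survey area: the high-density sub-region gets
# fine spacing (for accurate counts); the low-density one, coarse spacing.
high = boustrophedon(0, 100, 0, 40, spacing=10)
low = boustrophedon(0, 100, 50, 100, spacing=25)
```

A multi-drone planner would then assign each sub-region's waypoint list to a different drone, with height management layered on top.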

  • Truth, Trust and Transparency in Synthetic Media
    Advisors: Dr. Prasad Calyam, Dr. Kannappan Palaniappan
    Graduate Student Mentor: Imad Eddine Toubal
    Sponsors: Vishal Sandesara, Zigfried Hampel-Arias, Michael Lomnitz
    Research Students: Helen Chen, John Kenneth Lewis
    Abstract: Authenticity of digital media has become an ever-pressing necessity for modern society. Since the introduction of Generative Adversarial Networks (GANs), synthetic media has become increasingly difficult to identify. Synthetic videos that contain altered faces and/or voices of a person are known as deepfakes and threaten trust and privacy in digital media. Deepfakes can be weaponized for political advantage, slander, and to undermine the reputation of public figures. Despite imperfections of deepfakes, people struggle to distinguish between authentic and manipulated images and videos. Consequently, it is important to have automated systems that accurately and efficiently classify the validity of digital content. Many recent deepfake detection methods use single frames of video and focus on the spatial information in the image to infer the authenticity of the video. Some promising approaches exploit the temporal inconsistencies of manipulated videos; however, research primarily focuses on spatial features. We propose a hybrid deep learning approach that uses spatial, spectral, and temporal content that is naturally coupled in a consistent way to differentiate real and fake videos. In this work, we build a computationally efficient cloud-ready multimodal system to detect deepfake videos. We evaluate the performance of our proposed system compared to recent approaches, in terms of accuracy and speed, on the Facebook Deepfake Detection Challenge and FaceForensics++ video datasets.
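The actual system is a hybrid spatial/spectral/temporal deep learning model; the sketch below only illustrates the final frame-to-video aggregation step that many single-frame detectors use, with a hypothetical function name and threshold:

```python
def classify_video(frame_scores, threshold=0.5):
    """Aggregate per-frame fake probabilities into one video-level decision.
    Averaging exploits the fact that manipulations are often inconsistent
    across frames; a temporal model would replace this simple mean."""
    mean_score = sum(frame_scores) / len(frame_scores)
    return ("fake" if mean_score >= threshold else "real"), mean_score

label, score = classify_video([0.9, 0.8, 0.2])  # one noisy frame is outvoted
```

A model that looks at temporal inconsistencies directly, as the project proposes, can catch fakes whose individual frames all look plausible in isolation.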

  • Personalizing Health Messages in an Automated Health Alert System Using Deep Learning and Natural Language Processing
    Advisor: Dr. Marjorie Skubic
    Graduate Student Mentors: Anup Mishra
    Research Students: Maxwell Chappell, Sarah Elizabeth Emerson
    Abstract: Electronic health records (EHR) are complex and contain both structured (e.g., physiological measures) and unstructured (e.g., nursing notes) health data. Studies show that EHR nursing notes contain critical health information, including fall risk factors in older adults. Older adults age 65 and above are at higher risk of falls. Predicting fall risk early could give caregivers enough time to provide interventions. Several fall risk prediction models have been proposed in the literature; however, an exploration of fall risk prediction using nursing notes is missing. In this study, we explore deep learning architectures to predict fall risk in older adults using nursing notes in the EHR. In this IRB-approved study, we used EHR data obtained from 162 older adults at TigerPlace, a senior living facility located in Columbia, Missouri. The data included de-identified free-text nursing notes and medications. We pre-processed the data by keeping clinically relevant words. We used pre-trained word embedding models, specifically BioWordVec and GloVe, to train the models. We explored several deep neural architectures and evaluated them to test the effectiveness of each model in predicting future falls. Preliminary experiments show that the LSTM-based deep neural models were most effective in predicting future falls, with a sensitivity of 0.72, specificity of 0.67, and a prediction accuracy of 0.75. The model used six months of nursing note data to predict future falls in the next two months. We observed that deep learning models performed better in predicting future falls in a shorter time range as compared to falls in the distant future. In addition, the BioWordVec word embedding model was able to capture 17% more clinically relevant words in the text data when compared to GloVe. This exploratory analysis provides groundwork on the use of word embeddings in predicting fall risk from nursing notes.
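The vocabulary-coverage comparison between BioWordVec and GloVe can be illustrated with a toy sketch; the token list and vocabularies below are made up for illustration and do not reproduce the study's 17% figure.

```python
def vocab_coverage(note_tokens, embedding_vocab):
    """Return the fraction of distinct note tokens that an embedding
    vocabulary covers (tokens outside it get no pre-trained vector)."""
    tokens = set(note_tokens)
    return len(tokens & embedding_vocab) / len(tokens)

# Toy data: clinically relevant tokens from nursing notes, plus hypothetical
# vocabularies for a biomedical vs. a general-domain embedding model.
notes = ["gait", "unsteady", "ambulation", "fall", "dizziness", "walker"]
biomedical_vocab = {"gait", "unsteady", "ambulation", "fall", "dizziness", "walker"}
general_vocab = {"gait", "fall", "dizziness", "walker"}

extra_coverage = vocab_coverage(notes, biomedical_vocab) - vocab_coverage(notes, general_vocab)
```

Higher coverage means fewer clinically relevant words are dropped or mapped to out-of-vocabulary placeholders before the LSTM sees them, which is why the domain-specific embedding helped.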

  • Enhancing Network-edge Connectivity and Security in Drone Video Analytics
    Advisors: Dr. Prasad Calyam, Dr. Kannappan Palaniappan
    Graduate Student Mentors: Alicia Esquivel, Chengyi Qu, Deniz Kavzak Ufuktepe
    Research Students: Alexander Riddle, Robert Ignatowicz
    Abstract: Unmanned Aerial Vehicle (UAV) systems with high-resolution video cameras are used for many operations such as aerial imaging, search and rescue, and precision agriculture. Multi-drone systems operating in Flying Ad Hoc Networks (FANETs) are inherently insecure and require efficient, end-to-end security schemes to defend against cyber-attacks (i.e., Man-in-the-Middle (MITM), Replay, and Denial of Service (DoS) attacks). In this work, we propose a cloud-based, intelligent security framework, viz., “DroneNet-Sec", that provides network-edge connectivity and computation security for drone video analytics to defend against common attack vectors in UAV systems. The proposed framework includes three main research thrusts: (i) a secure hybrid testbed management approach that synergizes simulation and emulation via an open-source network simulator (NS3) and a research platform for mobile wireless networks (POWDER), (ii) an intelligent and dynamic decision algorithm based on machine learning to detect anomaly events without degrading performance in a real-time FANET deployment, and (iii) a web-based experiment control module featuring a graphical user interface to assist experimenters in the execution/visualization of repeatable and high-scale UAV security experiments. Our performance evaluation experiments in a holistic hybrid testbed show that our proposed security framework successfully detects anomaly events and effectively protects containerized task execution in drone video analytics in a lightweight manner.
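As a stand-in for the framework's learned anomaly detector (the actual decision algorithm is based on machine learning over FANET telemetry), a simple sliding-window z-score detector sketches the idea; the window size and threshold below are illustrative:

```python
import statistics

def detect_anomalies(samples, window=20, z_thresh=3.0):
    """Flag telemetry samples whose z-score against a sliding baseline
    exceeds a threshold, e.g. a latency spike during a DoS attack."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu = statistics.fmean(baseline)
        sigma = statistics.pstdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_thresh:
            flagged.append(i)  # record the index of the anomalous sample
    return flagged

# A steady link metric followed by a sudden spike is flagged.
spikes = detect_anomalies([9, 11] * 10 + [30])
```

A learned detector replaces the fixed threshold with a model trained on normal and attack traffic, which is what lets it adapt without degrading real-time performance.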

Sponsored by the National Science Foundation (Award CNS-1950873)

Hosted by the University of Missouri