Many of my interests revolve around combining Computer Science and Psychology.
My PhD dissertation involved data mining two large Open Source communities, Mozilla and KDE. I parsed Git version control history, email mailing lists, and source code files, and stored the extracted information in the graph database Neo4j. The field of socio-technical congruence (STC; Cataldo et al. 2006) attempts to understand and model the synergies between technical development and communication: based on the shared technical dependencies among developers, it can identify gaps in the communication structure, helping to minimize future bugs and keep up with changes. Using STC as a starting point, I developed three novel algorithms to help developers better understand the social and technical structure of software development.
The three algorithms were implemented in a program and website called Jamii. The program provides all backend processing of data from source code version control and email mailing lists, as well as visualization. The Jamii website was then tested against the same two communities through two surveys that gathered qualitative and quantitative data. The first survey examined demographic and perceptual differences in socialization between beginner and veteran members. The second survey investigated the perceived usefulness of the website as well as its potential for improving the socialization process.
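The congruence measure at the heart of STC can be illustrated in a few lines. The matrices below are made-up toy data following the Cataldo et al. formulation, not output from Jamii:

```python
import numpy as np

# assignment[i, j] = 1 if developer i modified file j (toy data)
assignment = np.array([[1, 1, 0, 0],
                       [0, 1, 1, 0],
                       [0, 0, 0, 1]])
# dependency[j, k] = 1 if file j depends on file k (toy data)
dependency = np.array([[0, 1, 0, 0],
                       [1, 0, 1, 1],
                       [0, 1, 0, 0],
                       [0, 1, 0, 0]])
# actual[i, k] = 1 if developers i and k communicated, e.g. on a
# mailing list (toy data)
actual = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 0]])

# Coordination requirements: developers who work on technically
# dependent files ought to communicate.
requirements = assignment @ dependency @ assignment.T
np.fill_diagonal(requirements, 0)        # ignore self-coordination
required = requirements > 0

# Congruence: fraction of required coordination that actually occurred.
congruence = (required & (actual > 0)).sum() / required.sum()
print(f"socio-technical congruence: {congruence:.2f}")  # 0.33
```

A low value flags developer pairs who share technical dependencies but never communicated, which is exactly the kind of gap the algorithms surface.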
I've worked with Dr. Judy Vance and other graduate students on research using various configurations of bimanual haptic devices to examine their effectiveness in virtual assembly tasks. One of the haptic devices we use is the Geomagic Touch (formerly the SensAble Phantom Omni). This research aims to improve the training of assembly tasks through the use of haptic devices, virtual reality, and accurate physics calculations. We've also examined how much learning transfer takes place when this technology is used for virtual training. You can find more details, images, and videos here.
Over the summer of 2010, I coordinated and co-advised three Research Experience for Undergraduates (REU) students in a project to create a physical mockup of a shopping cart. This hardware was then used in a virtual reality user study utilizing the C6 at ISU. We have since updated the hardware to use an Arduino and a linear encoder for user control. You can find more details, images, and videos here.
My undergraduate psychology capstone involved two semesters of work. The first semester was spent designing an experiment, gathering the required materials, and making sure the design was solid. The second semester I ran the experiment and analyzed the data. The experiment focused on differences in usability and appeal between a 2D and a 3D Linux desktop. The hypothesis was that the 3D desktop would be both more appealing and more usable. Data collection involved participants taking surveys about their prior technology experience and desktop usage and then performing a variety of tasks while being timed. Additional surveys examined their thoughts on the differences between the two desktops. No usability difference was found between the 2D and 3D desktops; however, participants found the 3D desktop more visually appealing. The interesting result was that participants preferred the 3D desktop over the 2D desktop, but only if they experienced the 2D desktop first. I was able to explain this result using cognitive load theory from my cognitive psychology class. I presented a poster of my results at the Midwestern Psychological Association conference in May of 2008.
If you can't find a paper and would like a PDF, please contact me.
Recently I've been having a lot of fun with Kaggle, playing with the large datasets available through its competitions. Kaggle is a website that hosts data science and analytics competitions sponsored by companies and organizations around the world, which provide the data and the challenge. The goal is usually to work with a training set of known data and then generate predictions for a test set. My Kaggle profile shows the competitions that I've participated in. Most of the programming to extract features and run machine learning algorithms is done in Python with libraries such as pandas, scikit-learn, NumPy, and SciPy.
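The basic train/predict pattern can be sketched without any competition data. The snippet below uses synthetic points and a simple nearest-centroid classifier for illustration; actual entries relied on pandas features and scikit-learn models:

```python
import numpy as np

# Synthetic stand-in for a Kaggle setup: a labeled training set and an
# unlabeled test set (all numbers here are made up).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 2)),   # class 0 cluster
                     rng.normal(4, 1, (50, 2))])  # class 1 cluster
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.array([[0.1, -0.2], [3.8, 4.1]])

# Nearest-centroid classifier: predict the class whose training mean
# is closest to each test point.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
predictions = dists.argmin(axis=1)
print(predictions)  # one predicted label per test row
```

The submission step is then just writing `predictions` out in the CSV format the competition specifies.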
This was an individual homework assignment for the HCI 575: Computational Perception class. OpenCV (Open Source Computer Vision Library) is an extremely popular library for image analysis and manipulation. The assignment was to write code to track very small worms across various video clips. This was a real problem brought to the professor by biology graduate students at Iowa State. The source code can be found here on Github. The video below shows the original video on the left and the tracked worms with red "breadcrumb" trails on the right.
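The tracking idea, segment each frame, find each blob's centroid, and append it to a breadcrumb trail, can be sketched on synthetic frames. The real assignment worked on video with OpenCV, so treat this NumPy/SciPy version as an illustration only:

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for the video: one bright "worm" pixel drifting
# right across three 20x20 frames (made-up data).
frames = np.zeros((3, 20, 20))
for t in range(3):
    frames[t, 10, 5 + 3 * t] = 1.0

# Per frame: threshold into a mask, label the connected blobs, and
# append each blob's centroid to the trail.
trail = []
for frame in frames:
    mask = frame > 0.5
    labels, n = ndimage.label(mask)
    for centroid in ndimage.center_of_mass(frame, labels, range(1, n + 1)):
        trail.append(centroid)

print(trail)  # [(10.0, 5.0), (10.0, 8.0), (10.0, 11.0)]
```

Drawing the accumulated `trail` positions onto each output frame is what produces the breadcrumb effect in the video.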
This was an individual project for the ComS 610 seminar class on Nvidia CUDA programming. CUDA is Nvidia's platform for high-speed parallel computation on graphics processing units (GPUs). Image segmentation is the process of simplifying an image down to object boundaries. The source code can be found here on Github. An example of an original image and its segmented version can be seen below.
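The boundary-finding idea can be shown in a few lines of NumPy; the actual project implemented the heavy lifting as CUDA kernels on the GPU, so this is only a conceptual sketch on a made-up image:

```python
import numpy as np

# Tiny synthetic image: one bright square "object" on a dark background.
img = np.zeros((6, 6))
img[2:5, 2:5] = 1.0

# Two-region segmentation by thresholding, then mark pixels whose
# right or lower neighbor belongs to a different region as boundary.
regions = img > 0.5
boundary = np.zeros_like(regions)
boundary[:, :-1] |= regions[:, :-1] != regions[:, 1:]   # horizontal edges
boundary[:-1, :] |= regions[:-1, :] != regions[1:, :]   # vertical edges

print(int(boundary.sum()))  # 11 boundary pixels
```

Each pixel's comparisons are independent of the others, which is precisely what makes this kind of computation map well onto GPU threads.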
This was a semester-long group project for ComS 510X (Distributed Software Development). The project spanned three countries and universities: students from Iowa State University in the US, Jilin University in China, and the Universidade Federal da Bahia in Brazil participated. The project was to create an Android-based facial recognition program for storing names and contact information. At Iowa State, we worked on the front-end Android application, which interfaced with the backend facial recognition systems in China and Brazil. This project demonstrated the benefits and challenges of geographically distributed software development.
ThinkSpace is an online collaboration and instructional system for students and teachers developed at Iowa State University. As part of a group project for HCI 596 (Emerging Practices in Human-Computer Interaction), we spent the semester building an understanding of the users and how the navigation of the website could be improved. Low-fidelity prototypes were mocked up on paper and tested in class. Medium-fidelity prototypes were developed using the software Balsamiq. Evaluations were done with users (both teachers and students) along the way using the UserZoom and Justinmind software. Users were evaluated on a series of common tasks, which were timed as well as measured for errors. The System Usability Scale (SUS) and other questions were administered using Qualtrics. A final higher-fidelity prototype was created with OmniGraffle and presented to the developers and management of ThinkSpace. Our group focused on improvements to the tabbed navigation to ensure page consistency.
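SUS scoring follows a fixed formula (Brooke's original scheme): odd-numbered items contribute the rating minus one, even-numbered items contribute five minus the rating, and the total is scaled by 2.5 to give a 0-100 score. The responses below are made up for illustration:

```python
def sus_score(responses):
    """Score a 10-item SUS questionnaire (ratings 1-5) on a 0-100 scale."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # index 0, 2, ... are items 1, 3, ...
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # 82.5
```

Scores above roughly 68 are conventionally read as above-average usability, which makes SUS handy for comparing prototype iterations.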
As VR and gaming hardware advances, various off-the-shelf hardware can be used as inexpensive tracking and interaction devices. The Razer Hydra uses magnetic tracking to identify the position and orientation of the controllers. By taking one controller apart and attaching it to a pair of glasses, a user can inexpensively create a head-tracked viewpoint and an interaction wand. The Kinect from Microsoft combines an infrared (IR) depth camera and RGB camera to create a 3D view of the user. This information can then be processed to identify the skeletal and head position and orientation. Lastly, the Nintendo Wii Remote (Wiimote) can be used with a pair of LED glasses to track the user's head position and orientation. This work combines a wide variety of software developed by graduate and undergraduate students in Dr. Judy Vance's lab. You can find more details here. This hardware and software was demonstrated at IDETC-CIE 2013.
During my last three years as an undergraduate, I participated in the annual Mathematical Contest in Modeling competition. Teams of three members chose among the offered problems and modeled a solution in only four days. All three years, my team achieved an honorable mention. The following is the problem we picked in 2007 and the paper we submitted.
PROBLEM B: The Airplane Seating Problem
"Airlines are free to seat passengers waiting to board an aircraft in any order whatsoever. It has become customary to seat passengers with special needs first, followed by first-class passengers (who sit at the front of the plane). Then coach and business-class passengers are seated by groups of rows, beginning with the row at the back of the plane and proceeding forward. Apart from consideration of the passengers' wait time, from the airline's point of view, time is money, and boarding time is best minimized. The plane makes money for the airline only when it is in motion, and long boarding times limit the number of trips that a plane can make in a day. The development of larger planes, such as the Airbus A380 (800 passengers), accentuate the problem of minimizing boarding (and deboarding) time. Devise and compare procedures for boarding and deboarding planes with varying numbers of passengers: small (85-210), midsize (210-330), and large (450-800). Prepare an executive summary, not to exceed two single-spaced pages, in which you set out your conclusions to an audience of airline executives, gate agents, and flight crews. Note: The 2 page executive summary is to be included IN ADDITION to the reports required by the contest guidelines. An article appeared in the NY Times Nov 14, 2006 addressing procedures currently being followed and the importance to the airline of finding better solutions." The article can be seen here.
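One simple way to compare boarding procedures is a one-aisle, discrete-time simulation. The model below is a minimal sketch with made-up parameters (aisle of one passenger per row, a fixed number of ticks to sit), not the model from our submitted paper:

```python
import random

def boarding_time(order, seat_delay=3):
    """Ticks until everyone is seated. `order` lists each passenger's
    target row (1-indexed) in boarding order; aisle position r - 1 is
    beside row r, and each aisle cell holds one passenger."""
    aisle = {}                 # position -> (target_row, sit_ticks_left)
    queue = list(order)
    seated, t = 0, 0
    while seated < len(order):
        t += 1
        for pos in sorted(aisle, reverse=True):      # front of plane first
            row, sitting = aisle[pos]
            if pos == row - 1:                       # at own row: sit down
                if sitting == 1:
                    del aisle[pos]
                    seated += 1
                else:
                    aisle[pos] = (row, sitting - 1)
            elif pos + 1 not in aisle:               # step forward if clear
                del aisle[pos]
                aisle[pos + 1] = (row, sitting)
        if queue and 0 not in aisle:                 # next passenger enters
            aisle[0] = (queue.pop(0), seat_delay)
    return t

rows = 30
passengers = [r for r in range(1, rows + 1) for _ in range(6)]   # 6 abreast
back_to_front = sorted(passengers, reverse=True)
random_order = passengers[:]
random.Random(1).shuffle(random_order)
print("back-to-front:", boarding_time(back_to_front))
print("random:       ", boarding_time(random_order))
```

Even this toy model captures the key congestion effect: a passenger stowing luggage or sitting down blocks everyone queued behind them in the aisle.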
I spent two summers as an undergraduate researching machine learning at the University of New Mexico under the direction of Dr. Terran Lane. The first summer I learned the basics of machine learning and wrote some simple programs to make sure I understood reinforcement learning (Q-learning). One involved an agent traversing a grid world in order to reach a destination. That first summer I also attended the Association for the Advancement of Artificial Intelligence (AAAI) conference in Boston.
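That grid-world exercise can be sketched with a few lines of tabular Q-learning. This is a generic one-dimensional corridor illustration with made-up parameters, not my original code:

```python
import random

# Corridor of 6 states; the agent starts at 0 and the goal is state 5.
N = 6
actions = (1, -1)                        # step right / step left
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration
rng = random.Random(0)

for _ in range(500):                     # training episodes
    s = 0
    while s != N - 1:
        if rng.random() < eps:                               # explore
            a = rng.choice(actions)
        else:                                                # exploit
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == N - 1 else 0.0                 # reward at goal only
        # Standard Q-learning update: bootstrap on the best next action.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy should step right in every state.
policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

Watching the Q-values propagate backward from the goal, one state per successful episode at first, is what made the algorithm click for me.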
After attending the conference, I chose a research problem to focus on. I was interested in networking technology and found a paper by Justin Boyan and Michael Littman entitled "Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach," which explains how machine learning can be used to optimize packet routing in a computer network. After a subsequent summer of programming and research, I was able to replicate their results. I attended the AAAI conference again that second summer, this time in Vancouver.
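The heart of the Boyan and Littman approach is a Q-learning update on per-neighbor estimates of remaining delivery time: when node x sends a packet bound for d to neighbor y, it revises its estimate using y's own best estimate plus the queue and transmission delays just incurred. The node names, delays, and learning rate below are illustrative, not from their paper:

```python
eta = 0.5          # learning rate
q_time = 2.0       # time the packet waited in x's queue
s_time = 1.0       # transmission delay from x to y

# Q[node][dest][neighbor] = estimated remaining delivery time (made up)
Q = {
    "x": {"d": {"y": 10.0}},
    "y": {"d": {"z": 4.0, "w": 6.0}},
}

# One Q-routing update after x forwards a packet for d via y.
t = min(Q["y"]["d"].values())            # y's best estimate to reach d
old = Q["x"]["d"]["y"]
Q["x"]["d"]["y"] = old + eta * (q_time + s_time + t - old)
print(Q["x"]["d"]["y"])  # 10 + 0.5 * (2 + 1 + 4 - 10) = 8.5
```

Because each node routes greedily on these estimates while updating them, the network adapts its routing as load and topology change.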
The file below is a presentation I gave at Simpson College regarding this work.
REU Presentation: Can the Internet learn? (.pdf) - 2007
If you would like a copy of my unofficial undergraduate or graduate transcript, please contact me.