My name is Connor Brooks, and I am currently in my senior year of undergraduate study at Western Kentucky University. I am pursuing a major in Computer Science with a minor in Mathematics. My interests include robot motion planning and control, multi-robot coordination, and computer vision. To get in touch, send me an email at: email@example.com.
In this project, I worked with Dr. Michael Galloway to create an indoor autonomous micro aerial vehicle (MAV) using a multi-layer architecture with modular hardware and software components. We used environmental sensors, including ultrasonic sensors, light detection and ranging (LiDAR) modules, and inertial measurement units, to acquire the environmental information necessary for autonomous flight. Our three-layered system combined a modular control architecture with distributed on-board computing to create fully abstracted layers of control, enabling each layer to be developed and tested independently. Experimental results demonstrated autonomous hovering, obstacle avoidance, and flight data recording.
A paper I wrote on this project was presented at Information Technology: New Generations 2016 and can be viewed here.
I wrote my Honors Capstone Experience/Thesis on this project, which can be viewed at http://digitalcommons.wku.edu/stu_hon_theses/631/.
I have also presented research on this project at the ACM Mid-Southeast 2015 Fall Conference and WKU's 2016 Student Research Conference.
In Fall 2016, I began a research project with Dr. Michael Galloway studying cloud management of robotic fleets. Our project focuses on creating a hub, or base station, for Robots-as-a-Service (RaaS) operations. This includes creating a custom cloud middleware for handling resources, a user interface for service requests, and a mechanism for communicating with available robots. The goal is for a user to log in to a web-based service, select from multiple kinds of data to request, enter any required information, and then receive the requested data after the mission has been completed. This requires selecting an available robot to complete the mission, communicating the mission to that robot, autonomously completing the mission while recording data, and uploading the data to the hub once the robot has returned to its base station. The hub must then perform any required processing and make the data available to the user. I am responsible for the architecture of the cloud middleware, including setting up communication protocols and identifying and implementing the virtual appliances necessary for successful operation of the hub.
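To illustrate the robot-selection step described above, here is a minimal sketch of how a hub might pick a robot for a mission. This is not the project's actual middleware code; the `Robot` fields and the battery-based tiebreak are hypothetical simplifications for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    available: bool
    battery: float                # remaining charge, 0.0-1.0
    capabilities: set = field(default_factory=set)  # e.g. {"camera", "lidar"}

def select_robot(robots, required_capabilities):
    """Pick an available robot that can perform the mission,
    preferring the one with the most remaining battery."""
    candidates = [r for r in robots
                  if r.available and required_capabilities <= r.capabilities]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r.battery)

# hypothetical fleet: only "bravo" is both available and fully capable
fleet = [
    Robot("alpha", True, 0.40, {"camera"}),
    Robot("bravo", True, 0.85, {"camera", "lidar"}),
    Robot("charlie", False, 0.95, {"camera", "lidar"}),
]
chosen = select_robot(fleet, {"camera", "lidar"})
```

In practice the hub would also weigh distance to the mission area and queue missions when no robot qualifies, but the basic filter-then-rank pattern is the same.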
I have presented research on this project at the ACM Mid-Southeast 2016 Fall Conference.
For my Artificial Intelligence class project, I worked on creating a tool for learning about Markov Decision Processes (MDPs) and Partially-Observable Markov Decision Processes (POMDPs). My project allows a user to create an environment, specifying features such as the size of the environment, whether movement in each state is deterministic or stochastic, the reward at each state, and the starting and ending states. Once a user has created an environment, they can select discount (gamma) and convergence-threshold values to be used by a value-iteration algorithm, which determines an optimal policy for the environment by solving the underlying MDP.
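The value-iteration step can be sketched in a few lines. This is a generic textbook version, not the tool's actual code; the `transition` and `reward` signatures and the toy corridor example are illustrative assumptions.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, threshold=1e-6):
    """Compute an optimal policy for an MDP via value iteration.

    transition(s, a) -> list of (probability, next_state) pairs
    reward(s) -> immediate reward for being in state s
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best expected value over actions
            best = max(sum(p * (reward(s2) + gamma * V[s2])
                           for p, s2 in transition(s, a))
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < threshold:            # converged
            break
    # extract the greedy policy from the converged value function
    policy = {s: max(actions,
                     key=lambda a: sum(p * (reward(s2) + gamma * V[s2])
                                       for p, s2 in transition(s, a)))
              for s in states}
    return V, policy

# toy example: a 4-state corridor with reward only at the right end
states = [0, 1, 2, 3]
actions = ["left", "right"]
def transition(s, a):
    return [(1.0, max(0, s - 1) if a == "left" else min(3, s + 1))]
def reward(s):
    return 1.0 if s == 3 else 0.0

V, policy = value_iteration(states, actions, transition, reward)
# the optimal policy moves right, toward the reward
```

In the tool, the user-selected gamma and threshold plug directly into the two parameters above: gamma controls how far-sighted the policy is, and the threshold decides when iteration stops.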
Users can run a simulation of an agent moving through the environment using the calculated policy. Additionally, the user can make the environment partially observable for the agent. In this case, the agent attempts to localize itself using range sensors that report the distance to the nearest wall. The user can also specify the probability that these sensors work correctly for any given reading. When running a partially-observable simulation, the user can select whether the agent should use the Most-Likely State heuristic or the Q-MDP heuristic to approximate a solution to the POMDP. During the partially-observable simulation, a heatmap displayed on the environment shows the agent's belief state at each move.
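The two pieces underlying that simulation, a Bayesian belief update from a noisy sensor and Q-MDP action selection over the resulting belief, can be sketched as follows. This is a simplified illustration, not the tool's code; the `obs_prob` model (sensor correct 90% of the time) and the Q-value table are made-up examples.

```python
def update_belief(belief, observation, obs_prob):
    """Bayesian belief update after a sensor reading.

    belief: dict mapping state -> probability
    obs_prob(observation, state) -> likelihood of that reading in that state
    """
    new_belief = {s: obs_prob(observation, s) * p for s, p in belief.items()}
    total = sum(new_belief.values())
    if total == 0:
        return belief          # reading impossible under the model; keep prior
    return {s: p / total for s, p in new_belief.items()}

def qmdp_action(belief, actions, Q):
    """Q-MDP heuristic: weight each state's fully-observable Q-values
    by the current belief and pick the action with the best expectation."""
    return max(actions,
               key=lambda a: sum(p * Q[(s, a)] for s, p in belief.items()))

# toy corridor: the sensor reports the agent's position, correct 90% of the time
def obs_prob(obs, state):
    return 0.9 if obs == state else 0.05

belief = {0: 1/3, 1: 1/3, 2: 1/3}
belief = update_belief(belief, 1, obs_prob)   # belief concentrates on state 1

Q = {(0, "left"): 1.0, (0, "right"): 0.0,
     (1, "left"): 0.0, (1, "right"): 1.0,
     (2, "left"): 0.0, (2, "right"): 1.0}
action = qmdp_action(belief, ["left", "right"], Q)
```

The Most-Likely State heuristic would instead take `argmax` over the belief and act optimally for that single state; Q-MDP keeps the whole distribution, which is exactly what the heatmap visualizes.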
To demo the project, please visit the GitHub repository here.
During the Fall 2015 semester, I led the Artificial Intelligence Special Interest Group (AI SIG) of the WKU chapter of ACM. We spent the semester developing a simple artificial life simulation with a graphical interface, featuring entities that must regularly collect three different kinds of nutrients to survive. The behavior of each entity could be customized using simplified versions of various optimization techniques, such as genetic algorithms. Developing this simulation allowed the group to learn about genetic algorithms and artificial life simulations in a visual way.
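As a flavor of the kind of genetic algorithm the group worked with, here is a minimal sketch. It is not the simulation's code: the genome layout (a few floating-point weights per entity) and the toy fitness function are illustrative assumptions.

```python
import random

def evolve(population, fitness, generations=50, mutation_rate=0.1):
    """Evolve a population of genomes (lists of floats) with a simple
    genetic algorithm: keep the fitter half, then refill the population
    with one-point-crossover children plus Gaussian mutation."""
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate
                     else g for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

random.seed(0)
# hypothetical genome: three weights controlling how strongly an entity
# seeks each of the three nutrients
population = [[random.random() for _ in range(3)] for _ in range(20)]
fitness = lambda g: -sum((w - 0.5) ** 2 for w in g)   # toy objective
best = evolve(population, fitness)
```

In the actual simulation, fitness came from watching an entity survive (or starve) on screen, which is what made the learning visual.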
Code for this project is available upon request.
For my Software Engineering semester-long project in Fall 2015, I worked with a team to develop two completely autonomous ground robots that would search for a target and communicate with one another when the target was found. We built the robots from scratch, selecting frames, motors, and all needed hardware. We used both an Arduino Mega board and a Raspberry Pi 2 for processing, as well as four ultrasonic sensors and a camera on each robot. Our robots were able to avoid obstacles and search randomly for targets of a specified color and size using open-source computer vision software to process frames from the camera. Once one of the robots found the target, a message was sent to the other robot through their WiFi connections, and both robots shut down.
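The color-and-size filtering idea behind the target search can be sketched in a library-agnostic way. We used open-source computer vision software on the robots; the version below uses plain NumPy, and the HSV bounds and minimum-size threshold are illustrative, not the values from the project.

```python
import numpy as np

def find_target(frame_hsv, lower, upper, min_pixels=500):
    """Detect a colored target in an HSV image by thresholding.

    frame_hsv: H x W x 3 array of HSV pixel values
    lower, upper: per-channel bounds describing the target color
    Returns the (row, col) centroid of matching pixels, or None when the
    match is too small to be the target (the size filter).
    """
    mask = np.all((frame_hsv >= lower) & (frame_hsv <= upper), axis=2)
    if mask.sum() < min_pixels:          # too few pixels: not our target
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# synthetic test frame: a deep-blue square on a black background
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[30:70, 40:80] = (115, 200, 200)    # illustrative HSV for deep blue
centroid = find_target(frame,
                       lower=np.array([100, 150, 50]),
                       upper=np.array([130, 255, 255]))
```

On the robots, the centroid's horizontal offset from the frame center is what would steer the chassis toward the target, while the size filter rejects small blue patches in the background.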
A video can be seen here demonstrating the robots' capabilities as they search for a large, deep blue target (note: one of the robots had a motor starting to burn out while we were filming the video, so it mostly just spins in circles). A major component of this project was the development and management of project documentation, so a large amount of documentation was created. This documentation is available upon request.
In our 24-hour project at Cat Hacks (University of Kentucky's Hackathon) in Spring 2016, we utilized a Leap Motion Controller to sense motion in 3-Dimensional space. We used Wolfram Mathematica to transform the user's input in real time into dynamically visualized plots. Using this data, we provided options to:
1) Draw 3-Dimensional curves and analyze these curves
2) Draw 2-Dimensional curves and analyze these curves
3) Draw a signature and store it for use in PDF files
4) Use Mathematica's API to train a classifier on multiple users' signatures, then have it classify each new signature as belonging to one of the existing users
The first two functionalities provided tools for analyzing curves, and the last two could potentially be expanded to include forgery detection.
Code for this project is available on my GitHub here.
While taking Honors Advanced Computational Problem Solving during my sophomore year, I worked with a partner to create a version of the popular arcade game "Bomberman" using Wolfram Mathematica. My partner was responsible for the graphical component of the game and the overall layout of the program, while I focused on the AI opponents and the mechanics of the actual gameplay. I developed five different AI opponents using various strategies, all based on heuristic methods for estimating the strength of board positions, then using breadth-first search to find the strongest nearby position while minimizing the strength of the opponents' positions. We also developed several different maps to demonstrate the effectiveness of the AI on a variety of gameboards.
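The position-selection idea can be sketched as a bounded breadth-first search over walkable tiles. The project itself was written in Mathematica and scored positions using bomb and opponent heuristics; the Python version below with a plain `score` callback is a simplified stand-in.

```python
from collections import deque

def best_nearby_position(board, start, score, max_depth=5):
    """Breadth-first search over walkable tiles, returning the reachable
    position (within max_depth moves) with the highest heuristic score.

    board: 2-D grid where True marks a walkable tile
    score(pos) -> heuristic strength of standing at pos
    """
    rows, cols = len(board), len(board[0])
    frontier = deque([(start, 0)])
    visited = {start}
    best = start
    while frontier:
        (r, c), depth = frontier.popleft()
        if score((r, c)) > score(best):
            best = (r, c)
        if depth == max_depth:
            continue                      # don't expand past the search horizon
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and board[nr][nc] \
                    and (nr, nc) not in visited:
                visited.add((nr, nc))
                frontier.append(((nr, nc), depth + 1))
    return best

# open 3x3 board where strength increases toward the bottom-right corner
board = [[True] * 3 for _ in range(3)]
strength = lambda pos: pos[0] + pos[1]
best = best_nearby_position(board, (0, 0), strength, max_depth=4)
```

In the game, the score function is where the five opponents differed: each weighted blast danger, loot, and opponent proximity differently, while the BFS machinery stayed the same.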
Code for this project is available on my GitHub here (Wolfram Mathematica must be installed to run).
Since Spring 2015, I have worked as a Tutor for WKU's Computer Science Department. I tutor students primarily in introductory Java and Python classes, as well as an Object-Oriented Data Structures in Java class.
Ogden College Ambassadors:
I work as an ambassador for WKU's Ogden College of Science & Engineering. This involves attending recruiting events to talk to potential students interested in STEM majors at WKU, as well as representing Ogden College at student orientation and registration events. I have held this position since Fall 2015.
NASA Langley Research Center:
During the summer of 2016, I interned in the Research Directorate of NASA Langley Research Center. I worked in the Safety-Critical Avionics Systems Branch, developing computer vision applications for a project building a safety-centric middleware for autonomous unmanned aerial systems. To read more about the project, visit the project page here.
During the summer of 2015, I interned at the Kentucky Mesonet, a network of weather-monitoring stations based at WKU. I helped complete an overhaul of their website's entire front-end and researched state-of-the-art data visualization for weather services. The graphs and tables I developed to display data can be seen on their live data pages, here.
ACM AI SIG:
I am the leader of the Artificial Intelligence Special Interest Group for WKU's chapter of the Association for Computing Machinery. This involves leading group projects each semester that help the group members learn about techniques and problems within the field of Artificial Intelligence. I have held this position since Fall 2015.
Hack the Hill Planning Committee:
As of Fall 2016, I serve on a committee planning WKU's first official MLH hackathon. The hackathon will be held in November 2016 as a weekend event where coders and developers from across the region come together to build whatever they can invent and compete for prizes. The website for Hack the Hill can be viewed here.