My name is Lawrence, and I recently graduated with a Bachelor's in Computer Science and a Bachelor's in Mathematics from the University of Chicago. I love to program, read about deep learning, travel, play sports, and much more! I'm excited about the capabilities of deep learning, and I stay up to date on research papers and industry applications. My recent interests include federated learning, LLM fine-tuning, and video captioning. I hope to be a startup founder working in deep learning. More importantly, I want to stay healthy and have caring and supportive friends and family. Click on the other tabs to learn more about me.
I worked with Dr. Lingxiao Wang at the Toyota Technological Institute at Chicago (TTIC) on a private federated machine learning algorithm. Dr. Wang developed the theoretical proofs of the algorithm's privacy guarantee, while I ran experiments demonstrating its efficacy. My role involved repurposing PyTorch code from related literature for our algorithm, as well as writing the experiments section of the paper. I co-authored a paper that was accepted to the Privacy Regulation and Protection in Machine Learning Workshop at ICLR 2024, and I presented our work at the workshop. We plan to submit to a conference in the future.
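The specific algorithm and its guarantees are in the paper; purely as an illustration of the general flavor of private federated learning, here is a generic differentially private averaging step in PyTorch. This is not the paper's algorithm, and clip_norm and sigma are illustrative placeholders.

```python
# Generic sketch of one differentially private federated averaging round.
# NOT the algorithm from our paper; clip_norm and sigma are placeholders.
import torch

def private_average(client_updates, clip_norm=1.0, sigma=0.5):
    """Clip each client's update to clip_norm, average, then add Gaussian noise."""
    clipped = []
    for update in client_updates:  # each update: a flattened 1-D tensor of deltas
        scale = torch.clamp(clip_norm / (update.norm() + 1e-12), max=1.0)
        clipped.append(update * scale)
    avg = torch.stack(clipped).mean(dim=0)
    # Noise scale shrinks with more clients, as in standard DP-FedAvg analyses.
    noise = torch.normal(0.0, sigma * clip_norm / len(client_updates), size=avg.shape)
    return avg + noise
```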
Hewlett Packard Enterprise
Skills
Java
Kubernetes
GitHub Workflows
Agile Development
PostgreSQL
Jenkins
Unit Testing
Grafana
Micrometer
Prometheus
REST APIs
JaCoCo Test Coverage
Humio Log Analysis
At Hewlett Packard Enterprise (HPE), I developed code for a Kubernetes microservice on an agile team. I carried the same responsibilities as a regular member of the team: I attended standup meetings every other morning and worked on multiple projects simultaneously.
I took initiative to contribute as much as I could during my internship. At the beginning, I proactively reached out to IT when my laptop was delayed. When I was waiting on updates for my projects and had nothing to work on, I asked my manager for a new assignment. Towards the end of my internship, I dedicated a lot of time to writing documentation so that my team could understand my code after I left.
My projects included: developing a GitHub workflow that displayed the test coverage of the codebase; writing a program that checks for errors in the microservice databases (sketched below); deploying a Grafana dashboard and adding new Prometheus metrics to it; and adding new parameters to the microservice's REST APIs.
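The database checker itself was part of the Java microservice; as a hypothetical illustration of the idea only, here is what such a check could look like in Python. The table, column, invariant, and connection details are all placeholders.

```python
# Hypothetical sketch of a database consistency check; the table, column,
# and invariant are placeholders. The real implementation lived in a Java
# microservice, not this script.
import psycopg2

conn = psycopg2.connect(host="test-server", dbname="service_db",
                        user="reader", password="...")  # placeholder credentials
with conn, conn.cursor() as cur:
    # Example invariant: every record should have a non-null owner.
    cur.execute("SELECT id FROM resources WHERE owner_id IS NULL;")
    orphans = cur.fetchall()
    if orphans:
        print(f"Found {len(orphans)} records violating the invariant: {orphans}")
```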
There were many steps involved in getting code to production. I wrote unit tests for all of my code, then deployed it to test servers using Jenkins. I viewed the logs in Humio and verified the data in the test servers' PostgreSQL databases to make sure my code was functioning correctly. After testing, I created a merge request with comments explaining the changes and showing that the code worked. My teammates reviewed my merge requests and sometimes suggested changes, which I then implemented.
I contributed significantly to my team, writing over 1,000 lines of code that were deployed to production.
Argonne National Laboratory
Skills
ROS
Python
Gazebo Simulations
MoveIt Motion Planning
URDFs
Mask R-CNN
PyTorch
Matplotlib
At Argonne, I worked with Dr. Young Soo Park and PhD student Derek Vasquez to develop and program a robot in a Gazebo simulation. The simulation was the first step in a project to use a real-life version of the robot to automate chemical experiments in a laboratory.
There were many pieces to put together to program a working robot in the simulation. First, I constructed a URDF model of the robot, which specifies its visual appearance and collision bounding boxes. I added transmissions and controllers so the robot could be controlled in Gazebo by publishing to ROS topics. I used MoveIt for inverse kinematics, which allowed the arm to be positioned easily (a sketch of this is below). I also integrated a library that let the robot navigate and map terrain using its 3D sensors, and a computer vision library that let it locate objects marked with AR tags. The gripper had trouble picking up items because friction is finicky in Gazebo, so I used another library to programmatically attach the gripper to items. In the end, I programmed the robot to locate and pick up objects in the simulation, an important step towards automation with the robot in a real-life laboratory. My team was able to use the simulation to test procedures before running them in real life.
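For a sense of how MoveIt makes arm positioning easy, here is a minimal sketch using MoveIt's Python API (moveit_commander). The node name, planning group name, and target pose are placeholders, not values from my actual project.

```python
# Minimal sketch of positioning an arm with moveit_commander; the planning
# group "arm" and the target pose are placeholders.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("pick_demo")

arm = moveit_commander.MoveGroupCommander("arm")  # placeholder group name

# Target pose for the end effector; MoveIt solves the inverse kinematics.
target = Pose()
target.position.x = 0.4
target.position.y = 0.0
target.position.z = 0.3
target.orientation.w = 1.0

arm.set_pose_target(target)
arm.go(wait=True)        # plan and execute the motion
arm.stop()
arm.clear_pose_targets()
```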
I learned a lot about good coding practices during this experience. I wrote detailed documentation so that researchers without much robotics experience could adapt my project to their own purposes, and I learned from Dr. Park the standard practices for organizing the files and directories of robotics projects.
I also briefly worked on using machine learning to compute the effectiveness of catalysts. This was done by running a Mask R-CNN model on images of bubbles produced in chemical reactions to compute their size and quantity; larger and more numerous bubbles signified a more effective catalyst (a hedged sketch of the measurement idea follows). I graphed the model's training and testing loss and examined how they varied with the training set size and the number of epochs trained.
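As an illustration of measuring instance sizes with Mask R-CNN, here is a sketch using torchvision's off-the-shelf COCO-pretrained model; the actual project used a model trained on bubble images, and the random tensor stands in for a real photo of a reaction.

```python
# Sketch of estimating per-instance areas from Mask R-CNN outputs.
# The COCO-pretrained weights here are a stand-in for the bubble model.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # placeholder for a real reaction photo
with torch.no_grad():
    output = model([image])[0]

# Each mask is a (1, H, W) probability map; threshold it and count pixels
# to estimate the area of each detected instance.
areas = [(mask[0] > 0.5).sum().item() for mask in output["masks"]]
print(f"{len(areas)} instances detected, "
      f"mean area {sum(areas) / max(len(areas), 1):.1f} px")
```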
My experience at Argonne was eye-opening because I got hands-on experience applying computer science to different areas of scientific research. I also learned that the field of robotics is extremely complex and well-developed.
The HRI Lab at UChicago
Skills
ROS
Python
WebSockets
Microsoft Kinect
Misty Robot
Vector Robot
At the Human-Robot Interaction (HRI) Lab at the University of Chicago, I wrote ROS wrappers and demos for several robots. I collaborated closely with Spencer Ng and worked under the supervision of Dr. Sarah Sebo.
I wrote ROS wrappers so the robots could be controlled through ROS, which is the standard framework in HRI. Originally, the Vector robot was controlled through a Python library that connected to the robot, and the Misty robot was controlled through a REST API. I learned how to connect to these robots and exposed ROS topics that gave access to their full range of functionality (the general wrapper pattern is sketched below). I also worked with websockets for continuous data flow on the Misty robot. I wrote demos for the Vector robot that showed its ability to pick up a cube and detect touch and light.
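Here is a minimal sketch of the wrapper pattern: a rospy node that forwards messages from a ROS topic to a robot's REST API. The IP address, endpoint, and topic name are placeholders rather than Misty's actual interface.

```python
# Sketch of a ROS wrapper that forwards a topic to a robot's REST API.
# The address, endpoint, and topic are placeholders.
import requests
import rospy
from std_msgs.msg import String

ROBOT_IP = "192.168.1.100"  # placeholder address

def speak_callback(msg):
    # Translate the ROS message into an HTTP call to the robot.
    requests.post(f"http://{ROBOT_IP}/api/tts/speak", json={"text": msg.data})

rospy.init_node("misty_wrapper")
rospy.Subscriber("/misty/speak", String, speak_callback)
rospy.spin()
```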
Since I did not come into the lab with much robotics experience, I took the initiative to pick up the skills I needed. After the first lab meeting, I reached out to my coworker Spencer, who had more robotics knowledge, and set up weekly meetings where he walked through code he had written and we programmed together. Through these meetings, along with working through ROS tutorials in parallel, I picked up the skills needed to contribute to the lab, and I was soon able to work on a project independently.
I worked on programming the Misty robot independently, and I took initiative when it was malfunctioning. I noticed that some of the API calls for the robot weren’t working, so I did some debugging to try to fix the issue. Then, I posted on the forums for the robot, reached out to the manufacturers, and had a call with them to diagnose the problem. When the problem still wasn’t fixed, I reached out to my supervisor and arranged for the robot to get shipped back to be repaired. I took matters into my own hands rather than relying on my supervisor or coworker to do things for me.
I had fun working with the robots in the lab. My supervisor let Spencer and me borrow a Vector robot, and I programmed it to do some cool tricks. It was quite a spectacle among my friends. I also learned that even when a job seems difficult and intimidating when I begin it, I am able to pick up skills quickly and make great contributions.
A few years ago, I was really into chess. A friend told me that he had made a chess game that let two people play each other, and I decided that I had to one-up him by building a chess engine that searches for an optimal move on top of a playable chess game. I started by writing the game rule engine that determines which moves are viable; this was surprisingly complicated due to rules like en passant, checks, and castling. Next, I wrote the graphics interface using the Java graphics library and made it responsive to clicks from the user. Finally, I wrote a minimax search algorithm (sketched below) that uses the game rule engine to find valid moves, along with an evaluation function that adds up the values of one player's pieces and subtracts the values of the opponent's pieces, giving higher weight to pieces in the center of the board. The engine wasn't very efficient and could only search around five moves ahead in a reasonable time, but it could beat most of my friends, and it was really cool that it saw many good moves that I missed.
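The engine was written in Java; this is a minimal Python sketch of the same minimax idea. The GameState interface (legal_moves, apply, is_game_over, pieces) is a hypothetical stand-in for my game rule engine, not its actual API.

```python
# Minimax sketch; GameState and its methods are hypothetical stand-ins
# for the game rule engine described above.
def evaluate(state):
    """My piece values minus the opponent's, with a bonus for center squares.
    Assumed interface: state.pieces() yields (value, is_mine, in_center)."""
    score = 0.0
    for value, is_mine, in_center in state.pieces():
        weighted = value * (1.5 if in_center else 1.0)  # center bonus
        score += weighted if is_mine else -weighted
    return score

def minimax(state, depth, maximizing):
    """Return (best score, best move) searching `depth` plies ahead."""
    if depth == 0 or state.is_game_over():
        return evaluate(state), None
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for move in state.legal_moves():          # valid moves from the rule engine
        score, _ = minimax(state.apply(move), depth - 1, not maximizing)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_move = score, move
    return best, best_move
```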
I made a website that lets users test the legitimacy of day trading.
The first game, Beat the Market, lets users trade a single stock using historical stock data. When the user enters the game, the client fetches stock price data from the Alpha Vantage API, and 30 data points are displayed in a graph using ChartJS. A function that updates the stock graph and displayed metrics is called every second via setInterval(). I separated the engine code, which does the calculations, from the code that updates the display, and I wrote methods for buying and selling the stock and getting its price. This separation made it easy to implement automated trading algorithms by calling those methods.
The second game, Odd One Out, asks the user to distinguish real graphs of stock data from fake ones. This game needs data for multiple stock tickers within a short time period, which the Alpha Vantage API's rate limits prohibit. Thus, I used AJAX to fetch from a database of stock data stored in the backend, which was written in Flask (a sketch of such an endpoint is below). AJAX allowed the webpage to update smoothly without having to reload.
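As a minimal sketch of the kind of Flask endpoint the AJAX calls could hit: the route name, data file, and the shuffle-based way of generating "fake" series here are hypothetical placeholders, not necessarily how the real game builds its fakes.

```python
# Hypothetical Flask endpoint serving one real and several fake price series.
import json
import random
from flask import Flask, jsonify

app = Flask(__name__)

with open("stock_data.json") as f:  # placeholder path to pre-downloaded data
    STOCKS = json.load(f)           # e.g. {"AAPL": [price, price, ...], ...}

@app.route("/api/graphs")
def graphs():
    # Return one real price series plus shuffled "fakes" for Odd One Out;
    # shuffling is just an illustrative way to fake a series.
    ticker = random.choice(list(STOCKS))
    real = STOCKS[ticker]
    fakes = [random.sample(real, len(real)) for _ in range(3)]
    return jsonify({"real": real, "fakes": fakes})
```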
I created a reusable navigation bar that is rendered with JavaScript, and I used CSS and HTML techniques to organize the layout in an appealing way. I was able to combine multiple tools to create a functioning website.
I created and published an app in the Google Play Store with 1,000 downloads.
The app uses the Google ARCore library to measure the distance to objects with the phone camera.
When I first published the app, it received fewer than 10 downloads in a month. To increase downloads, I used AppRadar to perform App Store Optimization: I changed the app name, improved the description, created a new app icon, and added a video demo of the app. I also analyzed bug reports from user devices to fix bugs, and I added an in-app popup asking the user to review the app, which I tested with an internal testing release. Within 6 months, the app reached 1,000 downloads.
After my friend told me about a freelancing website called Pangaea, I checked it out to look for some programming projects to fill my free time. Some of the positions paid pretty well, so I decided to apply for them. I was politely rejected from the first position I applied for, but to my surprise, I ended up getting the other one I applied for. My boss, Marcelo, gave me some content posted on a bare-bones HTML website and asked me to move it to a WordPress website. Even though I originally sought to gain experience in programming, this project was more art than programming. In fact, I wrote zero lines of code. The experience was a good way to pass time though. I didn't mind sitting there and clicking on a bunch of text boxes and copying text into them and making the colors and images look good. At least I can say that I changed the server the domain name was pointed to, which is kind of technical I guess! Now, I know that I can make WordPress websites pretty easily, which might come in handy now and again.
Fashion Trend Detection
Skills
Keras
SLURM
GPU Computing
Cloud Computing
I’m currently working on a project to detect fashion trends using deep learning models. The approach is to classify images of clothing items from Instagram posts without supervision and detect changes in the frequency of items in each category over time.
First, I obtained a Keras implementation of the Mask R-CNN model pre-trained on clothing items and used it to crop 10,000 images of individual clothing items out of larger photos. Then, I used a Keras implementation of the SimCLR algorithm to cluster these images without supervision. At first, I tried to run the algorithm locally, but I kept running out of memory.
Thus, I used a node in a remote SLURM cluster with 60 GB of memory, which was able to hold all 10,000 images at once. I accelerated training with a GTX 1080 Ti GPU but ran into GPU memory limits, so I used Keras generators to load the data onto the GPU in chunks (sketched below). After training for 8 hours on the GPU, I displayed each clothing item's nearest neighbors according to the model. The neighboring items often belong to the same category, so there is promise in this technique.
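Here is a sketch of the chunked-loading idea using Keras's Sequence API. The file paths and image size are placeholders, the SimCLR training loop (random augmentations plus contrastive loss) is omitted, and depending on your TF/Keras version the image helpers may live under keras.preprocessing.image instead.

```python
# Sketch of batch-wise loading with a Keras Sequence so the full dataset
# never has to sit in GPU memory at once. Paths and sizes are placeholders.
import numpy as np
from tensorflow import keras

class ClothingSequence(keras.utils.Sequence):
    """Serves the cropped clothing images one batch at a time."""

    def __init__(self, paths, batch_size=32, size=(128, 128)):
        self.paths, self.batch_size, self.size = paths, batch_size, size

    def __len__(self):
        return int(np.ceil(len(self.paths) / self.batch_size))

    def __getitem__(self, idx):
        batch = self.paths[idx * self.batch_size:(idx + 1) * self.batch_size]
        imgs = [keras.utils.load_img(p, target_size=self.size) for p in batch]
        # Normalize to [0, 1]; the SimCLR loop applies its own augmentations.
        return np.stack([keras.utils.img_to_array(im) / 255.0 for im in imgs])
```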
I am still working to complete this project.
GraphTex
Skills
PyTorch
Flask
For a hackathon last year, I worked on a project to convert images of hand-drawn graphs to LaTeX code. We used a DETR object detection model to locate the nodes and edges of a graph, and we fed their locations into an algorithm that outputs LaTeX code. We put all this into the backend of a website. This project required putting together numerous parts: synthetically generating training data, training the DETR model, writing the algorithm that generates LaTeX code, and creating the website backend and frontend. We did all of this in under 24 hours in a team of 4, which I believe is an extremely rapid pace.
I am especially proud of my individual contribution to the project. I started by writing code that synthetically generates training data using NetworkX. NetworkX produces images of graphs and outputs the coordinates of their nodes, but there is considerable difficulty in translating those coordinates into data that an object detection model can train on (the idea is sketched below). After finishing this, I assisted with the website frontend and backend, and I wrote the algorithm that converts object detection results into LaTeX code. Because of my rapid pace of development, I took significant ownership of this challenging project, and I am proud of the result.
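Here is a hypothetical sketch of the synthetic-data idea: draw a random graph with NetworkX and convert its node positions into pixel-space bounding boxes. The graph parameters, image size, and node radius are placeholders, and in practice matplotlib's figure margins make the coordinate mapping trickier, which was exactly the hard part.

```python
# Sketch: generate a random graph image and node bounding boxes for an
# object detection model. Sizes and the coordinate mapping are simplified.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.gnp_random_graph(6, 0.4, seed=0)
pos = nx.spring_layout(G, seed=0)       # node -> (x, y), roughly in [-1, 1]^2

fig, ax = plt.subplots(figsize=(4, 4))
nx.draw(G, pos, ax=ax, node_size=300)
fig.savefig("graph.png")

# Map layout coordinates to pixel-space bounding boxes around each node.
W = H = 400   # assumed image size in pixels
R = 15        # assumed node radius in pixels
boxes = []
for node, (x, y) in pos.items():
    px = (x + 1) / 2 * W
    py = (1 - (y + 1) / 2) * H   # flip y: image origin is at the top-left
    boxes.append((px - R, py - R, px + R, py + R))
```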
UChicagoGPT
Skills
Hugging Face
I fine-tuned Llama 3 on UChicago subreddit data.
The first step of this project involved creating a dataset for the model to train on. I was saved from having to scrape the subreddit myself because there are online archives of subreddit posts available for download. Since Reddit threads have a tree-like structure, I used a DFS with backtracking over each comment tree to format the data as user-assistant conversations (sketched below). Once the dataset was created, I uploaded it to Hugging Face.
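Here is a sketch of the tree-flattening idea: each root-to-leaf path through a thread becomes one conversation. The Comment class is a hypothetical stand-in for however the archived Reddit data is represented, and alternating user/assistant turns down the path is one possible convention.

```python
# Sketch: DFS with backtracking turns each root-to-leaf path of a Reddit
# thread into one user/assistant conversation. Comment is a stand-in class.
class Comment:
    def __init__(self, text, replies=None):
        self.text = text
        self.replies = replies or []

def conversations(root, path=None, out=None):
    path = [] if path is None else path
    out = [] if out is None else out
    path.append(root.text)
    if not root.replies:  # leaf: the current path is one full conversation
        roles = ["user", "assistant"]
        out.append([{"role": roles[i % 2], "content": t}
                    for i, t in enumerate(path)])
    else:
        for reply in root.replies:
            conversations(reply, path, out)
    path.pop()            # backtrack before exploring sibling branches
    return out

thread = Comment("post", [Comment("reply A", [Comment("reply to A")]),
                          Comment("reply B")])
print(conversations(thread))  # two conversations, one per leaf
```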
Then, I used the Hugging Face TRL library to fine-tune Llama 3 on the data, running on a SLURM cluster with 4 A4000 GPUs in parallel (a sketch of the setup is below). Here is the trained model.
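A hedged sketch of what such a TRL fine-tuning script looks like: exact argument names vary across TRL versions, the dataset repo id is a placeholder for my uploaded dataset, and multi-GPU training comes from launching the script with accelerate (or torchrun) inside the SLURM job.

```python
# Hedged sketch of supervised fine-tuning with TRL; argument names vary by
# version, and the dataset id is a placeholder.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("username/uchicago-subreddit", split="train")  # placeholder id

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama3-uchicago",
                   per_device_train_batch_size=2),
)
trainer.train()
```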
I deployed the trained model on Runpod serverless, because serverless options are cost-effective for low-usage models.
The whole process of fine-tuning and deploying the model was extremely streamlined, which showed me how efficient modern ML tooling has become.