Projects

Spector: OpenCL benchmarks for FPGA

High-level synthesis tools allow programmers to use OpenCL to create FPGA designs. Unfortunately, these tools have a complex compilation process that can take several hours to synthesize a single design. Understanding the design space and guiding the optimization process is crucial, but requires a significant amount of design space data that is currently unavailable or difficult to generate. To solve this problem, we have developed Spector, an OpenCL FPGA benchmark suite. We outfitted each benchmark with a range of optimization parameters (or knobs), compiled thousands of unique designs using the Altera OpenCL SDK, and recorded their corresponding performance and utilization characteristics. These benchmarks and results are completely open-source and available on our repository.
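The knob-based exploration described above can be sketched as follows. The knob names and values here are hypothetical placeholders, not Spector's actual benchmark parameters; the idea is simply to enumerate every combination of knob settings so each one can be handed to the synthesis toolchain.

```python
import itertools

# Hypothetical optimization knobs for one benchmark; the real knobs
# (and their ranges) differ for each Spector benchmark.
knobs = {
    "unroll_factor": [1, 2, 4, 8],
    "simd_lanes": [1, 2, 4],
    "compute_units": [1, 2],
}

def enumerate_designs(knobs):
    """Yield every combination of knob settings as a dict."""
    names = sorted(knobs)
    for values in itertools.product(*(knobs[n] for n in names)):
        yield dict(zip(names, values))

designs = list(enumerate_designs(knobs))
print(len(designs))  # 4 * 3 * 2 = 24 unique designs to synthesize
```

In practice each generated design would be compiled (a multi-hour synthesis run per point) and its performance and resource-utilization numbers recorded alongside the knob settings.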

We published and presented this work at the ICFPT 2016 conference in Xi'an, China.



Maya Archaeology: Tunnel Mapping


Many Maya archaeological sites are fragile and not open to the public. We are experimenting with data collection methods to help create 3D visualizations. To enable fast real-time scanning, we are building upon mobile technologies and RGB-D sensors such as the Microsoft Kinect, Intel RealSense, the Google Tango tablet, and the NVIDIA Jetson TX2 board. For the 2016 Guatemala deployment, we developed a basic 3D reconstruction application on the Google Tango and collected data at the excavation sites. For the 2017 deployment, we built a prototype scanning device: a backpack carrying a laptop and batteries, connected to an external tablet with lights and sensors.

More information is available on the Engineers for Exploration webpage.
Here is the corresponding poster, presented at the UCSD Research Expo 2016; related videos are below.




Building a 3D scanner prototype for the 2017 season:


Collecting data in the archaeological site of El Zotz in Guatemala, field season 2016:

Testing custom 3D reconstruction software on the Google Tango, in Anza-Borrego mud caves, before the 2016 season:




KinectFusion on FPGA


This work is based on KinectFusion, a project developed by Microsoft Research. With a Kinect camera, you can reconstruct your environment in 3D in real time simply by holding the camera and moving around. However, this program requires a modern GPU, which consumes a lot of power. We want to run it on more power-efficient hardware and ultimately bring 3D reconstruction to embedded systems. We are modifying Kinfu, the open-source version of KinectFusion, to run on an FPGA using high-level synthesis tools such as the Altera OpenCL SDK. The program is divided into three parts: Iterative Closest Point (ICP) for camera motion tracking, Volumetric Integration (VI) to build the 3D model, and Ray Tracing for screen rendering. We have implemented the ICP algorithm on an FPGA, making a hybrid GPU/FPGA application that runs in real time, and we are now optimizing VI to run efficiently on the FPGA.
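To give a feel for the Volumetric Integration step, here is a minimal sketch of the textbook TSDF (truncated signed distance function) voxel update that KinectFusion-style systems use: each voxel keeps a running weighted average of per-frame distance measurements. This is an illustrative simplification in Python, not our FPGA implementation, and the parameter values are arbitrary.

```python
def update_tsdf(tsdf, weight, measured_sdf, trunc=0.1, max_weight=100.0):
    """Fuse one signed-distance measurement into a voxel's running average.

    tsdf, weight: the voxel's current state; measured_sdf: the signed
    distance from the voxel to the observed surface for this frame.
    """
    # Truncate the measurement to the band [-trunc, +trunc] around the surface.
    d = max(-trunc, min(trunc, measured_sdf))
    # Weighted running average; cap the weight so the model can still adapt.
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)
    return new_tsdf, new_weight

# Example: an empty voxel (weight 0) takes the first measurement directly.
t, w = update_tsdf(0.0, 0.0, 0.05)
print(t, w)  # 0.05 1.0
```

Because the update is independent per voxel and purely arithmetic, VI is a natural candidate for a deeply pipelined FPGA datapath, which is what motivates moving it off the GPU.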

We published and presented our work at the ICFPT 2014 conference in Shanghai.



This video presents the project and was created as part of a course requirement: