---

University of California, Santa Barbara
Department of Electrical and Computer Engineering

---

EE Capstone Projects Page

ECE 188A/B: 2011–2012

Instructor: Dr. Ilan Ben-Yaacov

188A Schedule: Mon/Wed 2:00-2:50pm, PHELPS 1431

---

 

Projects Info:

Welcome to the EE Capstone Projects Page!  There are a number of exciting EE Capstone projects that student groups are working on during the 2011-2012 academic year.  Each project is sponsored either by a research group at UCSB or by one of our industry partners.  Each student group consists of 2-4 students.  While groups are independently responsible for working on and completing their projects, each group is assigned a mentor, typically from the sponsoring organization, to provide technical guidance and assistance.  Student groups work on their projects throughout the entire 2011-2012 academic year and then present their work at the 2012 ECE/CS Capstone Presentation Day on June 7.  The 2011-2012 EE Capstone Projects are described below:


Project Descriptions:

 

1.  Sponsor: Prof B.S. Manjunath’s Research Group

Student Team: Brandon Gomez, Jon Waltman, Jay Wright

Mentors:  Prof B.S. Manjunath, Carter De Leo

 

Project Description: HCI with Microsoft Kinect Sensors

As camera sensors, computation, and storage have decreased in price, interest has grown in the deployment of large-scale camera networks consisting of hundreds or thousands of nodes monitoring a large geographic area. Part of our work in Dr. Manjunath’s Vision Research Laboratory focuses on the challenges of processing such large volumes of video data to extract useful information. To this end, we’ve installed an experimental camera network consisting of over one hundred sensors covering parts of the UCSB campus. However, in addition to developing new computer vision algorithms to operate on the network data, it is also important to consider how the information is presented for consumption by a human operator. With so many different camera views and hours of data, it is impossible to simply play the videos and expect a user to find potentially interesting segments in a timely manner.

Addressing this problem requires new approaches to Human-Computer Interaction (HCI). This includes methods of displaying information when not limited to monitors on a desktop, as well as less traditional input methods that allow more natural exploration of complex data. Our laboratory has a variety of equipment that could prove useful, including projectors, large monitors, eye trackers, and several recently purchased Microsoft Kinect sensors. The Kinect device contains a standard color video camera calibrated with an infrared structured-light depth camera, which allows fast separation of a user from the background as well as accurate estimation of human pose, i.e., how the body and limbs are oriented in space. The device also includes a microphone array that can be used for speech recognition and sound-source localization. We would like to combine the output of several of these devices working together to create an interactive, gesture-driven display system tailored to the understanding of the video data coming from our large-scale camera network. As these devices are new to our laboratory, this should be an exciting project with room for experimentation and creativity.
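To give a flavor of the depth-based processing involved, the sketch below segments a user from the background with a simple depth threshold and reduces the result to a crude cursor position. It operates on a plain depth array, so the frame source is a placeholder; a real implementation would pull frames from the Kinect driver (e.g., libfreenect or the Microsoft SDK), and the threshold values are illustrative assumptions.

    # Sketch: depth-threshold user segmentation of a Kinect depth frame.
    # The random frame below is a stand-in for real driver output, and the
    # near/far thresholds are illustrative assumptions, not tuned values.
    import numpy as np

    NEAR_MM, FAR_MM = 500, 2000   # keep pixels between 0.5 m and 2.0 m

    def user_mask(depth_mm):
        """Boolean mask of pixels likely belonging to the user."""
        return (depth_mm > NEAR_MM) & (depth_mm < FAR_MM)

    def user_centroid(mask):
        """Centroid of the mask, usable as a crude gesture cursor."""
        ys, xs = np.nonzero(mask)
        return None if xs.size == 0 else (xs.mean(), ys.mean())

    if __name__ == "__main__":
        depth = np.random.randint(0, 4096, (480, 640))  # placeholder frame
        mask = user_mask(depth)
        print("user pixels:", int(mask.sum()), "centroid:", user_centroid(mask))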


2.  Sponsor: Special Technologies Laboratory (STL)

Student Team: Jeff Imamaru, Kevin Lee, Di Li

Mentors: Kirk Miller, Dale Turley

 

Project Description: System-on-a-Chip Color Processing

New CMOS sensors with built-in microprocessors could open up sophisticated RGB color processing to mobile, low-power vision systems and permit processing of 10-bit data before conversion to conventional 8-bit data for export via USB.  When processed properly, the red, green, and blue channels from a color sensor can reveal valuable information about an object's true color, or about the infrared content of the scene; commercial USB cameras, though, do not expose the on-chip functionality of these sensors.  A camera board that uses a low-power microcontroller and nonvolatile memory to configure an SoC camera chip, and that gives the user easy access to the color-correction matrix, gamma correction, JPEG encoding, and other on-chip functions, would allow development of hardware and algorithms that exploit these ubiquitous sensors for unique applications.
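As a rough sketch of the pipeline such a board would expose, the following applies a 3x3 color-correction matrix and a power-law gamma to 10-bit sensor data before quantizing to 8 bits. The matrix and gamma values are illustrative assumptions, not taken from any particular sensor's datasheet.

    # Sketch: RGB color correction and gamma applied to 10-bit sensor data,
    # then quantized to 8 bits, mirroring the on-chip pipeline described above.
    # The matrix and gamma values are illustrative, not from a real datasheet.
    import numpy as np

    CCM = np.array([[ 1.60, -0.40, -0.20],   # corrected R from raw R, G, B
                    [-0.30,  1.50, -0.20],   # corrected G
                    [-0.10, -0.50,  1.60]])  # corrected B
    GAMMA = 1.0 / 2.2

    def process(raw10):
        """raw10: (H, W, 3) array of 10-bit values. Returns 8-bit RGB."""
        rgb = raw10.astype(np.float64) / 1023.0        # normalize 10-bit data
        rgb = rgb @ CCM.T                              # color-correction matrix
        rgb = np.clip(rgb, 0.0, 1.0) ** GAMMA          # gamma correction
        return (rgb * 255.0 + 0.5).astype(np.uint8)    # quantize to 8 bits

    if __name__ == "__main__":
        frame = np.random.randint(0, 1024, (480, 640, 3))
        print(process(frame).shape, process(frame).dtype)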


3.  Sponsor: Solid State Lighting Services, Inc. (SSLS)

Student Team: David Cosenza, Seth Danielson, Alfredo Torres

Mentors: Morgan Pattison, Daniel Feezel

 

Project Description: Spectro-gonio-photometer

This project involves building a tool to collect angularly resolved optical power and spectral power density data for LED packages and replacement lamps.  Such instruments typically use a swing arm that rotates around the source, carrying a fiber-optic probe connected to a spectrometer.  The tool should be large enough to accommodate sources as large as replacement lamps, and would also capture the angular distribution of light from the sources.  There may be market demand for this type of source testing, and demand for other important types of LED lighting testing, such as reliability testing, failure analysis, and total luminous flux measurement, could make this an area of entrepreneurial activity.
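For an axially symmetric source, total luminous flux follows from the angularly resolved data by integrating the measured intensity over the sphere: Φ = ∫ I(θ) 2π sin θ dθ, for θ from 0 to π.  A minimal numerical sketch, assuming intensity samples in candela at evenly spaced swing-arm angles:

    # Sketch: total luminous flux from angularly resolved intensity data,
    # assuming an axially symmetric source measured by the swing arm.
    #   flux = integral over 0..pi of I(theta) * 2*pi*sin(theta) dtheta
    import numpy as np

    def total_flux(theta_deg, intensity_cd):
        """theta_deg: swing-arm angles (0 = optical axis); intensity in candela.
        Returns luminous flux in lumens via trapezoidal integration."""
        theta = np.radians(theta_deg)
        return np.trapz(intensity_cd * 2.0 * np.pi * np.sin(theta), theta)

    if __name__ == "__main__":
        # Illustrative Lambertian-like source: I(theta) = 100*cos(theta) cd
        angles = np.linspace(0.0, 90.0, 19)
        intensities = 100.0 * np.cos(np.radians(angles))
        print(f"{total_flux(angles, intensities):.1f} lm")  # ~100*pi = 314 lm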


4.  Sponsor: Solid State Lighting Services, Inc. (SSLS)

Student Team: Sidhant Bhargava, Ben Chang, Taylor Umphreys

Mentors: Morgan Pattison, Jim Honea

 

Project Description: Smart Phone Light Switch

A “smart switch” could be built to control lights; it would be controlled by a smart phone and installed inline (like the 'clapper').  Such a product could enable app developers to create a huge range of tools for effectively controlling lighting - proximity switches, remote control of lighting, daylight sensors, etc.  It could also displace the proprietary lighting control systems on the market today, which see little adoption because they are too complicated and expensive.
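As one sketch of how a phone app might command such a switch over the home network, the server below accepts one-line "on"/"off" commands over TCP.  The text protocol and port number are hypothetical choices for illustration, not part of any existing product.

    # Sketch: a minimal on/off command server such a smart switch might run.
    # The one-line text protocol and the port number are hypothetical choices.
    import socketserver

    RELAY_STATE = {"on": False}

    class SwitchHandler(socketserver.StreamRequestHandler):
        def handle(self):
            cmd = self.rfile.readline().strip().lower()
            if cmd in (b"on", b"off"):
                RELAY_STATE["on"] = (cmd == b"on")
                # A real device would drive the relay GPIO here.
            self.wfile.write(b"on\n" if RELAY_STATE["on"] else b"off\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 9550), SwitchHandler) as srv:
            srv.serve_forever()   # a phone app connects and sends "on"/"off"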


5.  Sponsor: Prof Luke Theogarajan’s Research Group

Student Team: Laurel Hopkins, Bassel Ihsan, Taishi Kato

Mentor: Prof Luke Theogarajan

 

Project Description: Glucose Monitoring System

For diabetics, obtaining accurate blood glucose measurements is key to determining the appropriate insulin dose. Blood glucose readings can be altered by ambient temperature, altitude, and humidity. By employing the capabilities of smart phones, such variables can easily be taken into account, providing a more accurate measurement. The goal of this project is to combine the functionality of Android smart phones with the convenience of in-home glucose testing. In addition to providing the blood glucose reading, an app will be created that stores previous test values and is also capable of emailing the results to the patient and/or doctor.
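A minimal sketch of what the environmental compensation might look like, assuming a simple linear correction.  The coefficients below are placeholders with no clinical validity; real corrections would come from the meter's calibration data.

    # Sketch: environmental compensation of a raw glucose reading using
    # phone-sensor data. Coefficients are illustrative placeholders only
    # and have no clinical validity.
    REF_TEMP_C = 25.0
    TEMP_COEFF = 0.002      # fractional error per degree C (hypothetical)
    ALT_COEFF = 0.00001     # fractional error per meter of altitude (hypothetical)

    def corrected_glucose(raw_mg_dl, temp_c, altitude_m):
        """Scale a raw mg/dL reading using ambient temperature and altitude."""
        factor = 1.0 + TEMP_COEFF * (temp_c - REF_TEMP_C) + ALT_COEFF * altitude_m
        return raw_mg_dl / factor

    if __name__ == "__main__":
        print(round(corrected_glucose(110.0, temp_c=35.0, altitude_m=1500.0), 1))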


6.  Sponsor: Prof Forrest Brewer’s Research Group

Student Team: Alec Dibble, Daniel Kouba

Mentor: Prof Forrest Brewer

 

Project Description: Sigma Delta Embedded Reconfigurable Platform

Control systems are becoming ubiquitous.  From ABS brakes to thermostats, we interact with them on a daily basis.  Unfortunately, the tools to develop these systems are slow, have high power requirements, and are cost-prohibitive for educational and research use.  Currently, dynamical system control is commonly taught using the Simulink® simulation and design suite running on a PC, with an external hardware interface used to communicate with the devices under control.

The goal of the Sigma Delta Embedded Reconfigurable Platform (SDERP) is to provide a high-speed, low-power, and low-cost platform for control and signal processing experiments.  The control algorithms run on a field-programmable gate array (FPGA) interfaced to an ARM processor.  By using an FPGA as the controller, high speeds are possible because the FPGA is clocked significantly faster than the incoming data stream.  The ARM processor can dynamically reprogram the FPGA’s algorithms and log data from it without modifying the loaded bitstream.  This framework is advantageous because it can be reconfigured on the fly without requiring time-intensive logic synthesis for every experiment revision.  The processor provides a web interface that allows users to set up their experiments from any internet-enabled device.  A MATLAB®/Simulink® interface will also be provided, allowing a user to write control algorithms on the computer and port them to the device.
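As an illustration of the web-interface idea, the sketch below shows the kind of endpoint the ARM processor could expose so that a filter parameter is changed without logic resynthesis.  The /coeff route and the write_fpga_register() helper are hypothetical stand-ins for the real memory-mapped interface.

    # Sketch: a tiny HTTP endpoint on the ARM processor for updating an FPGA
    # filter parameter at run time. The route and register helper are
    # hypothetical illustrations, not the project's actual interface.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    def write_fpga_register(addr, value):
        """Placeholder for a memory-mapped register write into the FPGA fabric."""
        print(f"FPGA reg 0x{addr:04x} <- {value}")

    class ControlHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            if url.path == "/coeff":
                params = parse_qs(url.query)
                write_fpga_register(int(params["addr"][0], 0),
                                    int(params["value"][0], 0))
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        # e.g. GET /coeff?addr=0x10&value=42 from any browser on the network
        HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()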

One of the novel aspects of this project is that the FPGA hosts a custom-made filter framework that processes a 1-bit, sigma-delta-encoded stream from the analog-to-digital converter (ADC).  This allows a large number of filters to be synthesized on the FPGA and also decreases the routing cost between the filters in the design.
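A small software model of this idea, assuming a first-order sigma-delta modulator and a boxcar (moving-average) decimation filter standing in for the FPGA filter framework:

    # Sketch: software model of a 1-bit sigma-delta stream and a simple
    # boxcar decimation filter, the kind of structure the FPGA filter
    # framework would implement in hardware.
    import numpy as np

    def sigma_delta_modulate(x):
        """First-order sigma-delta modulator: x in [-1, 1] -> +/-1 stream."""
        integ, fb = 0.0, 0.0
        out = np.empty_like(x)
        for i, sample in enumerate(x):
            integ += sample - fb                     # integrate the error
            out[i] = 1.0 if integ >= 0.0 else -1.0   # 1-bit quantizer
            fb = out[i]                              # feed the decision back
        return out

    def decimate(bits, factor=64):
        """Average blocks of the 1-bit stream to recover the input signal."""
        n = (len(bits) // factor) * factor
        return bits[:n].reshape(-1, factor).mean(axis=1)

    if __name__ == "__main__":
        t = np.linspace(0.0, 1.0, 64 * 256)
        stream = sigma_delta_modulate(0.5 * np.sin(2 * np.pi * 4 * t))
        print("recovered samples:", np.round(decimate(stream)[:4], 3))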

The project will be completely open source.  All the code and printed circuit board files will be posted online to make it easy for other universities, companies, and hobbyists to experiment using the platform.  The ARM processor and FPGA reside on a single board computer (SBC) that can be inexpensively obtained.

 

---
