
Sensor Research: SparkFun Qwiic ToF Imager

By Mary Mark and Dror Margalit


The SparkFun Qwiic ToF Imager - VL53L5CX is a 64-pixel Time of Flight (ToF) sensor. With a measuring range of up to 4 meters, it provides an 8x8 or 4x4 grid of distance readings to nearby objects.


Setting the sensor up

To get the sensor up and running, we connected it to 3.3V and ground, and its SDA and SCL to the corresponding pins on the Arduino (A4 and A5 on the Nano). Then we downloaded the SparkFun VL53L5CX Arduino Library and ran the FastStartup example, and the sensor started producing distance readings (in millimeters) to nearby objects. As we experimented with it, we noticed that it updated very slowly. Its default sample rate was one reading per second, so we changed it to a faster rate to get better results.
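A minimal setup sketch along these lines, based on the calls documented in the SparkFun VL53L5CX library (the 15 Hz value here is just one example of a faster ranging frequency, not the exact number we used):

```cpp
#include <Wire.h>
#include <SparkFun_VL53L5CX_Library.h>  // SparkFun VL53L5CX Arduino Library

SparkFun_VL53L5CX myImager;
VL53L5CX_ResultsData measurementData;   // holds one frame of distance readings

void setup() {
  Serial.begin(115200);
  Wire.begin();  // I2C over the Qwiic connector (SDA/SCL = A4/A5 on the Nano)

  if (myImager.begin() == false) {
    Serial.println("Sensor not found - check wiring. Freezing.");
    while (1);
  }

  myImager.setResolution(8 * 8);      // use the full 8x8 grid of zones
  myImager.setRangingFrequency(15);   // speed up from the slow default rate
  myImager.startRanging();
}
```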



Visualizing

Once we had the data from the sensor, we could visualize it in p5. To do that, we formatted the data so that a comma separates each measurement and a newline follows every 64 measurements. We then sent this data over serial to p5, which split each line into an array of 64 measurements. The array values were mapped and used to build a grid in which each shape corresponds to one zone of the sensor. When doing so, we had to make sure the order in which the data is sent to p5 matches the order in which it is parsed and displayed; otherwise, the image on the p5 canvas could come out flipped or scrambled.
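The Arduino side of that format could look roughly like this sketch (continuing the setup above; distance_mm is the readings array in the SparkFun library's results struct):

```cpp
void loop() {
  if (myImager.isDataReady() == true) {
    if (myImager.getRangingData(&measurementData)) {
      // Print all 64 readings on one line, separated by commas,
      // so p5 can split each line into an array of 64 values.
      for (int i = 0; i < 64; i++) {
        Serial.print(measurementData.distance_mm[i]);
        if (i < 63) Serial.print(",");
      }
      Serial.println();  // the newline marks the end of one frame
    }
  }
  delay(5);
}
```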


One issue we ran into when visualizing the data is that when the sensor does not find any object within its range, it has no valid reading to report. Consequently, when we pointed it at empty space, the visualization would “remember” the last object that was within 4 meters and not change. To solve this, it might be worth either placing a background surface within range or explicitly coding what should happen when a zone is out of range.
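One way to code in that out-of-range behavior on the Arduino side, sketched under the assumption that the library's results struct exposes a per-zone target_status field where 5 marks a valid measurement (per ST's documentation; treat the exact status codes as an assumption):

```cpp
// Replaces the printing loop above: substitute a sentinel value when a zone
// has no valid target, so p5 can treat it as "out of range" instead of
// holding on to the last reading.
for (int i = 0; i < 64; i++) {
  int distance = measurementData.distance_mm[i];
  if (measurementData.target_status[i] != 5) {  // 5 = valid measurement (assumed code)
    distance = 4000;  // clamp invalid zones to the 4 m maximum range
  }
  Serial.print(distance);
  if (i < 63) Serial.print(",");
}
Serial.println();
```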


Potential use

While we chose to visualize the data, we realized that the potential of such a sensor goes beyond mere visualization. For example, it could be used for object detection on robots or in other physical computing art installations (e.g., Danny Rozin’s mechanical mirrors). Additionally, because the sensor captures coarse depth patterns, it could also be used for more complex object recognition with machine learning.


P5 Sketch (with Arduino code)
